This project develops a convolutional neural network (CNN) to recognize human emotions from facial expressions captured through a webcam. The model is trained on the FER-2013 dataset, which consists of 48×48 grayscale images labeled across seven emotion categories: Angry, Disgust, Fear, Happy, Sad, Surprise, and Neutral.
- Real-time emotion recognition.
- Training with PyTorch on preprocessed image data.
- Use of data augmentation to enhance model robustness.
- Live detection and classification, with the predicted emotion and confidence level displayed on screen.
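A minimal PyTorch sketch of a CNN for this task (the layer sizes and the `EmotionCNN` name are illustrative assumptions, not the project's actual architecture): a small convolutional stack that maps a 48×48 grayscale face crop to logits over the seven emotion classes, with a softmax giving the per-class confidence.

```python
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    """Illustrative CNN: 48x48 grayscale input -> 7 emotion logits."""

    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),  # 48x48 -> 48x48
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 24x24
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 12x12
        )
        self.classifier = nn.Linear(64 * 12 * 12, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = EmotionCNN()
logits = model(torch.randn(4, 1, 48, 48))  # batch of 4 fake face crops
probs = torch.softmax(logits, dim=1)       # per-class confidence scores
```

Data augmentation (e.g. random horizontal flips or small rotations of the training crops) would typically be applied to the input batches during training to improve robustness.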
To run this project, you will need the following:
- Python 3.8 or above
- PyTorch 1.7 or above
- OpenCV-Python
- NumPy
- Matplotlib (for plotting training loss)
```
git clone https://github.com/Zaikur/EmotionRecognition.git
cd EmotionRecognition
pip install -r requirements.txt
```
- To build the required .npy files from the collected images, run: `python src/data/load_data.py`
- To start the training process, run: `python src/models/train_model.py`
- To execute the real-time emotion recognition, run: `python src/main.py`
- To plot the training loss over epochs, run: `python src/utilities/EpochPlotter.py`
- Thanks to the creators of the FER-2013 dataset for providing the data used in this project.
- Thanks to the PyTorch team for their comprehensive documentation and community support.