Using MLP and CNN to Classify Human Emotions Evoked by Different Images
Images, as the most intuitive information medium, have a significant influence on people's emotions. Studying the categories of human emotion evoked by different images can help us build more user-friendly and intelligent interactive experiences in the era of big data. The convolutional neural network (CNN) is the deep learning architecture most commonly used for image emotion recognition, and it delivers very good classification performance. This project initially uses a multilayer perceptron (MLP), a lower-level neural network, to recognize the emotions of images on a small sample scale, evaluates model accuracy comprehensively through cross-validation, and analyzes how different multilayer network structures affect emotion recognition accuracy. The network is then applied separately to portrait-only and portrait-excluded datasets to evaluate its performance in recognizing emotions on data with distinctive features. To optimize performance, convolutional layers are added to the original model, so that a higher-level neural network achieves higher accuracy. The neural networks in this project are built with Keras in a TensorFlow virtual environment, a very convenient tool for constructing deep learning networks.
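The progression described above, from a baseline MLP to a model with added convolutional layers, can be sketched in Keras roughly as follows. This is a minimal illustration, not the project's actual architecture: the number of emotion classes, the input resolution, and the layer widths are all assumptions chosen for the example.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 7  # hypothetical number of emotion categories
IMG_SIZE = 48    # hypothetical input resolution (IMG_SIZE x IMG_SIZE RGB)

def build_mlp():
    # Baseline multilayer perceptron: flatten the pixels and
    # feed them through fully connected hidden layers.
    return keras.Sequential([
        layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3)),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

def build_cnn():
    # Same classification head, but convolutional and pooling layers
    # extract spatial features before the dense layers.
    return keras.Sequential([
        layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

model = build_cnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# A dummy batch just to show the prediction interface; real training
# would call model.fit on the labeled image datasets.
x = np.random.rand(4, IMG_SIZE, IMG_SIZE, 3).astype("float32")
probs = model.predict(x, verbose=0)
print(probs.shape)  # (4, NUM_CLASSES)
```

For the cross-validation step mentioned above, the same `build_mlp` or `build_cnn` factory would typically be called once per fold so that each fold trains a fresh model on its own training split.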