FER+

Input

(Image from https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data)

Shape: (1, 1, 64, 64)
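
As a reference, here is a minimal preprocessing sketch that produces an input of this shape from an arbitrary image file. It assumes a plain grayscale conversion and resize; the actual ferplus.py script may normalize or crop differently.

import cv2
import numpy as np

def preprocess(image_path):
    # Load as grayscale and resize to the 64x64 resolution the model expects.
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(image_path)
    img = cv2.resize(img, (64, 64))
    # Add batch and channel dimensions -> (1, 1, 64, 64).
    return img.astype(np.float32)[np.newaxis, np.newaxis, :, :]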

Output

  • Estimating emotion

Sample console output:

### Estimating emotion ###
 emotion: happiness

Example

happiness
surprise
sadness
anger
disgust
fear
contempt
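
As an illustration, here is a small sketch of how raw network scores could be mapped to one of the labels above. The label set and ordering are an assumption based on the list in this README (the FER+ label table used by the script may differ, e.g. it may also contain a neutral class); see ferplus.py for the actual mapping.

import numpy as np

# Assumed class order, taken from the list above.
EMOTIONS = ["happiness", "surprise", "sadness", "anger",
            "disgust", "fear", "contempt"]

def decode(scores):
    # Softmax over the raw scores, then pick the most likely emotion.
    scores = np.asarray(scores, dtype=np.float32).ravel()
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return EMOTIONS[int(probs.argmax())], float(probs.max())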

Usage

The ONNX and prototxt files are downloaded automatically on the first run. An Internet connection is required while the files are being downloaded.

For the sample image,

$ python3 ferplus.py

If you want to specify the input image, put the image path after the --input option.

$ python3 ferplus.py --input IMAGE_PATH

If you want to perform face detection in preprocessing, use the --detection option.

$ python3 ferplus.py --input IMAGE_PATH --detection

By adding the --video option, you can run inference on a video.
If you pass 0 as the VIDEO_PATH argument, the webcam input is used instead of a video file.
You can use the --savepath option to specify the output file to save.

$ python3 ferplus.py --video VIDEO_PATH --savepath SAVE_VIDEO_PATH
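
As a rough illustration of the webcam behavior described above (passing 0 to select the default camera), here is a hedged sketch of how the video source might be opened with OpenCV; the actual argument handling lives in ferplus.py.

import cv2

def open_video_source(video_arg):
    # "0" is treated as the default webcam; anything else as a video file path.
    source = 0 if str(video_arg) == "0" else video_arg
    capture = cv2.VideoCapture(source)
    if not capture.isOpened():
        raise RuntimeError(f"Cannot open video source: {video_arg}")
    return capture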

By adding the --model_name option, you can specify the model name, selected from "majority", "probability", "crossentropy", and "multi_target" (default: majority).

$ python3 ferplus.py --model_name majority

Reference

Framework

MS Cognitive Toolkit

Model Format

ONNX opset = 9
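
If you want to confirm the opset of a downloaded model yourself, the onnx package can read it directly. The file name below is an assumption based on the prototxt names listed under Netron; the first run of ferplus.py downloads the actual model files.

import onnx

model = onnx.load("VGG13_majority.onnx")  # assumed file name
for opset in model.opset_import:
    print(opset.domain or "ai.onnx", opset.version)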

Netron

VGG13_majority.onnx.prototxt
VGG13_probability.onnx.prototxt
VGG13_crossentropy.onnx.prototxt
VGG13_multi_target.onnx.prototxt