Due to file-size limits and the offline setting of this project, we provide a cloud-disk URL, https://drive.google.com/drive/folders/1oZEPtZDxBExrJYFTR0dIOawengX3eACX?usp=sharing , in which the segmented faces, border images, and landmarks are stored.
All of the preprocessing for the border images and segmented faces is also implemented in data-seg-store.py.
With all of this pre-computed data in place, AU_Test.py can be used for prediction, and the model can be retrained with Aspect_based_train.py.
If you would like to retrain the model, ./train/ and ./valid/ need the same path structure as ./test/, and the landmarks should be detected with a tool to which you have access. In addition, the label files need to be added under ./Training_Set/, and likewise under ./Validation_Set/.
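The retraining layout above can be prepared with a short script. This is a sketch only: the subfolder names cropped_aligned and video are assumed to mirror the ./test/ layout described in the usage steps below; adjust them to match your copy of the repository.

```python
# Sketch: create the directory layout needed for retraining.
# The subfolder names under each split are assumptions based on the
# ./test/cropped_aligned/ and ./test/video/ paths mentioned in this README.
import os

def make_layout(root="."):
    # Mirror the ./test/ structure for ./train/ and ./valid/.
    for split in ("train", "valid", "test"):
        for sub in ("cropped_aligned", "video"):
            os.makedirs(os.path.join(root, split, sub), exist_ok=True)
    # Label files go under these two directories.
    os.makedirs(os.path.join(root, "Training_Set"), exist_ok=True)
    os.makedirs(os.path.join(root, "Validation_Set"), exist_ok=True)

if __name__ == "__main__":
    make_layout()  # creates the layout in the current directory
```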
Dependencies:
pytorch==1.1.0
numpy==1.16.4
opencv-python==188.8.131.52
Pillow==6.1.0
torchvision==0.4.1
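For convenience, the pins above can be collected into a requirements.txt. Note that on PyPI the PyTorch package is installed as torch, not pytorch; the opencv-python version is reproduced exactly as given in this README, so verify it against your environment before installing.

```
torch==1.1.0
numpy==1.16.4
opencv-python==188.8.131.52
Pillow==6.1.0
torchvision==0.4.1
```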
Usage for testing
1. Clone the repository
git clone https://github.com/jixianpeng/AU_predictor.git
cd AU_predictor
2. Download the landmarks, segmented faces, and border images from the cloud-disk URL above. Decompress them and place the data into the corresponding paths in this project, matching by name.
3. Manually add the cropped_aligned frames and the original videos to ./test/cropped_aligned/ and ./test/video/, without any changes.
The results will be stored in ./prediction/, with each output file corresponding to a file in the dataset by name.
After test prediction, the original predictions are stored in ./prediction/. Then, for smoothing, an optional post-processing step can be run as follows:
The predictions after this operation will be stored in ./post_prediction/.
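The README does not show the post-processing itself; as an illustration of the kind of temporal smoothing such a step typically performs on per-frame AU outputs, here is a hedged sketch using a sliding-window majority vote. The window size and the one-binary-prediction-per-frame format are assumptions, not the repository's actual method.

```python
# Illustrative sketch only: temporal smoothing of per-frame binary AU
# predictions with a sliding-window majority vote. The actual
# post-processing in this repository may differ.

def smooth(preds, window=5):
    """Majority-vote smoothing of a 0/1 sequence with an odd window size."""
    half = window // 2
    out = []
    for i in range(len(preds)):
        # Clamp the window to the sequence boundaries.
        lo, hi = max(0, i - half), min(len(preds), i + half + 1)
        seg = preds[lo:hi]
        # Emit 1 if strictly more than half of the window is active.
        out.append(1 if sum(seg) * 2 > len(seg) else 0)
    return out

if __name__ == "__main__":
    noisy = [0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1]
    print(smooth(noisy))
```

The vote removes isolated spikes and dropouts while preserving sustained activations, which is the usual goal of smoothing per-frame AU predictions before writing them out.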