Hi,
I was wondering exactly how you pre-processed the images. I am training the model on the BP4D dataset and am not reproducing the results in the paper (my ICC is about 0.1 lower). I used dlib to estimate the 68 facial landmarks and then resized/cropped the images to 256x256. I also changed k = 7 to k = 5 in model_graph.py. I suspect the issue is in pre-processing, since my results do not match the AU centers in the BP4D training examples you provided, yet I believe my calculation of the AU centers from the facial landmarks is correct.
Thanks
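For reference, here is a minimal sketch of how I derive AU centers from the 68 dlib landmarks and rescale them after resizing. The landmark indices and midpoint rules below are my own assumptions for illustration, not necessarily the exact rules used in the paper:

```python
import numpy as np

def scale_landmarks(landmarks, orig_size, target_size=256):
    """Rescale (N, 2) landmark coordinates after resizing a square
    image from orig_size to target_size pixels."""
    return landmarks * (target_size / orig_size)

def example_au_centers(landmarks):
    """Illustrative AU centers from dlib's 68-point layout.
    These midpoint rules are assumptions, not the paper's definition."""
    # AU1 (inner brow raiser): midpoint of the inner brow points (21, 22)
    au1 = (landmarks[21] + landmarks[22]) / 2.0
    # AU12 (lip corner puller): the two mouth corners (48, 54)
    au12_left, au12_right = landmarks[48], landmarks[54]
    return {"AU1": au1, "AU12_left": au12_left, "AU12_right": au12_right}
```

If the crop is not square, the x and y coordinates would of course need separate scale factors.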
Yes, the problem might be with the data pre-processing. When cropping the images, affine transformations were also applied to register all images. The images were registered according to the nose and mouth positions, similar to the operation described in [1].
[1] Joint Action Unit localisation and intensity estimation through heatmap regression. BMVC 2018.
I am currently not allowed to release the code for the affine transformation. You may use MTCNN or another face alignment implementation to crop the images.
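The registration step described above can be approximated with a least-squares affine fit from detected facial key points (e.g. eyes, nose tip, mouth corners from MTCNN) to a fixed template. This is a minimal numpy sketch; the template coordinates are assumed placeholder values, not the ones used in the paper:

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2x3 affine matrix mapping src points (N, 2)
    onto dst points (N, 2); needs at least 3 non-collinear points."""
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])          # homogeneous coords (N, 3)
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # solve A @ M ~= dst
    return M.T                                     # 2x3 affine matrix

# Assumed template positions in a 256x256 crop:
# left eye, right eye, nose tip, left mouth corner, right mouth corner
TEMPLATE = np.array([[85.0, 100.0], [171.0, 100.0], [128.0, 140.0],
                     [96.0, 180.0], [160.0, 180.0]])
```

Given key points `pts` detected in a face image, `M = estimate_affine(pts, TEMPLATE)` yields a matrix you can pass to a warping routine such as `cv2.warpAffine(img, M, (256, 256))` to produce the registered crop.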