
Can you share your Manually_Annotated_file csv files? #10

Closed · Dian-Yi opened this issue Mar 8, 2022 · 10 comments

@Dian-Yi commented Mar 8, 2022

I tested on the AffectNet validation data, but I only get an accuracy of 0.5965 using enet_b2_8.pt.
Can you share your Manually_Annotated_file validation.csv and training.csv with me for debugging?

@av-savchenko (Owner)

I'm pretty sure that I cannot distribute parts of AffectNet, including these csv files. I did not change anything in the authors' version of these files. First of all, please check whether the other models are also worse than mine. Maybe you just forgot to set USE_ENET2=True, so that the preprocessing is not appropriate. BTW, the faces from the dataset are extracted by the script at the beginning of train_emotions.ipynb, which saves the images into the directories AFFECT_TRAIN_DATA_DIR and AFFECT_VAL_DATA_DIR.
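For context, here is a minimal sketch of what a USE_ENET2-style preprocessing switch could look like; the IMG_SIZE values and normalization statistics below are assumptions for illustration, not necessarily the exact code in train_emotions.ipynb:

from torchvision import transforms

# Hypothetical illustration of a USE_ENET2-style preprocessing switch;
# the actual transforms in train_emotions.ipynb may differ.
USE_ENET2 = True
IMG_SIZE = 260 if USE_ENET2 else 224  # assumed values, not confirmed by the repo

test_transforms = transforms.Compose([
    transforms.Resize((IMG_SIZE, IMG_SIZE)),
    transforms.ToTensor(),
    # ImageNet statistics, commonly used with EfficientNet backbones
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])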

@Dian-Yi (Author) commented Mar 8, 2022

Can you tell me your torch and Python versions?

@av-savchenko (Owner)

I have Python 3.8 and PyTorch 1.7.1. More importantly, timm is required to be 0.4.5. I do not see how this can help, though: if you have any version inconsistencies, you simply cannot load the model.
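A quick sanity check along these lines can confirm the environment before evaluating; the checkpoint file name is the one discussed in this thread:

import torch
import timm

# Sanity-check the environment; with an incompatible timm version the
# pickled model typically fails already at torch.load time.
print("torch:", torch.__version__)  # 1.7.1 in this thread
print("timm:", timm.__version__)    # must be 0.4.5 per the comment above

model = torch.load("enet_b2_8.pt", map_location="cpu")
model.eval()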

@Dian-Yi (Author) commented Mar 8, 2022

I changed the torch version and set USE_ENET2=True, but get a val_acc of 0.6145 using enet_b2_8.pt. It is still lower than your results, and I cannot find the error.
So could you compare your file with my validation csv and tell me whether they are the same?
validation.csv
Thank you very much.
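A simple way to check whether two copies of validation.csv are byte-identical (the file names below are placeholders):

import hashlib

def file_sha256(path):
    # Hash the file in chunks so large csv files do not need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder file names: compare your copy against the author's.
print(file_sha256("my_validation.csv") == file_sha256("author_validation.csv"))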

@av-savchenko (Owner) commented Mar 8, 2022

Yes, the validation.csv files are identical. You can check that the facial images are identical as well; please take a look at one example here:
006dbcccdcd992be19ab3a5751c24bcaf50ecb33d8ec781ae6d3f5c0

@Dian-Yi (Author) commented Mar 8, 2022

I changed IMG_SIZE=300 and the val_acc is 0.6255 without alignment using enet_b2_8.pt. It is a little lower than your result of 63.025.
So can you tell me your best IMG_SIZE?
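For reference, a minimal evaluation sketch for a given IMG_SIZE, assuming the validation faces were extracted into one subfolder per emotion class; the transforms and the directory name are placeholders rather than the notebook's exact code:

import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

IMG_SIZE = 300  # the value under discussion; 224/260/300 are worth comparing
tfm = transforms.Compose([
    transforms.Resize((IMG_SIZE, IMG_SIZE)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# AFFECT_VAL_DATA_DIR is a placeholder for the directory of extracted faces,
# assumed here to contain one subfolder per class.
val_ds = datasets.ImageFolder("AFFECT_VAL_DATA_DIR", transform=tfm)
loader = DataLoader(val_ds, batch_size=64)

model = torch.load("enet_b2_8.pt", map_location="cpu").eval()
correct = total = 0
with torch.no_grad():
    for x, y in loader:
        pred = model(x).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
print("val_acc:", correct / total)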

@Dian-Yi (Author) commented Mar 8, 2022

import math

import cv2
import numpy as np

def rotate_image(image, facial_landmarks):
    # Parse the ';'-separated landmark string into an (N, 2) array
    landmarks = np.array([float(l) for l in facial_landmarks.split(';')]).reshape((-1, 2))
    # Sum the eye landmark coordinates (note: these slices yield only 5 points each)
    left_eye_x = left_eye_y = right_eye_x = right_eye_y = 0
    for (x, y) in landmarks[36:41]:
        left_eye_x += x
        left_eye_y += y
    for (x, y) in landmarks[42:47]:
        right_eye_x += x
        right_eye_y += y
    # ...yet the sums are averaged over 6 points
    left_eye_x /= 6
    left_eye_y /= 6
    right_eye_x /= 6
    right_eye_y /= 6
    # Rotate the image so the line between the eye centers becomes horizontal
    theta = math.degrees(math.atan((right_eye_y - left_eye_y) / (right_eye_x - left_eye_x)))
    image_center = tuple(np.array(image.shape[1::-1]) / 2)
    rot_mat = cv2.getRotationMatrix2D(image_center, theta, 1.0)
    result = cv2.warpAffine(image, rot_mat, image.shape[1::-1], flags=cv2.INTER_LINEAR)
    return result

In the rotate_image function, landmarks[36:41] and landmarks[42:47] each have a length of 5, so they should be [36:42] and [42:48] if you want to divide left_eye_x (and the other sums) by 6. A corrected sketch follows below.
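For reference, a corrected computation of the eye centers with the full six-point slices (a sketch using a hypothetical eye_centers helper):

import numpy as np

def eye_centers(landmarks):
    # In the 68-point layout the left eye is points 36-41 and the right eye
    # is points 42-47, so the slices must be [36:42] and [42:48] (6 points each).
    left_eye = landmarks[36:42].mean(axis=0)
    right_eye = landmarks[42:48].mean(axis=0)
    return left_eye, right_eye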

@av-savchenko (Owner)

I do not rotate the image to get 63% accuracy. It is only necessary to feed in the cropped images, without additional pre-processing.
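A sketch of what such a crop looks like, using the bounding-box columns of the AffectNet annotation csv (the column names follow the dataset's Manually_Annotated format; treat this as an illustration rather than the notebook's exact script):

import cv2
import pandas as pd

df = pd.read_csv("validation.csv")
row = df.iloc[0]

# Crop with the bounding-box columns and feed the crop to the model as-is,
# with no rotation or alignment.
img = cv2.imread(row["subDirectory_filePath"])  # path relative to the image root
x, y = int(row["face_x"]), int(row["face_y"])
w, h = int(row["face_width"]), int(row["face_height"])
face = img[y:y + h, x:x + w]
cv2.imwrite("face_crop.jpg", face)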

@Dian-Yi (Author) commented Mar 8, 2022

Thank you very much, you are right: the cropped images are necessary, and I get an accuracy of 62.9%.

@av-savchenko (Owner)

OK. It is still strange that we got slightly different results, but I will close the issue.
