The RGB image (input_img) is used for dlib, which assumes RGB images.
The input to the age-gender-estimation model is a BGR image (cropped from img, not input_img; yes, it's confusing).
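A minimal sketch of the channel-order convention described above, using a synthetic NumPy array in place of a frame loaded with cv2.imread (which returns BGR). Reversing the last axis is equivalent to cv2.cvtColor(img, cv2.COLOR_BGR2RGB); the face box coordinates are hypothetical, for illustration only.

```python
import numpy as np

# Synthetic stand-in for an OpenCV frame: BGR channel order.
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[..., 0] = 255  # channel 0 is blue in BGR

# Reversing the channel axis converts BGR -> RGB (what dlib expects).
input_img = img[..., ::-1]

# The model crop is taken from img (BGR), not input_img, matching
# the BGR training images written by create_db.py.
x1, y1, x2, y2 = 10, 10, 74, 74  # hypothetical face box
face_bgr = img[y1:y2, x1:x2]
```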
It seems like you're feeding RGB images to the network in demo.py:
https://github.com/yu4u/age-gender-estimation/blob/master/demo.py#L83
But it looks like the model is trained on default-OpenCV BGR images:
https://github.com/yu4u/age-gender-estimation/blob/master/create_db.py#L55
Please clarify. Also, what image size did you use for the provided pretrained model (weights.28-3.73.hdf5)? Thank you in advance.