I'm working on fine-tuning img2pose. Following the steps you suggested in the GitHub issues, I fine-tuned from the "img2pose_v1.pth" model you provided. However, head pose estimation was worse after fine-tuning than before. My steps are as follows:
1. Use the 300W-LP annotations "300W_LP_annotations_train.txt" you provided on GitHub, and download the 300W-LP dataset from the official website.
2. Use json_loader_300wlp.py from your code to create "300W_LP_annotations_train.lmdb", "300W_LP_annotations_train_pose_mean.npy", and "300W_LP_annotations_train_pose_stddev.npy" by running "convert_json_list_to_lmdb.py".
3. In models.py, change rpn_batch_size_per_image to 2 (proposals) and box_detections_per_img to 4 (head samples).
4. Train on "300W_LP_annotations_train.lmdb" without augmentations, using "300W_LP_annotations_train_pose_mean.npy" and "300W_LP_annotations_train_pose_stddev.npy" as pose_mean and pose_stddev.
5. Train with lr = 0.001 for 2 epochs.
6. The other parameters are set as follows:
"--pose_mean", "./datasets/lmdb/WIDER_train_annotations_pose_mean.npy",
"--pose_stddev", "./datasets/lmdb/WIDER_train_annotations_pose_stddev.npy",
"--pretrained_path", "./models/img2pose_v1.pth",
"--workspace", "./workspace/",
"--train_source", "./datasets/lmdb/300W_LP_annotations_train.lmdb",
"--prefix", "trial_1",
"--batch_size", "2",
"--max_size", "1400",
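Put together, those arguments correspond to a launch command roughly like the following (the train.py entry-point name is my assumption; I have left out the learning-rate and epoch flags since I listed those values separately above):

```shell
# Sketch of the fine-tuning invocation, assuming train.py is the entry point.
python3 train.py \
    --pose_mean ./datasets/lmdb/WIDER_train_annotations_pose_mean.npy \
    --pose_stddev ./datasets/lmdb/WIDER_train_annotations_pose_stddev.npy \
    --pretrained_path ./models/img2pose_v1.pth \
    --workspace ./workspace/ \
    --train_source ./datasets/lmdb/300W_LP_annotations_train.lmdb \
    --prefix trial_1 \
    --batch_size 2 \
    --max_size 1400
```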
However, after 2 epochs, head pose estimation was worse, and the MAE on AFLW2000 is "Yaw: 20.656 Pitch: 17.178 Roll: 13.957 MAE: 17.264; H. Trans.: 0.179 V. Trans.: 0.363 Scale: 1.465 MAE: 0.669".
I don't know which step went wrong. I would appreciate it if you could help me.
Sorry for the late reply. Could you please try the same thing you did, but instead of using pose mean and stddev from 300W-LP, use the same one as you used for training (WIDER)?
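To illustrate why the statistics matter: the pretrained weights were trained to regress poses normalized with one set of mean/stddev values, so normalizing the fine-tuning targets with a different set shifts and rescales every regression target the head sees. A minimal NumPy sketch, using made-up statistics (the values are illustrative, not the contents of the real .npy files):

```python
import numpy as np

# Hypothetical normalization statistics for two datasets (illustrative only).
wider_mean, wider_std = np.array([0.0, 0.0, 0.0]), np.array([0.5, 0.4, 0.3])
lp_mean, lp_std = np.array([0.1, -0.2, 0.0]), np.array([0.9, 0.8, 0.6])

pose = np.array([0.3, -0.1, 0.2])  # one ground-truth pose target

# The pretrained model learned targets of the form (pose - mean) / stddev
# computed with the statistics it was originally trained with:
normalized_wider = (pose - wider_mean) / wider_std
# Normalizing with a different dataset's statistics yields different targets:
normalized_lp = (pose - lp_mean) / lp_std

# A prediction made in the mismatched normalized space, then denormalized
# with the original statistics, no longer recovers the true pose:
recovered = normalized_lp * wider_std + wider_mean
print(recovered)       # biased, not equal to pose
print(normalized_wider * wider_std + wider_mean)  # matching stats round-trip
```

Round-tripping with matching statistics recovers the pose exactly; mixing the two sets biases every prediction, which is consistent with the large MAE you saw.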