
PoseTrack18 results? #1

Closed
ybai62868 opened this issue Jan 9, 2019 · 9 comments


@ybai62868

Hi, your code is very well written, and I have also been working on PoseTrack (all 3 tasks) recently.
I fine-tuned on the PoseTrack 2018 dataset using MSRA's PyTorch code (Simple Baselines), with all hyperparameters taken from the paper.
Evaluating pose estimation on PoseTrack 2018 with ground-truth boxes, I get results that are worse than the ones you reported:
my eval results:

Loading data
# gt frames : 3902
# pred frames: 3902
Evaluation of per-frame multi-person pose estimation
saving results to ./out/total_AP_metrics.json
Average Precision (AP) metric:
Head   Shou   Elb    Wri    Hip    Knee   Ankl   Total
49.4   89.2   83.9   76.1   81.0   80.3   74.6   74.6

Could you explain how you fine-tuned the model? I would like to reproduce your results!
Thank you!
BTW, do you plan to implement the tracking part of the paper (Simple Baselines for Human Pose Estimation and Tracking)? I am working on that part...

@mks0601
Owner

mks0601 commented Jan 9, 2019

To train a model on the PoseTrack 2018 dataset, I first trained a model on the COCO dataset. You can download the pre-trained model from the README. After that, I renamed the snapshot files from snapshot_140* to snapshot_0* and placed them in output/model_dump/PoseTrack/. Finally, I ran python train.py --gpu 0-1 --continue after changing the dataset name to PoseTrack in config.py. No hyperparameters were changed in config.py.
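The renaming step above can be sketched as a small script. The exact snapshot filenames (snapshot_140.ckpt.index etc.) are assumptions based on typical TensorFlow checkpoint naming, so adjust the prefixes to whatever your model_dump directory actually contains:

```python
import os

def rename_snapshots(model_dir, old_prefix="snapshot_140", new_prefix="snapshot_0"):
    """Rename COCO-pretrained snapshot files (e.g. snapshot_140.ckpt.index)
    to snapshot_0.* so that `train.py --continue` resumes from epoch 0."""
    renamed = []
    for name in sorted(os.listdir(model_dir)):
        if name.startswith(old_prefix):
            new_name = new_prefix + name[len(old_prefix):]
            os.rename(os.path.join(model_dir, name),
                      os.path.join(model_dir, new_name))
            renamed.append(new_name)
    return renamed
```

If TensorFlow also keeps a `checkpoint` text file recording the latest snapshot name, that file may need updating as well.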

Unfortunately, I do not have a plan to implement the tracking part :(

@ybai62868
Author

@mks0601 Thank you for your detailed reply! After some quick debugging, I found that I had horizontally swapped the head-top and head-bottom keypoints by mistake, which was a silly error. After retraining, I get the following results:

Loading data
# gt frames : 3902
# pred frames: 3902
Evaluation of per-frame multi-person pose estimation
saving results to ./out/total_AP_metrics.json
Average Precision (AP) metric:
Head   Shou   Elb    Wri    Hip    Knee   Ankl   Total
88.0   89.2   84.1   76.2   81.1   80.5   75.0   82.4

@mks0601
Owner

mks0601 commented Jan 12, 2019

Sounds good!

@mks0601 mks0601 closed this as completed Jan 12, 2019
@lxtGH

lxtGH commented Sep 24, 2019

@mks0601 Hi! The PoseTrack dataset has 15 annotated joints (2 of COCO's joints are not annotated). How did you handle this?

@mks0601
Owner

mks0601 commented Sep 24, 2019

I just used the model pre-trained on COCO and set the loss to zero for the unannotated l_ear and r_ear joints.
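A minimal sketch of that idea, assuming COCO's 17-keypoint order where l_ear and r_ear sit at indices 3 and 4 (the indices are my assumption, not taken from the repo):

```python
import numpy as np

# Assumed COCO keypoint indices for the two joints PoseTrack does not annotate.
L_EAR, R_EAR = 3, 4

def mark_ears_invalid(joints):
    """Keep the COCO 17-joint layout but clear the validity flag
    (joints[:, 2]) for l_ear/r_ear, so their loss gets masked to zero."""
    joints = np.asarray(joints, dtype=np.float32).copy()
    joints[[L_EAR, R_EAR], 2] = 0.0
    return joints
```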

@Vhunon

Vhunon commented Feb 23, 2020

Where did you set the loss to zero in the code?

@mks0601
Owner

mks0601 commented Feb 23, 2020

I can't remember exactly, but please check L86 of main/gen_batch.py: joints[:,2] of those joints would be zero.

@Vhunon

Vhunon commented Feb 27, 2020

I couldn't figure out where these 2 keypoints get zero loss, but I think it's this line:
loss = tf.reduce_mean(tf.square(heatmap_outs - gt_heatmap) * valid_mask)

@mks0601
Owner

mks0601 commented Mar 5, 2020

I think so. The valid mask is generated from joints[:,2], so the mask values are zero for the unannotated joints.
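A NumPy sketch of that masked loss, mirroring the tf.reduce_mean line quoted earlier in the thread, under the assumption that heatmaps are shaped (num_joints, H, W):

```python
import numpy as np

def masked_heatmap_loss(heatmap_outs, gt_heatmap, joint_valid):
    """Per-joint MSE masked by the validity flags (joints[:, 2]).
    Joints with flag 0 contribute nothing to the loss, so the
    unannotated ears are effectively trained with zero loss."""
    # Broadcast the per-joint flag over each joint's H x W heatmap.
    valid_mask = np.asarray(joint_valid, dtype=np.float32).reshape(-1, 1, 1)
    return float(np.mean(np.square(heatmap_outs - gt_heatmap) * valid_mask))
```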
