
What is the experimental setup for human mesh recovery? #49

Open
HospitableHost opened this issue Jun 25, 2023 · 9 comments

Comments

@HospitableHost

This table is from your paper.
In the paper, you mention that "SmoothNet is trained with the pose outputs from SPIN [22]. We test its performance across multiple backbone networks."
But you didn't describe the experimental configuration. Which dataset do the input poses come from, and what sliding-window size did you use here?
Also, did you evaluate the metrics on the test sets of the three datasets? When I run your eval code, the VIBE results differ from this table.
(Data from pw3d_vibe_smpl_test.npz and pw3d_gt_smpl_test.npz; the VIBE results are not affected by the SmoothNet pre-trained models.)

@HospitableHost
Author

In addition, I noticed that the data in data/poses/pw3d_vibe_smpl contains 37 sequences. But this does not match the 3DPW dataset, which has 24 sequences in the test set, 12 in the validation set, and 24 in the train set.

@ailingzengzzz
Contributor

Hi @HospitableHost ,

We used the 3DPW-SPIN data for training. The differences may come from the fact that we removed a few frames at the end of each sequence whose length cannot be evenly divided by the sliding window. Did you use our provided code and data for the test?
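For reference, a minimal sketch of that trimming step (this is not the repository's actual code; the window size of 32 is an assumption taken from the `--slide_window_size 32` flag used elsewhere in this thread):

```python
import numpy as np

def trim_to_window(seq, window_size=32):
    """Drop trailing frames that do not fill a complete sliding window."""
    n_windows = len(seq) // window_size
    return seq[: n_windows * window_size]

# Hypothetical example: a 100-frame sequence of 24 joints in 3D
poses = np.zeros((100, 24, 3))
trimmed = trim_to_window(poses, window_size=32)
print(trimmed.shape[0])  # 96 -- the last 4 frames are discarded
```

Trimming like this would explain small metric differences: the dropped tail frames are simply excluded from evaluation.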

As for the 37 sequences: some sequences contain two persons, and we simply split each of those into two single-person sequences.

@HospitableHost
Author

HospitableHost commented Jun 26, 2023

@ailingzengzzz Hi, I read your code carefully and tested with your provided code and data (data/poses/pw3d_vibe_smpl/...), so I don't know what causes the VIBE results to differ from your paper.

By the way, I still have three questions:
1. You used the SPIN outputs on the 3DPW train set for training, right?
2. Do the 37 sequences correspond to the 3DPW test set?
3. Could you please open-source the pretrained model 3DPW-SPIN-SMPL (not 3DPW-SPIN-3D)?

Best wishes!

@ailingzengzzz
Contributor

Hi @HospitableHost,

The answer to questions 1 and 2 is yes.
We explored 3DPW-SPIN-SMPL for SMPL (6D rotation representation) testing but found it inferior to 3DPW-SPIN-3D (see the paper). Thus, we only provide the 3D model for cross-modality testing.
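For readers unfamiliar with the 6D rotation representation mentioned above: it encodes a rotation as the first two columns of a rotation matrix, recovered via Gram-Schmidt (Zhou et al., "On the Continuity of Rotation Representations in Neural Networks"). A hedged sketch, not the repository's implementation:

```python
import numpy as np

def rot6d_to_matrix(x6):
    """Map a 6D rotation vector back to a 3x3 rotation matrix via Gram-Schmidt."""
    a1, a2 = x6[:3], x6[3:]
    b1 = a1 / np.linalg.norm(a1)          # normalize the first axis
    b2 = a2 - np.dot(b1, a2) * b1         # remove the b1 component from a2
    b2 = b2 / np.linalg.norm(b2)
    b3 = np.cross(b1, b2)                 # third axis completes the frame
    return np.stack([b1, b2, b3], axis=-1)

# The canonical 6D vector maps to the identity rotation
print(rot6d_to_matrix(np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0])))
```

This continuous representation is commonly preferred over axis-angle or quaternions when regressing SMPL joint rotations with a neural network.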

@HospitableHost
Author

HospitableHost commented Jun 28, 2023

Hi @ailingzengzzz
So have you figured out why the VIBE results are inconsistent with the paper?
I suspect there is something wrong with the data in the folder data/poses/pw3d_vibe_smpl.

@HospitableHost
Author

Besides, I ran the evaluation with this command:

```shell
CUDA_VISIBLE_DEVICES=7 python eval_smoothnet.py --cfg configs/pw3d_spin_3D.yaml --checkpoint data/checkpoints/pw3d_spin_3D/checkpoint_32.pth.tar --dataset_name pw3d --estimator spin --body_representation smpl --slide_window_size 32
```

The result is also different from your paper.

@ailingzengzzz
Contributor

Hi @HospitableHost ,

The results in Table 3 are calculated via 3D keypoint positions, following previous works (e.g., VIBE, PARE, HMR, etc.). They use a model to estimate SMPL parameters and transform them into 3D keypoint positions for MPJPE, PA-MPJPE, and ACCEL. We simply input the transformed 3D keypoints into SmoothNet to obtain the final 3D keypoint positions.
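If it helps, here is a rough sketch of two of those keypoint metrics as commonly defined (not the repository's evaluation code; the `(T, J, 3)` array shapes and millimeter units are assumptions, and PA-MPJPE, which adds a Procrustes alignment before the MPJPE computation, is omitted for brevity):

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error over (T, J, 3) keypoint arrays."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def accel_error(pred, gt):
    """Mean per-joint acceleration error, using second differences over time."""
    accel_pred = pred[2:] - 2.0 * pred[1:-1] + pred[:-2]
    accel_gt = gt[2:] - 2.0 * gt[1:-1] + gt[:-2]
    return np.linalg.norm(accel_pred - accel_gt, axis=-1).mean()
```

Note that a constant per-frame offset inflates MPJPE but leaves the acceleration error untouched, which is why ACCEL specifically captures jitter rather than absolute accuracy.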

@donghaoye

donghaoye commented Oct 23, 2024

> Besides, I ran the evaluation with this command: `CUDA_VISIBLE_DEVICES=7 python eval_smoothnet.py --cfg configs/pw3d_spin_3D.yaml --checkpoint data/checkpoints/pw3d_spin_3D/checkpoint_32.pth.tar --dataset_name pw3d --estimator spin --body_representation smpl --slide_window_size 32`
>
> The result is also different from your paper.

@HospitableHost
Hi, have you fixed the above problem?

@HospitableHost
Author

@donghaoye
Hi, you could refer to my paper "MoManifold: Learning to Measure 3D Human Motion via Decoupled Joint Acceleration Manifolds" for details of my fair comparison to SmoothNet.
By the way, if you have any questions, you could send me an email or add me on WeChat.
In fact, their results are indeed different from the paper, and some data used in their paper was somehow lost on their server.
So I just used the released code to report evaluation results, and I added a footnote about this in my paper.
