
Problem in experiments #24

Closed
daidedou opened this issue Feb 28, 2023 · 5 comments

Comments

@daidedou

Hi, thanks for releasing the pretrained model! I downloaded it and launched the sample-pose experiment, but it doesn't work on my side (I didn't change anything). Moreover, the loss gets stuck at 0.0056 and does not decrease after that. Do you know if I did something wrong? (I used version 2 of the model, of course.) These are the results I get (init and output):
[Images: init_0, output_0, init_1, output_1]

@garvita-tiwari
Owner

Are these poses generated by randomly sampling quaternions? For very extreme poses the network might not work, because such drastic poses were not seen during training. I still suspect that increasing the number of projection steps might make the poses slightly better. If you want the model to work for such poses, you need to train the network with them.

For this, you can replace the query pose (https://github.com/garvita-tiwari/PoseNDF/blob/main/data/prepare_traindata.py#L139) with this:

    # batch_size and device come from the surrounding training script.
    # Sample a random quaternion per joint and normalize each to unit norm.
    quer_pose_quat = torch.rand((batch_size, 21, 4))
    quer_pose_quat = torch.nn.functional.normalize(quer_pose_quat, dim=2).to(device=device)
    quer_pose_np = quer_pose_quat.reshape(-1, 84).detach().cpu().numpy()  # 21 joints * 4 = 84
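As a self-contained sketch of the sampling above (the batch size, the 21-joint count, and the `device` default are filled in here as assumptions for illustration):

```python
import torch

def sample_random_pose_quats(batch_size, n_joints=21, device="cpu"):
    # Sample per-joint quaternions uniformly in [0, 1)^4 and normalize
    # each to unit norm, matching the snippet above. This covers the
    # quaternion space with unseen poses; it is not claimed to be a
    # uniform distribution over rotations.
    quer_pose_quat = torch.rand((batch_size, n_joints, 4))
    quer_pose_quat = torch.nn.functional.normalize(quer_pose_quat, dim=2).to(device=device)
    return quer_pose_quat

quats = sample_random_pose_quats(8)
quer_pose_np = quats.reshape(-1, 84).detach().cpu().numpy()  # shape (8, 84)
```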

@daidedou
Author

daidedou commented Mar 6, 2023

Thank you, I think I get the idea. I did test with more typical poses and the results were much better. I will try training it myself. Do you think I could use the pretrained weights as a starting point?

@garvita-tiwari
Owner

> Thank you, I think I get the idea. I did test with more typical poses and the results were much better. I will try training it myself. Do you think I could use the pretrained weights as a starting point?

Could you please explain, how do you want to use the pretrained weights as basis?

@daidedou
Author

daidedou commented Mar 8, 2023

Sorry, I meant that I want to initialize PoseNDF with your pre-trained weights and feed it random poses in a new training loop. I was wondering whether this would affect the network's behavior on poses close to the manifold (sorry for the approximate English, and again, thank you for taking the time to answer!).

@garvita-tiwari
Owner

garvita-tiwari commented Mar 9, 2023

> Sorry, I meant that I want to initialize PoseNDF with your pre-trained weights and feed it random poses in a new training loop. I was wondering whether this would affect the network's behavior on poses close to the manifold (sorry for the approximate English, and again, thank you for taking the time to answer!).

That will probably not help. In our experiments, we found that training with random poses + clean poses, and then refining with noisy poses (created from AMASS) + clean poses, helps. My guess is that training in the opposite order will not help, but it's worth trying. You could still fine-tune with all the poses (instead of just random poses). I have trained a model with random poses and visualized some results here:
[Images: init_0001, out_0001, init_0007, out_0007]

You can find the corresponding model here: https://nextcloud.mpi-klsb.mpg.de/index.php/s/EdfTaLaiZindrCe
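For the warm-start idea discussed above, here is a minimal sketch. A tiny stand-in MLP is used in place of the actual PoseNDF model, and the checkpoint format, optimizer settings, and objective are all assumptions for illustration, not the repo's training code:

```python
import torch

def make_model():
    # Stand-in for PoseNDF: maps a flattened 21x4 quaternion pose (84-D)
    # to a scalar distance-like value.
    return torch.nn.Sequential(
        torch.nn.Linear(84, 256), torch.nn.ReLU(), torch.nn.Linear(256, 1)
    )

# Pretend this checkpoint is the released pretrained model.
pretrained = make_model()
torch.save({"model": pretrained.state_dict()}, "pretrained.pt")

# Warm start: initialize a fresh network from the pretrained weights,
# then continue training on randomly sampled poses.
model = make_model()
model.load_state_dict(torch.load("pretrained.pt")["model"])

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for step in range(10):
    q = torch.nn.functional.normalize(torch.rand(32, 21, 4), dim=2)
    dist = model(q.reshape(32, 84)).abs()  # unsigned distance prediction
    loss = dist.mean()                     # placeholder objective only
    opt.zero_grad()
    loss.backward()
    opt.step()
```

As discussed in the thread, fine-tuning on random poses alone may not help; mixing in clean (and noisy AMASS) poses in the recommended order is the safer approach.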
