Do I need h36m data to run inference on internet data? #5
Comments
Hi @ChristianIngwersen, thanks for your interest!
Thanks for the quick reply @syguan96!
I tried using this command, and it works well.
Have you changed the code? From the error, check this code:
Haven't changed anything.
Modified the adaptation step to check for gradients; it passed the assertions but crashes with the same error when calling. Modifications:
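For reference, a minimal sanity check in that spirit (the helper name is hypothetical; it assumes a PyTorch-style model exposing `named_parameters()`, and is not the check used in the repo):

```python
def params_missing_grad(model):
    """Return names of parameters that will not receive gradients.

    Hypothetical helper: `model` is anything exposing a PyTorch-style
    named_parameters() iterable of (name, parameter) pairs. Useful to
    assert before an adaptation step that the learner's weights are
    actually trainable.
    """
    return [name for name, p in model.named_parameters()
            if not p.requires_grad]
```

One could then `assert not params_missing_grad(learner)` right before the adaptation call to fail early with a readable message instead of a CUDA error.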
I just noticed that the problem might be caused by the installed PyTorch. Have you checked the cause of the CUDA error?
My environment is PyTorch 1.8.1 with CUDA 11.1+, tested on a 3080.
I'm on PyTorch 1.8.2 with CUDA 11.1+, tested on a 2080. I completely followed the guide to set up a new env with 1.8.2 as mentioned in the readme. I can try to downgrade and see if it fixes it :)
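A quick way to rule out a version mismatch like the one above is to compare the installed version string against the pinned one. A minimal sketch (pure Python; the function names are made up for illustration):

```python
def version_tuple(v):
    """Parse a dotted version string like '1.8.2' or '1.8.1+cu111'
    into a comparable tuple of ints, dropping any local '+cuXXX' tag."""
    return tuple(int(part) for part in v.split("+")[0].split("."))

def matches_pin(installed, pinned):
    """True if the installed version exactly matches the pinned one."""
    return version_tuple(installed) == version_tuple(pinned)
```

In a real environment one would call e.g. `matches_pin(torch.__version__, "1.8.1")`, since `torch.__version__` can carry a `+cu111` suffix.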
Sorry, I didn't check this detail carefully.
No worries! :) While downgrading: in the alphapose step you suggest using: This will run on one video at a time, right? I ran it on a single vid, and then changed the
The structure is:
Solved after reinstalling with conda instead of pip.
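For anyone hitting the same CUDA error, the conda route can be sketched as below. The exact package pins and the env name `dyna` are assumptions, not the commands used here; check the PyTorch prior-versions page for pins matching your CUDA setup:

```shell
# Recreate the environment, then install PyTorch 1.8.x from the conda
# channels rather than pip wheels. The cudatoolkit pin matches the
# CUDA 11.1 setup discussed above.
conda create -n dyna python=3.8 -y
conda activate dyna
conda install pytorch torchvision torchaudio cudatoolkit=11.1 \
    -c pytorch-lts -c nvidia
```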
I'm glad to hear this news. Thanks for your contribution to improving the quality of this repo!
Hi,
Thanks for your great work!
Do I really need to download the entire h36m dataset in order to run the demo on internet data?
Following your guide, I run into issues in the lower-level adaptation step that takes an h36m batch.
Is this on purpose or should it be changed?
Here
lower_level_loss, _ = self.lower_level_adaptation(image, gt_keypoints_2d, h36m_batch, learner)
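If the h36m batch is genuinely optional for internet-data inference, one way to make that explicit is a guard around the call above. A hypothetical sketch (`adapt_fn` stands in for `self.lower_level_adaptation`, which is not reproduced here; the wrapper name is made up):

```python
def lower_level_loss_or_skip(adapt_fn, image, gt_keypoints_2d,
                             h36m_batch, learner):
    """Run the lower-level adaptation only when an h36m batch exists.

    Hypothetical wrapper: adapt_fn mirrors the (loss, extras) return
    convention of lower_level_adaptation in the snippet above. With no
    h36m batch the h36m term simply contributes zero loss.
    """
    if h36m_batch is None:
        return 0.0  # skip the h36m lower-level term entirely
    loss, _ = adapt_fn(image, gt_keypoints_2d, h36m_batch, learner)
    return loss
```

Whether skipping the lower-level step preserves adaptation quality is a separate question for the maintainers; this only shows how the dependency on h36m data could be made optional at the call site.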