
Results on MPI-INF-3DHP #3

Closed
flyyyyer opened this issue Mar 31, 2021 · 10 comments

@flyyyyer

flyyyyer commented Mar 31, 2021

Could you provide code on MPI-INF-3DHP?

And how long does it take to train the model? It seems very slow.

@zczcwh
Owner

zczcwh commented Mar 31, 2021

@flyyyyer Sorry, I use someone else's code and I don't have permission to redistribute. I have to say the training is very slow. I use 2 RTX 3090 GPUs to train the model. For 27 frames it takes 3-4 days and for 81 frames it takes more than 10 days.

@flyyyyer
Author

flyyyyer commented Mar 31, 2021

> @flyyyyer Sorry, I use someone else's code and I don't have permission to redistribute. I have to say the training is very slow. I use 2 RTX 3090 GPUs to train the model. For 27 frames it takes 3-4 days and for 81 frames it takes more than 10 days.

Thanks for your reply! I want to train my model on MPI-INF-3DHP. Could you tell me where I can find the released code?

@zczcwh
Owner

zczcwh commented Mar 31, 2021

@flyyyyer Actually he didn't put the code online. Maybe you can try to find if someone releases the code for preparing MPI-INF-3DHP in their repo.

@Vegetebird

> @flyyyyer Sorry, I use someone else's code and I don't have permission to redistribute. I have to say the training is very slow. I use 2 RTX 3090 GPUs to train the model. For 27 frames it takes 3-4 days and for 81 frames it takes more than 10 days.

I also find the training very slow. Also, how do you calculate the FLOPs? The FLOPs I calculate are about 8× higher than the numbers reported in Table 6.

@zczcwh
Owner

zczcwh commented Mar 31, 2021

@Vegetebird Yes, I'm trying to adapt my model by using some efficient transformer architecture such as Longformer to speed up. For the FLOPs, I just hand calculate the matrix multiplication FLOPs based on the matrix size in the transformer encoder and other components.
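A hand count along those lines might look like the sketch below — a rough, matmul-only estimate (2·m·k·n FLOPs per m×k by k×n multiply; biases, softmax, and LayerNorm ignored). The 4-layer, 544-dimension sizes come from this thread; the MLP ratio and sequence lengths are assumptions:

```python
def matmul_flops(m, k, n):
    # An m x k by k x n matmul does m*n dot products of length k,
    # each costing k multiplies + k adds => 2*m*k*n FLOPs.
    return 2 * m * k * n

def encoder_layer_flops(seq_len, dim, mlp_ratio=4):
    """Rough matmul-only FLOPs for one standard transformer encoder layer."""
    qkv = matmul_flops(seq_len, dim, 3 * dim)        # QKV projections
    scores = matmul_flops(seq_len, dim, seq_len)     # Q @ K^T
    attn_v = matmul_flops(seq_len, seq_len, dim)     # softmax(scores) @ V
    proj = matmul_flops(seq_len, dim, dim)           # output projection
    mlp = (matmul_flops(seq_len, dim, mlp_ratio * dim)
           + matmul_flops(seq_len, mlp_ratio * dim, dim))
    return qkv + scores + attn_v + proj + mlp

# e.g. 4 layers at 544 dim (sizes mentioned in this thread), 81-frame input
total = 4 * encoder_layer_flops(81, 544)
print(f"{total / 1e9:.2f} GFLOPs (matmul terms only)")
```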

@Vegetebird

> @Vegetebird Yes, I'm trying to adapt my model by using some efficient transformer architecture such as Longformer to speed up. For the FLOPs, I just hand calculate the matrix multiplication FLOPs based on the matrix size in the transformer encoder and other components.

I calculated the FLOPs of your model with `from thop import profile`, and they are much higher than the values you provided in Table 6. I think using the thop package may give more precise FLOPs, given the slow training time and the large size of the Transformer (4 layers, 544 dimensions).

@zczcwh
Owner

zczcwh commented Mar 31, 2021

> > @Vegetebird Yes, I'm trying to adapt my model by using some efficient transformer architecture such as Longformer to speed up. For the FLOPs, I just hand calculate the matrix multiplication FLOPs based on the matrix size in the transformer encoder and other components.
>
> I calculated the FLOPs of your model with `from thop import profile`, and they are much higher than the values you provided in Table 6. I think using the thop package may give more precise FLOPs, given the slow training time and the large size of the Transformer (4 layers, 544 dimensions).

I see there is an issue for thop that says: "MACs value becomes too large if there are shared modules" (Lyken17/pytorch-OpCounter#98). In our case, the spatial transformer shares weights. This might be the reason the FLOPs become too large.
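That double-counting effect can be mimicked with a toy hook-style counter (a hypothetical sketch, not thop's actual code): the op count is accumulated on the module at every forward call, so a module shared across N calls — like a spatial transformer applied once per input frame — contributes N times when per-module totals are summed. The joint/frame sizes below are assumptions for illustration:

```python
class CountedLinear:
    """Toy layer that accumulates matmul FLOPs on every forward call,
    mimicking a hook-based profiler such as thop."""
    def __init__(self, in_features, out_features):
        self.in_features = in_features
        self.out_features = out_features
        self.total_ops = 0

    def __call__(self, n_rows):
        # 2 * rows * in * out FLOPs per call, accumulated across calls
        self.total_ops += 2 * n_rows * self.in_features * self.out_features

shared = CountedLinear(34, 544)   # one shared spatial layer (assumed sizes)
for _ in range(27):               # applied once per frame of a 27-frame clip
    shared(17)                    # e.g. 17 joints per frame

one_call = 2 * 17 * 34 * 544
# Summing per-module totals charges the shared layer 27 times:
assert shared.total_ops == 27 * one_call
```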

@Vegetebird

> > > @Vegetebird Yes, I'm trying to adapt my model by using some efficient transformer architecture such as Longformer to speed up. For the FLOPs, I just hand calculate the matrix multiplication FLOPs based on the matrix size in the transformer encoder and other components.
> >
> > I calculated the FLOPs of your model with `from thop import profile`, and they are much higher than the values you provided in Table 6. I think using the thop package may give more precise FLOPs, given the slow training time and the large size of the Transformer (4 layers, 544 dimensions).
>
> I see there is an issue for thop that says: "MACs value becomes too large if there are shared modules" (Lyken17/pytorch-OpCounter#98). In our case, the spatial transformer shares weights. This might be the reason the FLOPs become too large.

Sorry, I ignored the detail that you report the FLOPs per frame. But why do the FLOPs per frame differ with different input sequence lengths?

@zczcwh
Owner

zczcwh commented Apr 3, 2021

@Vegetebird the "per frame" means per predicted output frame.
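One way to see why per-output-frame FLOPs can still depend on sequence length: if the model predicts a single output frame per input clip, the whole clip's cost is charged to that one frame, and self-attention over the temporal dimension grows superlinearly in the number of input frames. A rough sketch under those assumptions (matmul-only terms of one temporal attention block; the 544 dimension comes from this thread, everything else is assumed):

```python
def temporal_attn_flops(n_frames, dim):
    # Matmul-only cost of one self-attention block over n_frames tokens.
    qkv = 2 * n_frames * dim * 3 * dim        # QKV projections (linear in n)
    scores = 2 * n_frames * dim * n_frames    # Q @ K^T (quadratic in n)
    weighted = 2 * n_frames * n_frames * dim  # attn @ V (quadratic in n)
    proj = 2 * n_frames * dim * dim           # output projection (linear in n)
    return qkv + scores + weighted + proj

# Charging each full clip to its single predicted output frame:
per_frame_27 = temporal_attn_flops(27, 544) / 1  # one output frame per clip
per_frame_81 = temporal_attn_flops(81, 544) / 1
print(per_frame_81 / per_frame_27)  # > 3, since the attention terms scale as n^2
```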

@zczcwh zczcwh closed this as completed Apr 3, 2021
@xobeiotozi

> @flyyyyer Sorry, I use someone else's code and I don't have permission to redistribute. I have to say the training is very slow. I use 2 RTX 3090 GPUs to train the model. For 27 frames it takes 3-4 days and for 81 frames it takes more than 10 days.

Hello! Could you share the code for training on the MPI-INF-3DHP dataset? Thank you! xobeiotozi@gmail.com
