details about the embedding dimension #3
Comments
Additionally, could you tell me more about the consumption of graphics memory when doing inference? It would help me a lot.
Yes, all the hidden features have the same dimension (128).
In my implementation, GPU memory consumption at the inference stage is 3998 MiB with a batch size of 64. I used a GTX 1660 to get this result.
Thanks for your reply, it is very informative!
All the components in mmTransformer are trained from scratch. You can try different initialization methods; this one is inherited from the aforementioned repo. FYI, all the weights are initialized with Xavier initialization, except for the trajectory proposals, which use orthogonal initialization.
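A minimal sketch of that initialization scheme, using NumPy stand-ins rather than the actual mmTransformer layers (the layer shapes here are hypothetical; the repo itself uses PyTorch, where `nn.init.xavier_uniform_` and `nn.init.orthogonal_` do the same job):

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier_uniform(fan_in, fan_out):
    """Xavier (Glorot) uniform init: U(-a, a) with a = sqrt(6 / (fan_in + fan_out))."""
    a = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-a, a, size=(fan_out, fan_in))

def orthogonal(fan_in, fan_out):
    """Orthogonal init: QR decomposition of a Gaussian matrix, signs fixed via R."""
    g = rng.standard_normal((fan_out, fan_in))
    q, r = np.linalg.qr(g)
    return q * np.sign(np.diag(r))

# Hidden size 128, as stated above; all hidden features share this dimension.
W_hidden = xavier_uniform(128, 128)    # ordinary layer weight (Xavier)
W_proposal = orthogonal(128, 128)      # trajectory-proposal weight (orthogonal)
```

The orthogonal matrix satisfies `W_proposal @ W_proposal.T == I`, which is the property that distinguishes it from the Xavier-initialized weights.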
Got it! Thanks for your patience and have a nice day XD
If you have further problems with reproduction, feel free to open a new issue.
Could you provide the embedding dimension of each step in the motion aggregator and the map extractor (with VectorNet)? I haven't found them or a corresponding reference in the Implementation Details in the Appendix. Are they the same as the hidden state (128)?