
Faster pytorch batched version. #11

Merged 2 commits on Feb 2, 2019

Commits on Feb 1, 2019

  1. Faster pytorch batched version.

    I made this very fast PyTorch implementation of your SMPL model for my work with the SMIL model.
    It took me all of Monday, and then it turned out you had already made a PyTorch version by the time I was done with my code. :)
    I believe this should work with both the SMPL and SMIL models. It is very fast on my computer, limited only by memory, and it works with sparse tensors too (which saves a lot of that memory); see the sparse batching sketch after the commit list.
    I hope my code is not too messy; I don't have time to clean it up, but I hope it is helpful to you.
    Try it out!
    
    Thank you so much for your work; it helped me a lot.
    SMIL: https://www.iosb.fraunhofer.de/servlet/is/82920/
    
    If you have any questions feel free to ask.
    Best regards, Sebastian.
    sebftw committed Feb 1, 2019 (commit 43128ff)
  2. Allow to run from state dict.

    It is now possible to pass None as model_path, which means the model file is not loaded.
    You can then load from a state dict instead. This state dict must contain 'kintree_table', but otherwise partial loading is also possible; a loading sketch follows the commit list.
    
    See https://pytorch.org/tutorials/beginner/saving_loading_models.html for examples.
    sebftw committed Feb 1, 2019 (commit fe1167c)
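
As a rough illustration of what "batched and sparse-friendly" means in the first commit, here is a minimal, self-contained sketch of a skinning-style sparse matrix multiply applied to a whole batch at once. The shapes and variable names are illustrative assumptions; this is not the code added by the commit.

```python
import torch

# Illustrative sketch only: a batched, sparse-friendly skinning-style multiply,
# not the code from this commit. Shapes and names are assumptions.
batch_size, n_verts, n_joints = 4, 6890, 24

# The skinning-weight matrix is mostly zeros for SMPL-like models,
# so it can be stored as a sparse tensor.
weights = torch.rand(n_verts, n_joints)
weights[weights < 0.9] = 0.0
weights_sparse = weights.to_sparse()

# Per-joint 4x4 transforms for every sample in the batch, flattened so one
# 2-D sparse matmul covers the entire batch at once.
joint_transforms = torch.rand(batch_size, n_joints, 4, 4)
flat = joint_transforms.permute(1, 0, 2, 3).reshape(n_joints, -1)   # (n_joints, batch*16)

per_vertex = torch.sparse.mm(weights_sparse, flat)                  # (n_verts, batch*16)
per_vertex = per_vertex.reshape(n_verts, batch_size, 4, 4).permute(1, 0, 2, 3)
print(per_vertex.shape)  # torch.Size([4, 6890, 4, 4])
```

Keeping the weight matrix sparse is what saves memory here, and one matrix multiply handles the whole batch instead of a Python loop over samples.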
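For the second commit's loading path, a minimal sketch following the linked PyTorch tutorial might look like the following. The module and class names (smpl_torch_batch, SMPLModel) and the model_path keyword are assumptions based on the commit message, not verified against the repository.

```python
import torch
from smpl_torch_batch import SMPLModel  # assumed module/class names

# Assumed behaviour from the commit message: model_path=None means the
# pickled model file is not loaded at construction time.
model = SMPLModel(model_path=None)

# The state dict must contain 'kintree_table'; the remaining entries may be partial.
state_dict = torch.load('smpl_state.pth')

# strict=False allows partial loading, as described in the linked tutorial.
model.load_state_dict(state_dict, strict=False)
```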