--trace_model performance #191
This feature is under active development and will improve soon, but you should still see a speedup with the current version. Which version of torch are you using? To get the most out of tracing, it's important that you use torch >= 1.12.0.
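For context, the tracing behind `--trace_model` is based on `torch.jit.trace`, which runs the model once on an example input and records the executed ops into a static graph that can be replayed on inputs of the same shape. A minimal sketch with a hypothetical toy module (not OpenFold's actual model) looks like this:

```python
import torch

class TinyBlock(torch.nn.Module):
    """Stand-in for a model block; purely illustrative."""
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(8, 8)

    def forward(self, x):
        return torch.relu(self.linear(x))

torch.manual_seed(0)
model = TinyBlock().eval()
example = torch.randn(4, 8)  # tracing fixes this input shape

with torch.no_grad():
    # One-time cost: run the model once and record the op graph.
    traced = torch.jit.trace(model, example)
    eager_out = model(example)
    traced_out = traced(example)  # reusable for any same-shape input

print(torch.allclose(eager_out, traced_out, atol=1e-6))  # True
```

Because the recorded graph is shape-specific, re-tracing is triggered when input dimensions change, which is why same-length sequences benefit most.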
Thanks, I'm using 1.12.0+cu116.
What kind of GPU?
I'm using an RTX 3060. I just placed the separate fasta files in the input directory and ran run_pretrained_openfold.py on that directory with --trace_model enabled. Is there anything else I need to do? I also had --use_precomputed_alignments enabled, btw. Thanks for the help.
No, that sounds right. Sit tight until I upload the new version of tracing (should be fairly soon). In the meantime, you can enable
Sure thing, thanks!
Hi @gahdritz, just curious, any updates on this? Is the
Whoops, forgot to close this. It should work fine now, especially for shorter proteins (< 1000 residues).
Hi, I have tested the --trace_model mode on a small batch of sequences of the same length; I get an 80 s tracing time followed by 20 s of inference per sequence. If I just fold them without --trace_model, inference takes 18-19 s per sequence. Am I doing something wrong? There doesn't seem to be much documentation about this feature.
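One thing worth checking when benchmarking: tracing is a one-time compilation cost, so it should only dominate if it is being re-run per sequence (e.g. because input shapes differ after padding). A hedged sketch of how to measure this with `time.perf_counter` on a toy model (module and sizes here are illustrative, not OpenFold's):

```python
import time
import torch

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU()).eval()
batch = torch.randn(16, 64)

with torch.no_grad():
    # Measure the one-time tracing cost (analogue of the reported 80 s).
    t0 = time.perf_counter()
    traced = torch.jit.trace(model, batch)
    trace_time = time.perf_counter() - t0

    # Subsequent same-shape calls reuse the compiled graph.
    t0 = time.perf_counter()
    for _ in range(10):
        traced(batch)
    per_call = (time.perf_counter() - t0) / 10

print(f"trace once: {trace_time:.4f} s, per inference: {per_call:.4f} s")
```

If the tracing cost shows up for every sequence rather than once per length, the traced graph is not being reused, and amortizing it over many same-length inputs is where the speedup should come from.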