Cuda/Pytorch/Installation Issues #172
Try uninstalling PyTorch from your conda environment and then manually reinstall it using the instructions on the website here: https://pytorch.org/get-started/locally/. LMK if that helps.
Ok, so I got a fresh install set up and tried to use the PyTorch installation for CUDA 11.3. I received the following error:
Are you sure that your CUDA version isn't actually 10.1?
Unfortunately, yes. Running torch.version.cuda returns 11.3 and there is no evidence of 10.1 on this machine.
Not even in your virtual environment anywhere? The error message does confirm that it knows that your torch CUDA is 11.3.
I saw the same problem, namely "RuntimeError: CUDA error: no kernel image is available for execution on the device", when I run run_pretrained_openfold.py. I am using an Azure GPU and followed the installation instructions (Linux). No code has been changed.
Not that I can see anywhere. After purging the machine of all conda files to try to avoid this conflict, the only things I have done are run install_third_party_dependencies.sh and the torch command you shared. Searching through the packages in the lib does not return any results for versions of cudatoolkit other than 11.3.
Any solution to this issue, @gahdritz? I am completely stuck here for my projects; any help will be appreciated.
Could you send the output of ...? Next, could you try downgrading to torch 1.10.1 and re-running?
Since this problem is common to many people, I would like to share my detailed investigation, and hopefully it will help to fix the problem soon.

I tested two GPUs (Tesla K80 and Tesla M60) with Microsoft Azure Machine Learning Studio and observed the same problem on both. Here is the GPU info.

I tested both the main and v1.0.0 branches. The issues are different; however, both have been reported on this thread and on #161. On v1.0.0, I observed something similar to #161. On main, I observed what people reported here.

I tested with and without a Docker container. The results are the same. Here is the Python and PyTorch information for main. The command-line inputs with Docker containers are

There is no error message for v1.0.0, since both the relaxed and unrelaxed PDB files are produced. However, the PDB file is garbage, as shown in the following image.

@gahdritz Let me know if you need more information and how I can help to fix this problem.
Interesting---this seems to be the first time this is happening on non-Pascal GPUs. I still can't reproduce this @bing-song, so I'll need some extra help here, if you don't mind. Thanks btw for putting this all together!
@gahdritz Here are the print statements that I added around line 53 on the main branch. Here is the output (not sure why END is not printed). The device cuda:0 is the correct one.
What happens if you put torch.cuda.synchronize() right before that matmul, below the custom kernel call? So strange that the kernel executes multiple times without crashing...
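For anyone following along, here is a minimal sketch of the idea behind this suggestion. The function and the custom kernel call are hypothetical placeholders, not OpenFold's actual code; the point is only where the synchronize goes relative to the suspect kernel:

```python
import torch

def matmul_after_custom_kernel(x, w):
    # out = my_custom_kernel(x)  # hypothetical custom CUDA kernel call
    out = x                      # placeholder so this sketch runs anywhere
    if torch.cuda.is_available():
        # CUDA kernel launches are asynchronous: a failed launch inside the
        # custom kernel can surface later, at an unrelated op such as the
        # matmul below. Synchronizing here forces the queued kernels to
        # finish, so the error is raised at this line instead.
        torch.cuda.synchronize()
    return out @ w

# CPU-only smoke test of the control flow:
y = matmul_after_custom_kernel(torch.eye(2), torch.ones(2, 2))
```

If the crash moves from the matmul to the synchronize call, the custom kernel launch is the real culprit.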
@gahdritz Here is the fasta file for this test.
If you have a fasta file that you want me to try on my machine, let me know.
I don't think this has to do with any particular input sequence, since I can't reproduce this on my machine. One last thing, if you don't mind: could you try running it with that CUDA flag (CUDA_LAUNCH_BLOCKING) mentioned in the error message set to 1?
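For reference, CUDA_LAUNCH_BLOCKING=1 makes kernel launches synchronous, so the traceback points at the kernel that actually failed rather than at a later op. One way to set it (it must take effect before CUDA initializes, so set it at the very top of the script or in the launching shell):

```python
import os

# Must be set before torch initializes CUDA (i.e. before the first CUDA
# call). Equivalently, from the shell:
#   CUDA_LAUNCH_BLOCKING=1 python3 run_pretrained_openfold.py <args>
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
```

Note that synchronous launches slow execution down considerably, so this is for debugging only.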
@gahdritz I am thinking about opening SSH access to my Azure GPU server for you to debug. Do you think this would help?
Yeah, that would be great actually.
@gahdritz Can you let me know how to give you the ssh login info?
Send it to my Gmail, which is just my GitHub username.
I think I resolved this in 6c89015. @lzhangUT, @bing-song, @epenning could you verify that the inference script works on your systems now?
@gahdritz Just tried. I checked out the openfold main branch, rebuilt the Docker container, and ran the inference with the precomputed MSA alignment data. I get exactly the same error.
Could you try without Docker?
@gahdritz It is working well without Docker. The predicted structure is good compared with the electron microscopy.
Excellent. I've since pushed a fix that should work for Docker. Could you give it a try? If that still doesn't work, could you change
You did the edit slightly wrong---you should replace
Yes, exactly.
Yes, but did you manage to capture the output of the
Right. Hm. The GPU must not be visible at that stage of the container's construction for whatever reason. As a sanity check, could you enter the resulting container, delete
Yes, that works and makes sense. However, it is not a fix. As I understand it, the GPU information is not available during the Docker image build; it only becomes available when a Docker container is created. This is the reason you need
Yes, this is unfortunate. Maybe the approach I took of dynamically determining the right GPU architectures to compile for fundamentally doesn't work in this case. Is there any alternative to hard-coding in a bunch of additional architectures, slowing down the build for everyone else? Perhaps I could look for a GPU and, if one is found, remove other architectures from a long, hardcoded list that would be used otherwise. I need to think about this.
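The scheme described above could be sketched roughly as follows. This is a hypothetical illustration, not OpenFold's actual build code, and the fallback CC list here is illustrative rather than the project's real one:

```python
# Broad fallback list used when no GPU is visible at build time
# (e.g. during `docker build`); the exact entries are illustrative.
DEFAULT_ARCHS = ["3.7", "5.2", "6.0", "6.1", "7.0", "7.5", "8.0", "8.6"]

def select_cuda_archs(detected):
    """Pick the compute capabilities to compile kernels for.

    detected: compute capabilities of GPUs visible at build time,
    e.g. ["7.0"], or an empty list if none were found.
    """
    if detected:
        # A GPU is visible: compile only for what was found, keeping
        # builds fast for everyone with a working local GPU.
        return sorted(set(detected))
    # No GPU visible: fall back to the long hardcoded list so the image
    # still works on K80s (3.7), P40s (6.1), V100s (7.0), etc.
    return DEFAULT_ARCHS
```

With this shape, `select_cuda_archs(["3.7"])` returns just `["3.7"]`, while `select_cuda_archs([])` (the `docker build` case) returns the full fallback list.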
Ok @bing-song, I did the thing in the previous comment. Check out f3814c9. It should now compile kernels for 3.7 and other CCs by default.
I changed the VM from a Tesla P40 to a V100, and now the inference works fine.
@gahdritz I confirmed that the installation is working on both Docker and ENV for Azure K80.
I just want to follow up on this comment by @Cweb118, since I had the exact same issue as them (version mismatch) but not the issues brought up by others in this thread. My local install nvcc did not match what the
Hello! So I have been struggling with a strange issue that I hope you or someone would be able to help me with. Let me start by providing some information:
So I am not sure if this is a problem with how I am attempting to install OpenFold, or if something else is going on. Essentially, after cloning the repo, the first thing I do is run scripts/install_third_party_dependencies.sh. This creates an environment called openfold_venv; however, this environment does not seem to contain many of the required packages (e.g., torch is absent). Following this with scripts/activate_environment.sh seems to fail. I have alternatively tried conda env create -f environment.yml, which sets up an environment in a different location. Either way, after setting up the environment I end up with one of the following issues, either during python setup.py install or during inference:
PyTorch (11.2). Please make sure to use the same CUDA versions." (despite torch.version.cuda returning 11.3)
These errors occur on clean installs with no conda or cudatoolkit installed anywhere else on the machine, so it is rather puzzling. As I said, I am not sure if this is due to performing the install sequence incorrectly, but I have tried several different solutions, and they all seem to circle back to one of these errors.
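A small, hypothetical helper for diagnosing the kind of mismatch described above: extract the release number from `nvcc --version` output and compare it against `torch.version.cuda`. The parsing assumes nvcc's usual output format, which contains a line like "Cuda compilation tools, release 11.3, V11.3.109":

```python
import re

def nvcc_cuda_version(nvcc_output):
    """Extract the CUDA release (e.g. "11.3") from `nvcc --version` output."""
    m = re.search(r"release (\d+\.\d+)", nvcc_output)
    return m.group(1) if m else None

# In a real check you would capture nvcc's output yourself, e.g.:
#   out = subprocess.run(["nvcc", "--version"],
#                        capture_output=True, text=True).stdout
#   print(nvcc_cuda_version(out), torch.version.cuda)
sample = "Cuda compilation tools, release 11.3, V11.3.109"
print(nvcc_cuda_version(sample))  # -> 11.3
```

If the two versions differ, the extension build will pick up the wrong toolkit, which matches the "Please make sure to use the same CUDA versions" error above.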
I apologize as I know this is rather vague, but if you can offer any sort of guidance it would be greatly appreciated!