Model does not run correctly on CUDA Capability 7.x GPUs #59
If you're interested in stats: it works fine on an NVIDIA GeForce RTX 3080, Driver Version: 535.183.01 |
Could you be specific about what was not working on your side on the V100? How do we recognize that there is a problem? Is a non-sense structure THE indicator of numerical inaccuracies? The mentioned reports of non-sense structures on the Quadro RTX 4000 and RTX 2060S were not done in your docker environment... Because on my side, predictions look perfect on an old Quadro P3000 6GB for several protein and ligand complexes (i.e. on a 6-year-old ThinkPad laptop and a mobile GPU with compute < 8.0). It also works great on an RTX 3090. Other than non-sense structures, what other observation could indicate that we have numerical inaccuracy? Is there a controlled test we could do to identify potential numerical inaccuracy in our setup? |
The nonsense structure is the indicator of the problem here - output will look almost random. The problem appears related to bfloat16, which is not supported on older GPUs. We will continue to investigate next week. Interesting to know that it does work on some older GPUs, thanks for the report. Even if the major issue under investigation here isn't present, please note we have not done any large-scale numerical verification of outputs on devices other than A100/H100. |
Thank you for the clarification @joshabramson. I will watch for "exploded" structures, and report the specifics if it ever happens on one of my GPUs. The P3000 definitely does not natively support BF16 (CUDA capability 6.1); I guess it emulates it via float32 compute. Since it is quite probable that several people will try to run AF3 on their available hardware, here are some details of my setup, where it works perfectly so far.
- Number of tokens (12 runs so far on that GPU): 167-334 tokens, so the largest bucket size tested was 512
- Largest test:
- Typical inference speed for < 256 tokens: 150-190 seconds per seed (so typically less than 3 minutes for < 256 tokens)
- GPU: Quadro P3000, Pascal architecture, Compute Capability 6.1 (ThinkPad P71 laptop)
- Docker: default setup, NOT using unified memory
System details were captured with nvidia-smi, nvcc -V, deviceQuery, and neofetch. |
We ran the "2PV7" example from the docs on all GPU models available on our cluster with the following results:
Specifically, a ranking score of -99 corresponds to noise/explosion, and a ranking score of 0.67 corresponds to a visually compelling output structure. Update (20.11): added driver/cuda versions reported by nvidia-smi. |
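For readers who want a quick self-check on their own hardware, a minimal sketch along these lines may help. It assumes the default output layout where each run directory contains a `ranking_scores.csv` with a `ranking_score` column; adjust the names and paths to your own setup.

```python
# Minimal sanity check for the "-99" explosion signature described above.
# Assumes the run directory contains a ranking_scores.csv file with a
# "ranking_score" column; adjust the column name and path for your setup.
import csv
import sys
from pathlib import Path

def run_looks_healthy(run_dir: str) -> bool:
    csv_path = Path(run_dir) / "ranking_scores.csv"
    with csv_path.open() as fh:
        scores = [float(row["ranking_score"]) for row in csv.DictReader(fh)]
    exploded = [s for s in scores if s <= -90]  # -99 indicates noise/explosion
    print(f"{csv_path}: {len(scores)} samples, {len(exploded)} look exploded")
    return len(exploded) == 0

if __name__ == "__main__":
    sys.exit(0 if run_looks_healthy(sys.argv[1]) else 1)
```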
Thanks @jurgjn, this is incredibly useful information! These are the GPU capabilities (see https://developer.nvidia.com/cuda-gpus) for the GPUs mentioned:
Looks like anything with GPU capability < 8.0 produces bad results. |
Just to add one more piece of info, I am using a RTX A6000 (capability 8.6) and so far all looks well. |
RTX A5000 (capability 8.6) works well too |
Could more people test with capability 6.x? Based on the result above from @smg3d, it looks like maybe only capability 7.x is broken, while 6.x (and >= 8.0) might be fine. I.e. the current theory:
- capability 6.x: appears fine
- capability 7.x: broken (exploded structures)
- capability >= 8.0: fine
|
I wonder if it could be a driver effect? I noticed several people are mentioning they are using older drivers. It might be useful to know which driver and CUDA version @jurgjn was using on his system. I was using Driver 560.35.03 and CUDA V12.6.77 (actually just upgraded to driver 565 today). |
I could now try AF3 on a Quadro P4000 (Pascal), and like @smg3d reported for the P3000, it works on this GPU. This test was done with the same driver and CUDA versions (565.57.01, cuda_12.5.r12.5) as the tests on the RTX 2060S (Turing) and Quadro RTX 4000 (Turing). |
V100 also produces "exploded" structures. NVIDIA-SMI 560.35.03, Driver Version: 560.35.03, CUDA Version: 12.6 |
Quadro RTX 8000 also got exploding structures Driver Version 555.42.06, CUDA Version 12.5 |
I can confirm that it runs well on P100 (capability 6.0). So far it has been confirmed that it runs well on the following 6.x capability GPUs:
- Tesla P100 (6.0)
- Quadro P3000 (6.1)
- Quadro P4000 (6.1)
And so far there have been no reports of "exploded structures" on 6.x capability. |
- Add explicit check for compute capability < 6.0
- Keep check for range [7.0, 8.0)
- Update error message to clarify working versions (6.x and 8.x)
- Addresses issue google-deepmind#59
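For illustration, a standalone sketch of such a gate could look like the code below. This is an approximation of the check described in that commit message, not the actual code in the repository; it assumes a CUDA-enabled JAX install where GPU devices expose a `compute_capability` string.

```python
# Standalone sketch of the capability gate described in the commit message
# above; an approximation, not the actual check in the repository. Assumes a
# CUDA-enabled JAX install where GPU devices expose `compute_capability` as a
# string such as "6.1", "7.5" or "8.6".
import jax

def check_gpu_compute_capability() -> None:
    for device in jax.local_devices():
        if device.platform != "gpu":
            continue  # skip CPU/TPU devices
        cc = float(device.compute_capability)
        if cc < 6.0:
            raise RuntimeError(
                f"{device.device_kind}: compute capability {cc} is below 6.0 "
                "and is not supported."
            )
        if 7.0 <= cc < 8.0:
            raise RuntimeError(
                f"{device.device_kind}: compute capability {cc} is known to "
                "produce exploded structures; use a 6.x or >= 8.0 GPU."
            )

if __name__ == "__main__":
    check_gpu_compute_capability()
    print("All local GPUs pass the capability check.")
```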
I think it would be good for users to be able to use AlphaFold3 on Pascal GPUs (without requiring them to modify code). The data on this issue strongly suggest that the "exploded structures" problem does not affect Pascal GPUs (compute capability 6.x). Moreover, there are still several clusters with P100s, and these often have 0 or very short wait time (compared to the A100s). For example, on one of the Canadian national clusters, AF3 jobs on P100 currently start immediately, whereas jobs on the A100 (on the same cluster) often have 10-30 minutes wait time in the queue. So for a single inference job on small-medium size protein complexes, we get our predictions back much faster with the P100, despite the inference being ~5x slower (358 sec vs 73 sec on the tested dimer). I tested and submitted a small PR to allow Pascal GPUs to run without raising the error message. |
I got a nice looking structure for the 2PV7 example on an
|
Following up on the previous comment, I ran some docking simulations on our old cluster, which is a mix of "RTX 2080 Ti" and "GTX 1080 Ti" nodes. All of the ~20 jobs on the 1080s worked OK; all of the ~20 jobs on the 2080s gave exploded structures and ranking_scores of -99. Looks like the 2080s have compute capability 7.5 and the 1080s have compute capability 6.1, so this fits with the "7.0 <= CC < 8.0 is bad" theory. |
Thanks for all the reports and suggestions here. Update from our side: we identified where the issue with bfloat16 vs float32 is for V100; after fixing that, structures are no longer exploded, but
We are investigating these issues with the XLA team, but in the meantime we do not believe V100s are safe to use even without exploding structures. We also tested P100s, which have capability less than 7, and there we can run without any changes (other than switching the flash attention implementation to 'xla') up to 1024 tokens, and with no regression in accuracy compared to A100. However, given the issues we see on V100, we have reservations about removing any restrictions on GPU versions just yet. Users are free to remove the hard error from the code themselves. |
Awesome, thank you for the update and for digging into this! Is it easy to say what the bfloat16 "partial fix" for V100s is? In case we wanted to try doing some testing on other 7<=CC<8 GPUs? |
The partial fix is to convert any bfloat16 params to float32 directly after loading them, and to set |
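As a rough illustration of the parameter-cast half of that partial fix, a sketch is below; `load_params` is a placeholder for your own loading code, not an actual AlphaFold 3 function.

```python
# Rough illustration of the parameter-cast half of the partial fix described
# above: upcast any bfloat16 leaves to float32 right after loading the model
# parameters. `load_params` is a placeholder for your own loading code, not an
# actual AlphaFold 3 function.
import jax
import jax.numpy as jnp

def upcast_bfloat16_params(params):
    def upcast(leaf):
        if hasattr(leaf, "dtype") and leaf.dtype == jnp.bfloat16:
            return leaf.astype(jnp.float32)
        return leaf
    return jax.tree_util.tree_map(upcast, params)

# params = load_params(model_dir)          # placeholder loading step
# params = upcast_bfloat16_params(params)  # apply the cast before inference
```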
Originally I came here to make sure the code / model can run distributed over 2 or more GPUs, because my 2 RTX 4000 Ada "only" have 20 GB each (combined, the 40 GB of an A100). The question remains whether we could please update the installation documentation to be less intimidating by mentioning lower-end hardware and not only $30,000+ irons, e.g. for students to get their feet wet. Thanks |
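On the memory point: one common JAX-level workaround for cards with less VRAM is to enable CUDA unified memory via environment variables before JAX initializes. A sketch is below; the variable names are standard JAX/XLA ones, but whether this is enough for a full AF3 run on 20 GB cards is untested here.

```python
# Sketch: let JAX spill GPU memory to host RAM via CUDA unified memory, as a
# possible workaround for cards with limited VRAM. These are standard JAX/XLA
# environment variables; whether this suffices for a full AlphaFold 3 run on a
# 20 GB card has not been verified here.
import os

os.environ["XLA_PYTHON_CLIENT_PREALLOCATE"] = "false"
os.environ["TF_FORCE_UNIFIED_MEMORY"] = "1"
os.environ["XLA_PYTHON_CLIENT_MEM_FRACTION"] = "2.0"  # allow oversubscription

import jax  # must be imported only after the variables are set

print(jax.devices())
```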
Updates (and some good news):
|
Does anyone know if inference works on the Apple M4 chip? (Or any Apple M series GPU, for that matter.) |
I appreciate all the helpful resources on this thread.
Thank you!! |
Hi @OrangeyO2,
A40 has compute capability 8.6 and uses the Ampere architecture (A100 also uses Ampere) so it should be fine. That being said, we haven't done any large-scale tests on that particular GPU type.
P100 is compute capability 6.0 (Pascal), A40 is 8.6 (Ampere), L40 is 8.9 (Ada Lovelace). As such, I would recommend A40 or L40 as they will be significantly faster than the P100. They are likely to be ok, but I recommend you run some accuracy tests. |
Thanks for the info and for keeping us updated with the status! Are there some generic accuracy tests one can run on different GPU types (that were not specified above) to make sure that this V100 issue is not taking place? Does the V100 issue basically lead to random-looking output no matter the input, or just in specific cases? |
@Augustin-Zidek |
Has anyone succeeded on a Tesla T4 (capability 7.5)? Driver Version: 550.54.15, CUDA Version: 12.4, with --flash_attention_implementation=xla. Is there any way to keep it from predicting "exploded structures"? |
I did test it on our T4 cluster (with CUDA 12.6 and |
Please avoid the partial fix mentioned above if possible as it can give less accurate output than expected. We are working on a complete fix and will update on timelines very soon. |
Hi, thanks again for this great tool - is there any news regarding how we users can make sure that our GPUs are OK accuracy-wise? Is the issue discussed here related just to random/obviously wrong predicted structures? Or is this GPU accuracy issue more nuanced than that? I am looking for a benchmark to verify the validity of different GPU models |
We are pretty sure CUDA capability 7 GPUs all face the same issue, and should not currently be used. CUDA capability 6 or >=8 are fine. As per comments above, there are some hacks that can move away from exploding structures for cc 7 gpus, but then numerical accuracy is not on par with what we expect. Please await the full fix for cc 7 gpus, which is coming soon. |
Great, thanks @joshabramson |
A note from us at Google DeepMind:
We have now tested accuracy on V100 and there are serious issues with the output (it looks like random noise). Users have reported similar issues with RTX 2060S and Quadro RTX 4000.
For now the only supported and tested devices are A100 and H100.