Failed to find best cuBLAS algorithm, GEMM performance might be suboptimal #62

Closed · fermza opened this issue Feb 23, 2022 · 6 comments

fermza commented Feb 23, 2022

Dear Yoshitaka,

I was able to successfully install ColabFold on a machine running Ubuntu 20 with a Tesla K40c GPU. A prediction does complete, although it takes ~24 hours for a 430-residue protein, with the command:
colabfold_batch --amber --templates --num-recycle 3 test.fasta /home/pc08/Desktop/test_AF

During calculation, I get the following messages:
E external/org_tensorflow/tensorflow/stream_executor/cuda/cuda_driver.cc:771] failed to alloc 23994040320 bytes unified memory; result: CUDA_ERROR_OUT_OF_MEMORY: out of memory
W external/org_tensorflow/tensorflow/compiler/xla/service/gpu/gemm_algorithm_picker.cc:211] Failed to find best cuBLAS algorithm, GEMM performance might be suboptimal: INTERNAL: All algorithms tried for %cublas-gemm.21 = f32[1908,128]{1,0} custom-call(f32[1908,128]{1,0} %bitcast.319, f32[128,128]{1,0} %bitcast.321), custom_call_target="__cublas$gemm", backend_config="{\"alpha_real\":1,\"alpha_imag\":0,\"beta\":0,\"dot_dimension_numbers\":{\"lhs_contracting_dimensions\":[\"1\"],\"rhs_contracting_dimensions\":[\"0\"],\"lhs_batch_dimensions\":[],\"rhs_batch_dimensions\":[]},\"batch_size\":\"1\",\"lhs_stride\":\"244224\",\"rhs_stride\":\"16384\"}" failed. Falling back to default algorithm.

To be clear, the calculations for the 5 models do complete, and the results look good, but I would like to know whether the run can be made more efficient.
Not sure if it's useful, but I've noticed that the GPU is only used sporadically (while its memory stays maxed out the whole time), and one CPU core runs at 100% during most of the run.
If you have any insights on how to resolve this, I would greatly appreciate it. Thanks!
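
In case it helps to reproduce the observation, here is a minimal sketch for logging GPU utilization and memory alongside a run (standard nvidia-smi query flags; gpu_usage.csv is just an arbitrary output file name):

# Log utilization and memory every 5 seconds in the background, then run the prediction
nvidia-smi --query-gpu=timestamp,utilization.gpu,memory.used --format=csv -l 5 > gpu_usage.csv &
SMI_PID=$!
colabfold_batch --amber --templates --num-recycle 3 test.fasta /home/pc08/Desktop/test_AF
kill "$SMI_PID"   # stop the logger once the run finishes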

YoshitakaMo (Owner) commented

Can you share the output of nvidia-smi and your CUDA toolkit version (nvcc --version)?


fermza commented Feb 24, 2022

These are the outputs:

pc08@pc08:~$ nvidia-smi
Thu Feb 24 09:13:42 2022       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.103.01   Driver Version: 470.103.01   CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla K40c          Off  | 00000000:01:00.0 Off |                    0 |
| 30%   63C    P0    67W / 235W |  11098MiB / 11441MiB |      3%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A       919      G   /usr/lib/xorg/Xorg                 16MiB |
|    0   N/A  N/A      1446      G   /usr/lib/xorg/Xorg                 67MiB |
|    0   N/A  N/A      1622      G   /usr/bin/gnome-shell               23MiB |
|    0   N/A  N/A      1932      G   /usr/lib/firefox/firefox            9MiB |
|    0   N/A  N/A     48904      C   ...bfold-conda/bin/python3.7      208MiB |
+-----------------------------------------------------------------------------+
pc08@pc08:~$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Mon_Oct_11_21:27:02_PDT_2021
Cuda compilation tools, release 11.4, V11.4.152
Build cuda_11.4.r11.4/compiler.30521435_0

YoshitakaMo (Owner) commented

11098MiB / 11441MiB

The memory usage seems very high. Is any other process using the GPU?

Or are issues #22, #27, and #50 helpful to you?
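
As a quick check, a sketch for listing the compute processes currently holding GPU memory (nvidia-smi's --query-compute-apps option, which should be available with your driver version):

nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv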


fermza commented Feb 25, 2022 via email


fermza commented Mar 7, 2022

Hi Yoshitaka,
I have tried installing cuDNN again for my CUDA 11.4, and running the commands suggested in the issues you pointed to (a quick sanity check is sketched after the list):
export XLA_FLAGS=--xla_gpu_force_compilation_parallelism=1
export TF_FORCE_UNIFIED_MEMORY="1"
export XLA_PYTHON_CLIENT_MEM_FRACTION="4.0"
export XLA_PYTHON_CLIENT_ALLOCATOR="platform"
export TF_FORCE_GPU_ALLOW_GROWTH="true"
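
As I understand it, TF_FORCE_UNIFIED_MEMORY=1 together with XLA_PYTHON_CLIENT_MEM_FRACTION=4.0 allows JAX to oversubscribe GPU memory by spilling into host RAM via CUDA unified memory. A quick sanity check that these variables are actually visible in the shell that launches the run (just a sketch using standard shell tools):

# Show the relevant variables, then launch as before
env | grep -E 'XLA_|TF_FORCE'
colabfold_batch --amber --templates --num-recycle 3 test.fasta /home/pc08/Desktop/test_AF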

Even though the same warnings still appeared in the terminal, I was surprised to see that the GPU was now being used at almost full capacity (its memory was also >97% used). However, after a while, the process stopped with the following message:

E external/org_tensorflow/tensorflow/stream_executor/cuda/cuda_driver.cc:1047] could not synchronize on CUDA context: CUDA_ERROR_LAUNCH_FAILED: unspecified launch failure :: *** Begin stack trace ***









	PyDict_SetItem
	_PyModule_ClearDict
	PyImport_Cleanup
	Py_FinalizeEx
	Py_Exit
	
	PyErr_PrintEx
	PyRun_SimpleFileExFlags
	
	_Py_UnixMain
	__libc_start_main
	
*** End stack trace ***

2022-03-07 16:58:22.142478: F external/org_tensorflow/tensorflow/compiler/xla/service/gpu/gpu_executable.cc:127] Check failed: pair.first->SynchronizeAllActivity()

The complete terminal output is attached as a text file.
Any suggestions on how to proceed?

Thanks,
Fernando
issue_colabfold.txt


fermza commented Mar 8, 2022

Hello, I have an update on this. Today I ran the same test, but this time WITHOUT the commands indicated in the issues (clearing them is sketched after the list):

export XLA_FLAGS=--xla_gpu_force_compilation_parallelism=1
export TF_FORCE_UNIFIED_MEMORY="1"
export XLA_PYTHON_CLIENT_MEM_FRACTION="4.0"
export XLA_PYTHON_CLIENT_ALLOCATOR="platform"
export TF_FORCE_GPU_ALLOW_GROWTH="true"
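
To make sure these settings do not carry over from an earlier session, they can be cleared in the shell before launching the run (a minimal sketch, assuming bash and reuse of the same terminal):

# Unset the XLA/TF variables from the previous attempt, then run as usual
unset XLA_FLAGS TF_FORCE_UNIFIED_MEMORY XLA_PYTHON_CLIENT_MEM_FRACTION XLA_PYTHON_CLIENT_ALLOCATOR TF_FORCE_GPU_ALLOW_GROWTH
colabfold_batch --amber --templates --num-recycle 3 test.fasta /home/pc08/Desktop/test_AF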

Now the calculations completed, with the GPU used at capacity, taking about one hour for a ~430-residue model.

Still, the "Failed to find best cuBLAS algorithm, GEMM performance might be suboptimal" warnings show up, and GPU memory is also at ~97%, but at least the process completes and is much faster than before.

Thanks,
Fernando
