
Error in the called DLL #23

Closed
zeta-zl opened this issue Mar 14, 2023 · 4 comments

Comments

@zeta-zl

zeta-zl commented Mar 14, 2023

Error: AttributeError: C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common\cudart64_30_9.dll: undefined symbol: cudaDeviceGetAttribute
It also reports:
Symbol cudaGetErrorName not found in C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common\cudart64_30_9.dll
Symbol cudaPeekAtLastError not found in C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common\cudart64_30_9.dll

I tried re-downloading the DLL, but it did not help. The error occurs whether or not quantization is enabled. The GPU is an NVIDIA GeForce GTX 1060.

@skirodev

First, make sure you have installed a GPU-enabled build of Torch. Check by running the code below.

import torch

print(torch.__version__)  
print(torch.cuda.is_available())

print(torch.__version__): prints the installed Torch version.

print(torch.cuda.is_available()): the output must be True. If it is False, uninstall the currently installed Torch and visit the PyTorch website to reinstall a build with GPU support.

Second, reinstalling CUDA and cuDNN may also help solve the problem.
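Before reinstalling, it can be worth confirming whether the CUDA toolkit itself is present. A minimal sketch, assuming the toolkit puts nvcc on PATH (the display driver alone does not ship nvcc); the helper name cuda_toolkit_check is hypothetical:

```python
import shutil
import subprocess

def cuda_toolkit_check(binary="nvcc"):
    """Return `nvcc --version` output if the CUDA toolkit is on PATH, else None."""
    path = shutil.which(binary)
    if path is None:
        return None
    return subprocess.run([path, "--version"], capture_output=True, text=True).stdout

print(cuda_toolkit_check() or "nvcc not found: the CUDA toolkit does not appear to be installed")
```

If this reports that nvcc is missing, only the driver is installed and libraries that load the runtime DLL directly (like cpm_kernels) will not find a usable cudart.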

@lwh9346

lwh9346 commented Mar 15, 2023

I ran into the same problem. Here is my system information:

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 531.29                 Driver Version: 531.29       CUDA Version: 12.1     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                      TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf            Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 4080       WDDM | 00000000:01:00.0  On |                  N/A |
| 30%   43C    P8               26W / 320W|   1315MiB / 16376MiB |      7%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
print(torch.__version__) #1.13.1+cu116
print(torch.cuda.is_available()) #True

The error output is as follows:

Traceback (most recent call last):
  File "D:\Python\ChatGLM-6B\cli_demo.py", line 6, in <module>
    model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
  File "D:\Python\ChatGLM-6B\venv\lib\site-packages\transformers\models\auto\auto_factory.py", line 455, in from_pretrained
    model_class = get_class_from_dynamic_module(
  File "D:\Python\ChatGLM-6B\venv\lib\site-packages\transformers\dynamic_module_utils.py", line 363, in get_class_from_dynamic_module
    final_module = get_cached_module_file(
  File "D:\Python\ChatGLM-6B\venv\lib\site-packages\transformers\dynamic_module_utils.py", line 274, in get_cached_module_file
    get_cached_module_file(
  File "D:\Python\ChatGLM-6B\venv\lib\site-packages\transformers\dynamic_module_utils.py", line 237, in get_cached_module_file
    modules_needed = check_imports(resolved_module_file)
  File "D:\Python\ChatGLM-6B\venv\lib\site-packages\transformers\dynamic_module_utils.py", line 129, in check_imports
    importlib.import_module(imp)
  File "C:\Users\lwh93\AppData\Local\Programs\Python\Python310\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "D:\Python\ChatGLM-6B\venv\lib\site-packages\cpm_kernels\__init__.py", line 2, in <module>
    from .kernels import *
  File "D:\Python\ChatGLM-6B\venv\lib\site-packages\cpm_kernels\kernels\__init__.py", line 1, in <module>
    from .embedding import embedding_forward, embedding_backward_stage1, embedding_backward_stage2, embedding_step
  File "D:\Python\ChatGLM-6B\venv\lib\site-packages\cpm_kernels\kernels\embedding.py", line 1, in <module>
    from .base import Kernel, DevicePointer, CUDAStream, round_up
  File "D:\Python\ChatGLM-6B\venv\lib\site-packages\cpm_kernels\kernels\base.py", line 5, in <module>
    from ..device import Device
  File "D:\Python\ChatGLM-6B\venv\lib\site-packages\cpm_kernels\device\__init__.py", line 125, in <module>
    _DEVICES = [
  File "D:\Python\ChatGLM-6B\venv\lib\site-packages\cpm_kernels\device\__init__.py", line 126, in <listcomp>
    _Device(i) for i in range(cudart.cudaGetDeviceCount())
  File "D:\Python\ChatGLM-6B\venv\lib\site-packages\cpm_kernels\device\__init__.py", line 113, in __init__
    self.attributes[kw] = cudart.cudaDeviceGetAttribute(idx, self._index)
  File "D:\Python\ChatGLM-6B\venv\lib\site-packages\cpm_kernels\library\base.py", line 83, in wrapper
    raise AttributeError("%s: undefined symbol: %s" % (self.__lib_path, name))
AttributeError: C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common\cudart64_30_9.dll: undefined symbol: cudaDeviceGetAttribute

Judging from the error message, it seems my setup does not meet the CUDA version requirement of the cpm_kernels library.
I will follow up in this thread once I find a solution.

@lwh9346

lwh9346 commented Mar 15, 2023

The problem seems to be caused by CUDA not being installed.
PyTorch can apparently run GPU inference with only the display driver installed, but cpm_kernels cannot; it also needs the CUDA toolkit installed.
Download link below:
CUDA
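This would also explain why the traceback points at PhysX's copy of cudart64_30_9.dll: with no toolkit installed, the only cudart DLL anywhere on PATH is the very old one bundled with PhysX, and it lacks modern symbols like cudaDeviceGetAttribute. A sketch of how such a stale copy gets picked up, assuming the loader searches PATH entries in order (the function name find_cudart_dlls is made up for illustration):

```python
import os
from pathlib import Path

def find_cudart_dlls(path_env=None):
    """List cudart64_*.dll files found in PATH entries, in search order.

    For a loader that walks PATH, the first hit wins, so a stale copy
    (e.g. PhysX's cudart64_30_9.dll) can shadow the toolkit's runtime.
    """
    entries = (path_env if path_env is not None else os.environ.get("PATH", "")).split(os.pathsep)
    hits = []
    for entry in entries:
        p = Path(entry)
        if p.is_dir():
            hits.extend(str(dll) for dll in p.glob("cudart64_*.dll"))
    return hits

print(find_cudart_dlls())
```

Installing the CUDA toolkit puts a current cudart on PATH, which resolves the missing-symbol errors.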

@zeta-zl
Author

zeta-zl commented Mar 15, 2023

Solved after reinstalling everything. Thank you very much!

@zeta-zl zeta-zl closed this as completed Mar 15, 2023