[BUG/Help] RuntimeError: Library cuda is not initialized when loading the model #839
Comments
Looking through the cpm_kernels source code, at line 21:

# Line 21
def unix_find_lib(name):
    cuda_path = os.environ.get("CUDA_PATH", None)
    if cuda_path is not None:
        lib_name = os.path.join(cuda_path, "lib64", "lib%s.so" % name)
        if os.path.exists(lib_name):
            return lib_name

    cuda_path = "/usr/local/cuda"
    if cuda_path is not None:
        lib_name = os.path.join(cuda_path, "lib64", "lib%s.so" % name)
        if os.path.exists(lib_name):
            return lib_name
# Line 41
class Lib:
    def __init__(self, name):
        self.__name = name
        if sys.platform.startswith("win"):
            lib_path = windows_find_lib(self.__name)
            self.__lib_path = lib_path
            if lib_path is not None:
                self.__lib = ctypes.WinDLL(lib_path)
            else:
                self.__lib = None
        elif sys.platform.startswith("linux"):
            lib_path = unix_find_lib(self.__name)
            self.__lib_path = lib_path
            if lib_path is not None:
                self.__lib = ctypes.cdll.LoadLibrary(lib_path)
            else:
                self.__lib = None
        else:
            raise RuntimeError("Unknown platform: %s" % sys.platform)

I modified the code at line 52 (lib_path = unix_find_lib(self.__name)) by adding a print:

lib_path = unix_find_lib(self.__name)
# Edit Here
print(name, ':', lib_path)
self.__lib_path = lib_path

After running the program again, the output showed that the cuda library was the one missing (why?)
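The search logic quoted above can be exercised on its own to see which paths are actually checked. A minimal sketch (the function name `find_cuda_lib` and the `search_roots` override are my own, added here for testability; the search order mirrors the quoted cpm_kernels code):

```python
import os

def find_cuda_lib(name, search_roots=None):
    # Replicates the quoted search order: $CUDA_PATH/lib64 first,
    # then /usr/local/cuda/lib64; returns the first lib<name>.so that exists.
    if search_roots is None:
        search_roots = []
        cuda_path = os.environ.get("CUDA_PATH")
        if cuda_path:
            search_roots.append(cuda_path)
        search_roots.append("/usr/local/cuda")
    for root in search_roots:
        candidate = os.path.join(root, "lib64", "lib%s.so" % name)
        if os.path.exists(candidate):
            return candidate
    return None

print("cuda :", find_cuda_lib("cuda"))
```

If this prints None, the library cannot be found under either root, which matches the failure reported above.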
After checking, it turned out to be caused by the CUDA directory
I also found that the libcuda.so file was missing from /usr/local/cuda/lib64. I created a symlink like this: ln -s /usr/lib/x86_64-linux-gnu/libcuda.so /usr/local/cuda/lib64/libcuda.so. After creating the symlink, the problem was solved.
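The symlink fix above can be wrapped so it only acts when the link is actually missing. A sketch (the `link_libcuda` helper is my own; the driver library path varies by distro, so locate yours first):

```shell
# Usage: link_libcuda DRIVER_LIB CUDA_LIB64_DIR
# Creates CUDA_LIB64_DIR/libcuda.so pointing at DRIVER_LIB, if it is missing.
link_libcuda() {
    if [ -e "$1" ] && [ ! -e "$2/libcuda.so" ]; then
        ln -s "$1" "$2/libcuda.so"
        echo "linked: $2/libcuda.so -> $1"
    fi
}

# Typical invocation (find your driver copy with: find /usr/lib -name 'libcuda.so*';
# writing into /usr/local/cuda usually needs sudo):
#   link_libcuda /usr/lib/x86_64-linux-gnu/libcuda.so /usr/local/cuda/lib64
```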
That method works well, thanks!
Is there an existing issue for this?
Current Behavior
I am testing with code based on the official chatglm-6b-int4 example.
I have used the Yolov5 project to run object detection, which confirmed that CUDA works on this machine.
I also added a CUDA check at the beginning of the GLM test program, and its output is normal.
However, while actually running the model, when execution reaches the line
response, history = model.chat(tokenizer, "你好", history=[])
the cpm_kernels library raises RuntimeError: Library cuda is not initialized. (I searched for related issues online but could not find the same situation, so I am asking for help here.)
The full error output is as follows:
Expected Behavior
No response
Steps To Reproduce
Directory structure:
test.py (or cli_demo.py)
chatglm-6b-int4
---- the complete project and weights cloned from huggingface/THUDM/ChatGLM-6B-int4
Test code:
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("chatglm-6b-int4", trust_remote_code=True)
model = AutoModel.from_pretrained("chatglm-6b-int4", trust_remote_code=True).half().cuda()
print("First:")
response, history = model.chat(tokenizer, "你好", history=[])
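Before running the snippet above, one way to narrow down this failure is to check whether libcuda.so even exists and can be loaded at the paths cpm_kernels searches on Linux. A standalone sketch, independent of ChatGLM (the `can_dlopen` helper is my own):

```python
import ctypes
import os

def can_dlopen(path):
    # True if ctypes can actually load the shared library at `path`
    try:
        ctypes.CDLL(path)
        return True
    except OSError:
        return False

# Check the locations cpm_kernels searches on Linux
for root in (os.environ.get("CUDA_PATH"), "/usr/local/cuda"):
    if root:
        candidate = os.path.join(root, "lib64", "libcuda.so")
        print(candidate, "loadable:", can_dlopen(candidate))
```

If every candidate reports False, the "Library cuda is not initialized" error is expected: the Lib wrapper ends up with a None handle and fails on first use.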
Environment
Anything else?
No response