
[issue] Torch not compiled with CUDA enabled (type=assertion_error) #4

Closed
panwanke opened this issue May 23, 2023 · 2 comments

@panwanke

Laptop with an RTX 4070 (8 GB). I hit the following error when starting the model:

INFO: Started server process [9288]
INFO: Waiting for application startup.
torch found: D:\04programs\RWKV_LMM\py310\Lib\site-packages\torch\lib
torch set
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: 127.0.0.1:53777 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:53777 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:53777 - "GET /status HTTP/1.1" 200 OK
max_tokens=4100 temperature=1.2 top_p=0.5 presence_penalty=0.4 frequency_penalty=0.4
INFO: 127.0.0.1:53777 - "POST /update-config HTTP/1.1" 200 OK
RWKV_JIT_ON 1 RWKV_CUDA_ON 0 RESCALE_LAYER 6

Loading models/RWKV-4-Raven-3B-v11-Eng49%-Chn49%-Jpn1%-Other1%-20230429-ctx4096.pth ...
Strategy: (total 32+1=33 layers)

  • cuda [float16, uint8], store 33 layers
    0-cuda-float16-uint8 1-cuda-float16-uint8 2-cuda-float16-uint8 3-cuda-float16-uint8 4-cuda-float16-uint8 5-cuda-float16-uint8 6-cuda-float16-uint8 7-cuda-float16-uint8 8-cuda-float16-uint8 9-cuda-float16-uint8 10-cuda-float16-uint8 11-cuda-float16-uint8 12-cuda-float16-uint8 13-cuda-float16-uint8 14-cuda-float16-uint8 15-cuda-float16-uint8 16-cuda-float16-uint8 17-cuda-float16-uint8 18-cuda-float16-uint8 19-cuda-float16-uint8 20-cuda-float16-uint8 21-cuda-float16-uint8 22-cuda-float16-uint8 23-cuda-float16-uint8 24-cuda-float16-uint8 25-cuda-float16-uint8 26-cuda-float16-uint8 27-cuda-float16-uint8 28-cuda-float16-uint8 29-cuda-float16-uint8 30-cuda-float16-uint8 31-cuda-float16-uint8 32-cuda-float16-uint8
    emb.weight f16 cpu 50277 2560
    1 validation error for RWKV
    root
    Torch not compiled with CUDA enabled (type=assertion_error)
    INFO: 127.0.0.1:53775 - "POST /switch-model HTTP/1.1" 500 Internal Server Error
    ERROR: Exception in ASGI application
    Traceback (most recent call last):
    File "D:\04programs\RWKV_LMM\backend-python\routes\config.py", line 36, in switch_model
    RWKV(
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
    pydantic.error_wrappers.ValidationError: 1 validation error for RWKV
    root
    Torch not compiled with CUDA enabled (type=assertion_error)
@josStorer
Owner

Delete the site-packages\torch directory and run again; you will be prompted to install the dependencies.
I believe the second window that pops up during automatic dependency installation was closed abnormally, which caused the CPU-only build of PyTorch to be installed.
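To confirm the diagnosis before reinstalling, a quick check of which PyTorch build is present can help. This is a minimal sketch, not part of the project; run it in the same Python environment the server uses (the py310 directory in the log above):

```python
# Check whether the installed PyTorch is a CUDA build (a sketch).
import importlib.util

if importlib.util.find_spec("torch") is None:
    print("torch is not installed")
else:
    import torch
    # A "+cpu" suffix in the version string indicates the CPU-only build.
    print("torch version:", torch.__version__)
    # False here means torch cannot see CUDA, matching the assertion_error above.
    print("CUDA available:", torch.cuda.is_available())
```

If it reports a `+cpu` version or `CUDA available: False`, reinstalling a CUDA build manually is another option besides re-running the auto-installer, e.g. `pip install torch --index-url https://download.pytorch.org/whl/cu118` for a CUDA 11.8 wheel (adjust the index URL to your CUDA version).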

@panwanke
Author

> Delete the site-packages\torch directory and run again; you will be prompted to install the dependencies. I believe the second window that pops up during automatic dependency installation was closed abnormally, which caused the CPU-only build of PyTorch to be installed.

That was indeed the problem. Thanks for your reply.
