Model switch failed (切换模型失败) #17

Closed · zmengle opened this issue May 24, 2023 · 5 comments
zmengle commented May 24, 2023

INFO:     Started server process [13476]
INFO:     Waiting for application startup.
torch found: F:\video\rwkv\py310\Lib\site-packages\torch\lib
torch set
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO:     127.0.0.1:6372 - "GET / HTTP/1.1" 200 OK
INFO:     127.0.0.1:6372 - "GET / HTTP/1.1" 200 OK
INFO:     127.0.0.1:6372 - "GET /status HTTP/1.1" 200 OK
max_tokens=4100 temperature=1.2 top_p=0.5 presence_penalty=0.4 frequency_penalty=0.4
INFO:     127.0.0.1:6372 - "POST /update-config HTTP/1.1" 200 OK
RWKV_JIT_ON 1 RWKV_CUDA_ON 0 RESCALE_LAYER 6

Loading models/RWKV-4-Raven-3B-v11-Eng49%-Chn49%-Jpn1%-Other1%-20230429-ctx4096.pth ...
Strategy: (total 32+1=33 layers)

* cuda [float16, uint8], store 33 layers
    0-cuda-float16-uint8 1-cuda-float16-uint8 2-cuda-float16-uint8 3-cuda-float16-uint8 4-cuda-float16-uint8 5-cuda-float16-uint8 6-cuda-float16-uint8 7-cuda-float16-uint8 8-cuda-float16-uint8 9-cuda-float16-uint8 10-cuda-float16-uint8 11-cuda-float16-uint8 12-cuda-float16-uint8 13-cuda-float16-uint8 14-cuda-float16-uint8 15-cuda-float16-uint8 16-cuda-float16-uint8 17-cuda-float16-uint8 18-cuda-float16-uint8 19-cuda-float16-uint8 20-cuda-float16-uint8 21-cuda-float16-uint8 22-cuda-float16-uint8 23-cuda-float16-uint8 24-cuda-float16-uint8 25-cuda-float16-uint8 26-cuda-float16-uint8 27-cuda-float16-uint8 28-cuda-float16-uint8 29-cuda-float16-uint8 30-cuda-float16-uint8 31-cuda-float16-uint8 32-cuda-float16-uint8
    emb.weight f16 cpu 50277 2560
1 validation error for RWKV
root
  Torch not compiled with CUDA enabled (type=assertion_error)
INFO:     127.0.0.1:6375 - "POST /switch-model HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "F:\video\rwkv\backend-python\routes\config.py", line 36, in switch_model
    RWKV(
  File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for RWKV
root
  Torch not compiled with CUDA enabled (type=assertion_error)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "F:\video\rwkv\py310\Lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 428, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "F:\video\rwkv\py310\Lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "F:\video\rwkv\py310\Lib\site-packages\fastapi\applications.py", line 276, in __call__
    await super().__call__(scope, receive, send)
  File "F:\video\rwkv\py310\Lib\site-packages\starlette\applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "F:\video\rwkv\py310\Lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
    raise exc
  File "F:\video\rwkv\py310\Lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "F:\video\rwkv\py310\Lib\site-packages\starlette\middleware\cors.py", line 92, in __call__
    await self.simple_response(scope, receive, send, request_headers=headers)
  File "F:\video\rwkv\py310\Lib\site-packages\starlette\middleware\cors.py", line 147, in simple_response
    await self.app(scope, receive, send)
  File "F:\video\rwkv\py310\Lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "F:\video\rwkv\py310\Lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "F:\video\rwkv\py310\Lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
    raise e
  File "F:\video\rwkv\py310\Lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "F:\video\rwkv\py310\Lib\site-packages\starlette\routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "F:\video\rwkv\py310\Lib\site-packages\starlette\routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "F:\video\rwkv\py310\Lib\site-packages\starlette\routing.py", line 66, in app
    response = await func(request)
  File "F:\video\rwkv\py310\Lib\site-packages\fastapi\routing.py", line 237, in app
    raw_response = await run_endpoint_function(
  File "F:\video\rwkv\py310\Lib\site-packages\fastapi\routing.py", line 165, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "F:\video\rwkv\py310\Lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "F:\video\rwkv\py310\Lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "F:\video\rwkv\py310\Lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "F:\video\rwkv\py310\Lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "F:\video\rwkv\backend-python\routes\config.py", line 45, in switch_model
    raise HTTPException(status.HTTP_500_INTERNAL_SERVER_ERROR, "failed to load")
AttributeError: 'function' object has no attribute 'HTTP_500_INTERNAL_SERVER_ERROR'
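The final AttributeError is a second, unrelated bug: inside routes/config.py the name `status` is bound to a function rather than FastAPI's `status` module, so the constant lookup fails. A minimal stdlib sketch of that shadowing and a fix, with hypothetical names (`broken`/`fixed`) and `http.HTTPStatus` standing in for `fastapi.status`:

```python
from http import HTTPStatus  # stdlib stand-in for fastapi.status


def broken():
    # A function reusing the name `status` shadows the status module,
    # reproducing the AttributeError seen at the end of the traceback.
    def status():  # e.g. a route handler that reuses the name
        pass

    try:
        return status.HTTP_500_INTERNAL_SERVER_ERROR
    except AttributeError as e:
        return str(e)


def fixed():
    # With the real module (or an unshadowed alias), the constant resolves.
    return int(HTTPStatus.INTERNAL_SERVER_ERROR)
```

Renaming the clashing function (or importing the module as `http_status`) restores the intended `HTTPException(500, "failed to load")` response.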

josStorer (Owner) commented
Check the torch version number under the py310\Lib\site-packages directory. It should be 1.13, so there should be a folder named torch-1.13.1+cu117.dist-info.
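A quick way to run that check without browsing the folder: read the installed wheel's version string, since CUDA builds carry a `+cuNNN` local tag (a sketch; `torch_build` and `is_cuda_build` are illustrative helper names, not part of the project):

```python
from importlib.metadata import PackageNotFoundError, version


def torch_build() -> str:
    """Return the installed torch version string, or 'not installed'."""
    try:
        return version("torch")
    except PackageNotFoundError:
        return "not installed"


def is_cuda_build(ver: str) -> bool:
    # A CUDA wheel is versioned like '1.13.1+cu117';
    # a CPU-only wheel reads '2.0.1' or '2.0.1+cpu'.
    return "+cu" in ver


print(torch_build())
```

If `is_cuda_build(torch_build())` is false, the "Torch not compiled with CUDA enabled" error above is expected for any cuda strategy.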


zmengle (Author) commented May 25, 2023

OK, I'll give it a try.


zmengle (Author) commented May 25, 2023

(screenshot) The version is torch-2.0.1.dist-info.
(screenshot) I couldn't find the cu one.

josStorer (Owner) commented

Delete the two torch directories, then let it reinstall the dependencies on its own.
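That cleanup can be sketched as follows, assuming the install root from the log above (the `purge_torch` helper and the hard-coded path are illustrative; the real fix is simply deleting the two folders and relaunching so the app reinstalls the CUDA wheel):

```python
import shutil
from pathlib import Path


def purge_torch(site_packages: str) -> list[str]:
    """Remove the torch package dirs so the launcher reinstalls the right wheel."""
    removed = []
    site = Path(site_packages)
    # Folder names match this issue: a CPU-only torch-2.0.1 install.
    for name in ("torch", "torch-2.0.1.dist-info"):
        target = site / name
        if target.is_dir():
            shutil.rmtree(target)
            removed.append(name)
    return removed


# Example, using the path from the issue's log:
# purge_torch(r"F:\video\rwkv\py310\Lib\site-packages")
```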


zmengle (Author) commented May 25, 2023

OK, I just saw another fix and am working on it. Thanks!
