CUDA problem help please #16
Install the pytorch-gpu module from here: https://pytorch.org/get-started/locally/
Hi, thanks for the response. I'm surely doing something wrong; I'm not an expert. I don't know if I ran the torch command in the right place/folder. If you can help further, that would be super. Thank you!!
I've tried downgrading the NVIDIA toolkit, since I had 12.1. I have now reinstalled toolkit version 11.8 and reinstalled the related PyTorch. This is what my system shows:

C:\>python -c "import torch; print('PyTorch version:', torch.__version__)"
C:\>python -c "import torchvision; print('Torchvision version:', torchvision.__version__)"
C:\>nvcc --version
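For reference, the three checks above can be collapsed into one Python snippet. This is a minimal sketch (the `describe_install` helper is illustrative, not part of Lip_Wise or PyTorch) that degrades gracefully when a package is missing:

```python
# Sketch: report installed PyTorch/Torchvision versions and CUDA support.
# describe_install() is an illustrative helper, not a Lip_Wise function.

def describe_install() -> str:
    lines = []
    for name in ("torch", "torchvision"):
        try:
            mod = __import__(name)
            lines.append(f"{name} {mod.__version__}")
        except ImportError:
            lines.append(f"{name}: not installed")
    try:
        import torch
        lines.append(f"CUDA available: {torch.cuda.is_available()}")
    except ImportError:
        lines.append("CUDA available: unknown (torch missing)")
    return "\n".join(lines)

print(describe_install())
```

Running this inside the project's activated environment (rather than the system Python) shows whether the environment itself has a CUDA-enabled build.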
Hello, I don't think this is a problem with your CUDA toolkit; it is an issue with your PyTorch installation. Did you run the setup script?
Yes I did. I also tried deleting the folder and starting a fresh setup.
Do one thing: activate the environment and install pytorch-gpu with pip.
Yesss, that solved the issue!! Thank you very much.
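For anyone hitting the same error: the fix was installing the CUDA build of PyTorch inside the activated environment. A hedged sketch of how that pip invocation is usually assembled; the `cuda_pytorch_install_cmd` helper is hypothetical, and the `cu118` tag is an assumption matching the CUDA 11.8 toolkit mentioned above:

```python
import sys

# Sketch only: build (but do not run) the pip command for a CUDA-enabled
# PyTorch wheel. "cu118" is assumed here because CUDA toolkit 11.8 was
# installed earlier in this thread; adjust the tag to your toolkit version.
def cuda_pytorch_install_cmd(cuda_tag: str = "cu118") -> list:
    return [
        sys.executable, "-m", "pip", "install",
        "torch", "torchvision",
        "--index-url", f"https://download.pytorch.org/whl/{cuda_tag}",
    ]

print(" ".join(cuda_pytorch_install_cmd()))
```

Using `sys.executable -m pip` ensures the install lands in the currently activated environment rather than the system Python, which was the root cause here.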
It gives me this error. I should have the CUDA toolkit installed already:
To create a public link, set share=True in launch().
ERROR: Exception in ASGI application
Traceback (most recent call last):
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\uvicorn\protocols\http\httptools_impl.py", line 411, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 69, in __call__
    return await self.app(scope, receive, send)
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\fastapi\applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\starlette\applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\starlette\middleware\errors.py", line 186, in __call__
    raise exc
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\starlette\middleware\errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\gradio\route_utils.py", line 707, in __call__
    await self.app(scope, receive, send)
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\starlette\middleware\exceptions.py", line 65, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\starlette\routing.py", line 756, in __call__
    await self.middleware_stack(scope, receive, send)
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\starlette\routing.py", line 776, in app
    await route.handle(scope, receive, send)
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\starlette\routing.py", line 297, in handle
    await self.app(scope, receive, send)
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\starlette\routing.py", line 77, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\starlette\routing.py", line 75, in app
    await response(scope, receive, send)
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\starlette\responses.py", line 352, in __call__
    await send(
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\starlette\_exception_handler.py", line 50, in sender
    await send(message)
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\starlette\_exception_handler.py", line 50, in sender
    await send(message)
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\starlette\middleware\errors.py", line 161, in _send
    await send(message)
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\uvicorn\protocols\http\httptools_impl.py", line 549, in send
    raise RuntimeError("Response content shorter than Content-Length")
RuntimeError: Response content shorter than Content-Length
Traceback (most recent call last):
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\gradio\queueing.py", line 527, in process_events
    response = await route_utils.call_process_api(
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\gradio\route_utils.py", line 270, in call_process_api
    output = await app.get_blocks().process_api(
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\gradio\blocks.py", line 1847, in process_api
    result = await self.call_function(
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\gradio\blocks.py", line 1433, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\anyio\_backends\_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\anyio\_backends\_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\gradio\utils.py", line 805, in wrapper
    response = f(*args, **kwargs)
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\infer.py", line 58, in infer_image
    free_memory = torch.cuda.mem_get_info()[0]
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\torch\cuda\memory.py", line 661, in mem_get_info
    device = torch.cuda.current_device()
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\torch\cuda\__init__.py", line 769, in current_device
    _lazy_init()
  File "D:\_A.I._2024\Lip_Wise-main\Lip_Wise-main\.lip-wise\lib\site-packages\torch\cuda\__init__.py", line 289, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
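The crash happens because infer.py calls torch.cuda.mem_get_info() unconditionally while the installed torch wheel is CPU-only, so CUDA's lazy init raises the assertion. A defensive sketch of the same query, guarded by torch.cuda.is_available(); `pick_device` and `free_vram_bytes` are hypothetical names for illustration, not Lip_Wise functions:

```python
# Hypothetical sketch of a CUDA-guarded VRAM query. pick_device() and
# free_vram_bytes() are illustrative names, not part of Lip_Wise.
try:
    import torch
except ImportError:
    torch = None  # allows the sketch to run even without torch installed

def pick_device() -> str:
    # torch.cuda.is_available() is False on CPU-only wheels, so the
    # "Torch not compiled with CUDA enabled" assertion is never reached.
    if torch is not None and torch.cuda.is_available():
        return "cuda"
    return "cpu"

def free_vram_bytes() -> int:
    if pick_device() == "cuda":
        # Safe: only called when a CUDA device is actually usable.
        return torch.cuda.mem_get_info()[0]
    return 0  # no VRAM to report on CPU

print(pick_device(), free_vram_bytes())
```

With a guard like this, a CPU-only install would fall back gracefully instead of crashing the Gradio request, though the real fix (as above) is installing the CUDA build of PyTorch in the environment.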