recommended CUDA, cuDNN and onnxruntime for Windows 11? #604
This "D:\a_work" directory really confuses me. Not only is it giving me errors when using rembg; even when I try to run ComfyUI's own pip.exe in the command prompt, I get an error referencing that mysterious directory. It doesn't exist. The pip.exe error was fixed when I created a symbolic link from "D:\a" to Comfy's installation directory, but that was just a band-aid fix and did nothing for the rembg error.
This issue is stale because it has been open for 30 days with no activity.
This issue was closed because it has been inactive for 14 days since being marked as stale.
I have been encountering this bug recently as well. Full message is:
Pretty sure I have cuDNN installed, so I'm not sure what's up.
When I first installed rembg it worked on an image. Then it stopped working when I tried the "p" parameter. Not sure what I did. I keep getting errors that my NVIDIA libraries aren't talking to each other. Like:

```
2024-03-07 17:04:54.4301199 [E:onnxruntime:Default, provider_bridge_ort.cc:1534 onnxruntime::TryGetProviderInfo_TensorRT] D:\a_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1209 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\Users\mediapc\anaconda3\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll"
```

then...

```
RuntimeError: D:\a_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:857 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded. Please install the correct version of CUDA andcuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
```
Any advice greatly appreciated!
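One way to narrow down `LoadLibrary failed with error 126` messages like the ones above is to check whether the DLLs onnxruntime is trying to load can actually be found on `PATH`. A minimal diagnostic sketch, assuming a Python environment on Windows; the DLL names below are assumptions for a CUDA 11.x / cuDNN 8.x setup, so substitute the versions listed on the onnxruntime requirements page for your build:

```python
import shutil

# Assumed DLL names for CUDA 11.x / cuDNN 8.x; adjust to match the
# versions your onnxruntime build expects (see its requirements page).
required = ["cudart64_110.dll", "cublas64_11.dll", "cudnn64_8.dll"]

def missing_dlls(names):
    """Return the subset of DLL names that cannot be located on PATH."""
    return [n for n in names if shutil.which(n) is None]

# Anything printed here is a library onnxruntime will fail to load.
print(missing_dlls(required))
```

If this prints any names, the corresponding CUDA or cuDNN `bin` directory is not on `PATH` (or the wrong version is installed), which matches the `CUDA_PATH is set but CUDA wasnt able to be loaded` error.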