
ReActor won't work because of CUDA / onnxruntime #388

Open
3 tasks done
GIadstone opened this issue Mar 8, 2024 · 4 comments

Comments

@GIadstone

First, confirm

  • I have read the instructions carefully
  • I have searched the existing issues
  • I have updated the extension to the latest version

What happened?

ReActor stops working during face analysis; the full console log is in the "Relevant console log" section below.


I've added the correct CUDA directories to PATH and set CUDA_HOME.
I've updated onnxruntime for CUDA 12.3.
This bug appeared after I updated to SD 1.8.

Any help is welcome.
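A quick way to confirm what the webui's Python environment actually sees (a diagnostic sketch, not part of the original report; run it with the venv's own python.exe so it inspects the same packages ReActor uses):

```python
# Diagnostic sketch: check whether ONNX Runtime can load its CUDA provider.
# Run with the webui venv's Python so it inspects the same environment.
try:
    import onnxruntime as ort

    print("onnxruntime version:", ort.__version__)
    providers = ort.get_available_providers()
    print("available providers:", providers)
    if "CUDAExecutionProvider" not in providers:
        print("CUDA provider unavailable: check CUDA/cuDNN versions and PATH")
except ImportError:
    print("onnxruntime is not installed in this environment")
```

If `CUDAExecutionProvider` is missing from the printed list, onnxruntime could not load `onnxruntime_providers_cuda.dll` (or its CUDA/cuDNN dependencies), which matches the error 126 in the log above.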

Steps to reproduce the problem

  1. Go to ....
  2. Press ....
  3. ...

Sysinfo

Windows 10 - NVIDIA RTX 2070 - Intel i5 6th gen - Stable Diffusion - ReActor

Relevant console log

16:06:31 - ReActor - STATUS - Working: source face index [0], target face index [0]████| 29/29 [00:17<00:00,  1.52it/s]
16:06:31 - ReActor - STATUS - Analyzing Source Image...
2024-03-08 16:06:31.5076937 [E:onnxruntime:Default, provider_bridge_ort.cc:1548 onnxruntime::TryGetProviderInfo_CUDA] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1209 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\stable-diffusion-webui-1.8.0\venv\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

*************** EP Error ***************
EP Error D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:857 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded. Please install the correct version of CUDA andcuDNN as mentioned in the GPU requirements page  (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements),  make sure they're in the PATH, and that your GPU is supported.
 when using ['CUDAExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
****************************************
2024-03-08 16:06:31.6020997 [E:onnxruntime:Default, provider_bridge_ort.cc:1548 onnxruntime::TryGetProviderInfo_CUDA] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1209 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\stable-diffusion-webui-1.8.0\venv\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

*** Error running postprocess_image: C:\stable-diffusion-webui-1.8.0\extensions\sd-webui-reactor\scripts\reactor_faceswap.py
    Traceback (most recent call last):
      File "C:\stable-diffusion-webui-1.8.0\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in __init__
        self._create_inference_session(providers, provider_options, disabled_optimizers)
      File "C:\stable-diffusion-webui-1.8.0\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 483, in _create_inference_session
        sess.initialize_session(providers, provider_options, disabled_optimizers)
    RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:857 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded. Please install the correct version of CUDA andcuDNN as mentioned in the GPU requirements page  (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements),  make sure they're in the PATH, and that your GPU is supported.


    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "C:\stable-diffusion-webui-1.8.0\modules\scripts.py", line 856, in postprocess_image
        script.postprocess_image(p, pp, *script_args)
      File "C:\stable-diffusion-webui-1.8.0\extensions\sd-webui-reactor\scripts\reactor_faceswap.py", line 391, in postprocess_image
        result, output, swapped = swap_face(
      File "C:\stable-diffusion-webui-1.8.0\extensions\sd-webui-reactor\scripts\reactor_swapper.py", line 515, in swap_face
        source_faces = analyze_faces(source_img)
      File "C:\stable-diffusion-webui-1.8.0\extensions\sd-webui-reactor\scripts\reactor_swapper.py", line 274, in analyze_faces
        face_analyser = copy.deepcopy(getAnalysisModel())
      File "C:\stable-diffusion-webui-1.8.0\extensions\sd-webui-reactor\scripts\reactor_swapper.py", line 118, in getAnalysisModel
        ANALYSIS_MODEL = insightface.app.FaceAnalysis(
      File "C:\stable-diffusion-webui-1.8.0\extensions\sd-webui-reactor\scripts\console_log_patch.py", line 48, in patched_faceanalysis_init
        model = model_zoo.get_model(onnx_file, **kwargs)
      File "C:\stable-diffusion-webui-1.8.0\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 96, in get_model
        model = router.get_model(providers=providers, provider_options=provider_options)
      File "C:\stable-diffusion-webui-1.8.0\extensions\sd-webui-reactor\scripts\console_log_patch.py", line 21, in patched_get_model
        session = PickableInferenceSession(self.onnx_file, **kwargs)
      File "C:\stable-diffusion-webui-1.8.0\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 25, in __init__
        super().__init__(model_path, **kwargs)
      File "C:\stable-diffusion-webui-1.8.0\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 432, in __init__
        raise fallback_error from e
      File "C:\stable-diffusion-webui-1.8.0\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 427, in __init__
        self._create_inference_session(self._fallback_providers, None)
      File "C:\stable-diffusion-webui-1.8.0\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 483, in _create_inference_session
        sess.initialize_session(providers, provider_options, disabled_optimizers)
    RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:857 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded. Please install the correct version of CUDA andcuDNN as mentioned in the GPU requirements page  (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements),  make sure they're in the PATH, and that your GPU is supported.

Additional information

No response

@GIadstone GIadstone added bug Something isn't working new labels Mar 8, 2024
@Gourieff
Owner

Gourieff commented Mar 9, 2024

I've updated onxruntime for cuda 12.3

https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements
Please roll back to CUDA 12.2 or 12.1.
There is no mention of CUDA 12.3 support for ORT-GPU.
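LoadLibrary error 126 ("the specified module could not be found") typically means the provider DLL itself was found but one of its CUDA/cuDNN dependency DLLs was not on PATH. A small sketch (the environment variable names are just the conventional ones; adjust for your setup) to print what the loader will see before rolling back:

```python
# Sketch: print the CUDA-related environment the onnxruntime loader will see.
import os

for var in ("CUDA_PATH", "CUDA_HOME", "CUDNN_PATH"):
    print(f"{var} = {os.environ.get(var, '<not set>')}")

# Entries on PATH that look like CUDA/cuDNN install directories.
entries = os.environ.get("PATH", "").split(os.pathsep)
cuda_like = [p for p in entries if "cuda" in p.lower() or "cudnn" in p.lower()]
print("CUDA-looking PATH entries:", cuda_like if cuda_like else "<none>")
```

If CUDA_PATH points at a 12.3 toolkit while the installed onnxruntime-gpu build expects an older CUDA, the provider DLL will fail to load exactly as shown in the log.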

@Gourieff Gourieff added ⛔ dependencies conflict and removed bug Something isn't working new labels Mar 9, 2024
@GIadstone
Author

GIadstone commented Mar 9, 2024 via email

@dislive

dislive commented Mar 14, 2024

Same problem after the update to 1.8.
CUDA 12.2

@drdancm

drdancm commented Mar 22, 2024

I've had tons of trouble with A1111 ever since the update from 1.6 (I'm running inside Stability Matrix), even though ReActor was working fine with 1.6. Then I installed Stable Diffusion WebUI Forge, which solved a lot of problems, so now ReActor works as well as ever, possibly even better.

https://github.com/lllyasviel/stable-diffusion-webui-forge
