
CUDA_PATH is set but CUDA wasn't able to be loaded. #32

Closed
RickyWang111 opened this issue Aug 25, 2023 · 0 comments


RickyWang111 commented Aug 25, 2023

Python 3.10.11
RTX 3090 (24 GB)

My install steps:

```shell
git clone facefusion
pip install -r requirements.txt
pip uninstall onnxruntime onnxruntime-gpu
pip install onnxruntime-gpu==1.15.1
python run.py --execution-providers cuda
```
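For anyone debugging the same error: a stdlib-only sketch like the one below can check whether the CUDA/cuDNN DLLs are actually reachable from `PATH`, which is what the ONNX Runtime error is complaining about. The helper name is hypothetical, and the DLL names are the ones onnxruntime-gpu 1.15.x (CUDA 11.x) typically needs on Windows; adjust them for your CUDA version.

```python
import os

# DLLs onnxruntime-gpu 1.15.x (CUDA 11.x) commonly needs on Windows
# (assumption; check the ONNX Runtime CUDA requirements page for your version).
REQUIRED_DLLS = ["cudart64_110.dll", "cublas64_11.dll", "cudnn64_8.dll"]

def find_missing_dlls(path_dirs, required=REQUIRED_DLLS):
    """Return the required DLLs that appear in none of the given directories."""
    found = set()
    for directory in path_dirs:
        try:
            entries = {name.lower() for name in os.listdir(directory)}
        except OSError:
            continue  # skip nonexistent or unreadable PATH entries
        found |= {dll for dll in required if dll.lower() in entries}
    return [dll for dll in required if dll not in found]

if __name__ == "__main__":
    print("CUDA_PATH =", os.environ.get("CUDA_PATH"))
    missing = find_missing_dlls(os.environ.get("PATH", "").split(os.pathsep))
    print("Missing DLLs:", missing or "none")
```

If any DLL is reported missing, adding the CUDA `bin` and cuDNN directories to `PATH` (or installing the matching CUDA/cuDNN versions) is usually the fix.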


```
The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "F:\__4__code\gc_python\facefusion\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "F:\__4__code\gc_python\facefusion\venv\lib\site-packages\gradio\blocks.py", line 1435, in process_api
    result = await self.call_function(
  File "F:\__4__code\gc_python\facefusion\venv\lib\site-packages\gradio\blocks.py", line 1107, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "F:\__4__code\gc_python\facefusion\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "F:\__4__code\gc_python\facefusion\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "F:\__4__code\gc_python\facefusion\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "F:\__4__code\gc_python\facefusion\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "F:\__4__code\gc_python\facefusion\facefusion\uis\components\preview.py", line 89, in update
    preview_frame = extract_preview_frame(temp_frame)
  File "F:\__4__code\gc_python\facefusion\facefusion\uis\components\preview.py", line 97, in extract_preview_frame
    source_face = get_one_face(cv2.imread(facefusion.globals.source_path)) if facefusion.globals.source_path else None
  File "F:\__4__code\gc_python\facefusion\facefusion\face_analyser.py", line 30, in get_one_face
    many_faces = get_many_faces(frame)
  File "F:\__4__code\gc_python\facefusion\facefusion\face_analyser.py", line 41, in get_many_faces
    faces = get_face_analyser().get(frame)
  File "F:\__4__code\gc_python\facefusion\facefusion\face_analyser.py", line 18, in get_face_analyser
    FACE_ANALYSER = insightface.app.FaceAnalysis(name = 'buffalo_l', providers = facefusion.globals.execution_providers)
  File "F:\__4__code\gc_python\facefusion\venv\lib\site-packages\insightface\app\face_analysis.py", line 31, in __init__
    model = model_zoo.get_model(onnx_file, **kwargs)
  File "F:\__4__code\gc_python\facefusion\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 96, in get_model
    model = router.get_model(providers=providers, provider_options=provider_options)
  File "F:\__4__code\gc_python\facefusion\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 40, in get_model
    session = PickableInferenceSession(self.onnx_file, **kwargs)
  File "F:\__4__code\gc_python\facefusion\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 25, in __init__
    super().__init__(model_path, **kwargs)
  File "F:\__4__code\gc_python\facefusion\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 394, in __init__
    raise fallback_error from e
  File "F:\__4__code\gc_python\facefusion\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 389, in __init__
    self._create_inference_session(self._fallback_providers, None)
  File "F:\__4__code\gc_python\facefusion\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 435, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:636 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
```
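Note that the `RuntimeError` is raised from ONNX Runtime's fallback path (`raise fallback_error from e`), i.e. after the CUDA provider already failed to load. A common defensive pattern (a sketch, not the actual facefusion code; the helper name is hypothetical, while the provider strings are real ONNX Runtime identifiers) is to always keep `CPUExecutionProvider` last in the list so session creation can still succeed when the CUDA libraries are missing:

```python
def with_cpu_fallback(providers):
    """Return a copy of the provider list with CPUExecutionProvider appended
    last, so an InferenceSession can still be created if CUDA fails to load."""
    providers = list(providers)
    if "CPUExecutionProvider" not in providers:
        providers.append("CPUExecutionProvider")
    return providers

# Usage (requires onnxruntime-gpu to be installed):
# import onnxruntime
# session = onnxruntime.InferenceSession(
#     "model.onnx",
#     providers=with_cpu_fallback(["CUDAExecutionProvider"]))
```

This only avoids the crash; fixing the CUDA/cuDNN installation is still needed for GPU inference.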


RickyWang111 closed this as not planned on Aug 25, 2023.