GPU not working #37

Closed
akwin1234 opened this issue Jun 17, 2024 · 4 comments
akwin1234 commented Jun 17, 2024

PS D:\1Git\hallo> python scripts/inference.py --source_image .\img.jpg --driving_audio .\audio.wav
A matching Triton is not available, some optimizations will not be enabled
Traceback (most recent call last):
File "C:\Users\akash\AppData\Local\Programs\Python\Python310\lib\site-packages\xformers\__init__.py", line 55, in _is_triton_available
from xformers.triton.softmax import softmax as triton_softmax # noqa
File "C:\Users\akash\AppData\Local\Programs\Python\Python310\lib\site-packages\xformers\triton\softmax.py", line 11, in <module>
import triton
ModuleNotFoundError: No module named 'triton'
WARNING:py.warnings:C:\Users\akash\AppData\Local\Programs\Python\Python310\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:69: UserWarning: Specified provider 'CUDAExecutionProvider' is not in available provider names.Available providers: 'AzureExecutionProvider, CPUExecutionProvider'
warnings.warn(

Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: ./pretrained_models/face_analysis\models\1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: ./pretrained_models/face_analysis\models\2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: ./pretrained_models/face_analysis\models\genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: ./pretrained_models/face_analysis\models\glintr100.onnx recognition ['None', 3, 112, 112] 127.5 127.5
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: ./pretrained_models/face_analysis\models\scrfd_10g_bnkps.onnx detection [1, 3, '?', '?'] 127.5 128.0
set det-size: (640, 640)
WARNING:py.warnings:C:\Users\akash\AppData\Local\Programs\Python\Python310\lib\site-packages\insightface\utils\transform.py:68: FutureWarning: rcond parameter will change to the default of machine precision times max(M, N) where M and N are the input matrix dimensions.
To use the future default and silence this warning we advise to pass rcond=None, to keep using the old, explicitly pass rcond=-1.
P = np.linalg.lstsq(X_homo, Y)[0].T # Affine matrix. 3 x 4

WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
W0000 00:00:1718569438.682961 2464 face_landmarker_graph.cc:174] Sets FaceBlendshapesGraph acceleration to xnnpack by default.
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
W0000 00:00:1718569438.725895 23228 inference_feedback_manager.cc:114] Feedback manager requires a model with a single signature inference. Disabling support for feedback tensors.
W0000 00:00:1718569438.745459 19520 inference_feedback_manager.cc:114] Feedback manager requires a model with a single signature inference. Disabling support for feedback tensors.
WARNING:py.warnings:C:\Users\akash\AppData\Local\Programs\Python\Python310\lib\site-packages\google\protobuf\symbol_database.py:55: UserWarning: SymbolDatabase.GetPrototype() is deprecated. Please use message_factory.GetMessageClass() instead. SymbolDatabase.GetPrototype() will be removed soon.
warnings.warn('SymbolDatabase.GetPrototype() is deprecated. Please '

Processed and saved: ./.cache\img_sep_background.png
Processed and saved: ./.cache\img_sep_face.png
Some weights of Wav2VecModel were not initialized from the model checkpoint at ./pretrained_models/wav2vec/wav2vec2-base-960h and are newly initialized: ['wav2vec2.encoder.pos_conv_embed.conv.parametrizations.weight.original0', 'wav2vec2.encoder.pos_conv_embed.conv.parametrizations.weight.original1', 'wav2vec2.masked_spec_embed']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
INFO:audio_separator.separator.separator:Separator version 0.17.2 instantiating with output_dir: ./.cache\audio_preprocess, output_format: WAV
INFO:audio_separator.separator.separator:Operating System: Windows 10.0.22631
INFO:audio_separator.separator.separator:System: Windows Node: SmashingStar Release: 10 Machine: AMD64 Proc: Intel64 Family 6 Model 154 Stepping 3, GenuineIntel
INFO:audio_separator.separator.separator:Python Version: 3.10.11
INFO:audio_separator.separator.separator:PyTorch Version: 2.3.0+cu121
INFO:audio_separator.separator.separator:FFmpeg installed: ffmpeg version 2024-06-03-git-77ad449911-full_build-www.gyan.dev Copyright (c) 2000-2024 the FFmpeg developers
INFO:audio_separator.separator.separator:ONNX Runtime GPU package installed with version: 1.18.0
INFO:audio_separator.separator.separator:ONNX Runtime CPU package installed with version: 1.18.0
INFO:audio_separator.separator.separator:CUDA is available in Torch, setting Torch device to CUDA
WARNING:audio_separator.separator.separator:CUDAExecutionProvider not available in ONNXruntime, so acceleration will NOT be enabled
INFO:audio_separator.separator.separator:Loading model Kim_Vocal_2.onnx...
17.2kiB [00:00, 866kiB/s]
4.38kiB [00:00, 583kiB/s]
12.0kiB [00:00, 1.50MiB/s]
INFO:audio_separator.separator.separator:Load model duration: 00:00:13
INFO:audio_separator.separator.separator:Starting separation process for audio_file_path: .\audio.wav

Please tell me how I can fix this.
I gave it a 1-minute-plus clip and it has been running for 3 hours. I don't think it is using the GPU properly.

@subazinga (Contributor) commented:
We have not tested it on the Windows platform. It seems that Triton was not installed when xformers was installed.
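As a quick sanity check, you can confirm which of the optional acceleration packages are actually importable. This is a diagnostic sketch, not part of the project; note that Triton has no official Windows wheels, so the "matching Triton is not available" message is expected on Windows and only disables some optimizations rather than breaking inference.

```python
import importlib.util

# Report which optional acceleration packages can be imported.
# On Windows, 'triton' will normally be missing; xformers then falls
# back to its non-Triton code paths, so inference still works.
for name in ("xformers", "triton"):
    found = importlib.util.find_spec(name) is not None
    print(f"{name}: {'importable' if found else 'not importable'}")
```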

@AricGamma AricGamma self-assigned this Jun 20, 2024
@AricGamma (Member) commented:
"WARNING:audio_separator.separator.separator:CUDAExecutionProvider not available in ONNXruntime, so acceleration will NOT be enabled"

This ONNX model is only used to separate vocals from the audio, so running it on the CPU is fine.
You can install onnxruntime-gpu to get rid of this warning.
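To see which execution providers your ONNX Runtime build actually exposes, you can query it directly. A small diagnostic sketch: `onnxruntime.get_available_providers()` is the real API, while the `cuda_provider_available` helper is illustrative and guarded so it also runs where onnxruntime is not installed.

```python
import importlib.util

def cuda_provider_available():
    """Return True/False if onnxruntime is installed, None otherwise."""
    if importlib.util.find_spec("onnxruntime") is None:
        return None  # onnxruntime is not installed at all
    import onnxruntime
    # The plain 'onnxruntime' CPU wheel ships only CPU (and Azure)
    # providers; the 'onnxruntime-gpu' build adds CUDAExecutionProvider.
    return "CUDAExecutionProvider" in onnxruntime.get_available_providers()

print(cuda_provider_available())
```

If this prints False (as in the log above, where only AzureExecutionProvider and CPUExecutionProvider are listed), the GPU build is not the one being imported.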

@danieldunderfelt (Contributor) commented:
> This onnx model is used to separate vocals from audio. CPU is OK. You can install onnxruntime-gpu to bypass this warning.

Yes, onnxruntime-gpu works. Crucially, you must also uninstall onnxruntime.
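Taken together, the advice above amounts to roughly the following. This is a sketch: version pins are omitted, and the onnxruntime-gpu build you install must match your CUDA toolkit version.

```shell
# Remove the CPU-only package first: both wheels install a module named
# 'onnxruntime', and the CPU build can shadow the GPU one.
pip uninstall -y onnxruntime

# Install the GPU build, which includes CUDAExecutionProvider.
pip install onnxruntime-gpu

# Verify that CUDA is now listed among the available providers.
python -c "import onnxruntime; print(onnxruntime.get_available_providers())"
```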

@AricGamma (Member) commented:
Closing this issue. If you have any other questions, please open a new one.
