RuntimeError: CUDA failed with error CUDA driver version is insufficient for CUDA runtime version #40
Can you share your CPU and GPU models? I don't understand why you get the CUDA error having the Use CPU option checked. It would only make sense if you were using the GPU to generate the transcription.

How do I share that?
Follow these steps on Windows 10 to check your CPU model, and then do the following to find your GPU model:
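The original step-by-step instructions (via Windows Settings / Task Manager) did not survive the page export. As a quick alternative, here is a minimal sketch using only the Python standard library. Note that the GPU model is not exposed by the standard library; on Windows, Task Manager's Performance tab or `nvidia-smi` can report it.

```python
import platform

# Print the CPU model string reported by the OS. On Windows this is the
# same "Intel(R) Core(TM) ..." string shown in Settings > System > About.
print("CPU:", platform.processor())

# platform.uname() also bundles machine/architecture details.
info = platform.uname()
print("Machine:", info.machine)
```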
CPU Model: Intel(R) Core(TM) i5-6200U CPU @ 2.30GHz 2.40 GHz
Option 1 doesn't work.
Change use_cpu = False to use_cpu = True. This should fix the problem. I have to check why the value doesn't change to True on start.

Yes, it works!
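For reference, the fix boils down to never selecting the CUDA backend when the "Use CPU" option is on. Here is a minimal sketch of that device-selection logic (pick_device is a hypothetical helper for illustration, not code from the audiotext repository):

```python
def pick_device(use_cpu: bool, cuda_available: bool) -> str:
    """Return the compute device to pass to the model loader.

    An explicit "Use CPU" setting wins even when CUDA is available,
    so a machine with an old or broken driver never touches the GPU path.
    """
    if use_cpu or not cuda_available:
        return "cpu"
    return "cuda"
```

With use_cpu = True, the model loader would then always be called with device="cpu", and the driver-version check that raises the error below is never triggered.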
But I have another issue; I'll open a separate issue for it.
Steps to reproduce
I followed the guide here https://github.com/HenestrosaDev/audiotext#set-up-the-project-locally
With these options:
Expected behaviour
It should transcribe the audio file selected
Actual behaviour
Errors out: RuntimeError: CUDA failed with error CUDA driver version is insufficient for CUDA runtime version
Traceback (most recent call last):
  File "C:\Users\HP\Documents\GitHub\Tarm\audiotext\src\handlers\whisperx_handler.py", line 30, in transcribe_file
    model = whisperx.load_model(
            ^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\HP\Documents\GitHub\Tarm\audiotext\.venv\Lib\site-packages\whisperx\asr.py", line 288, in load_model
    model = model or WhisperModel(whisper_arch,
            ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\HP\Documents\GitHub\Tarm\audiotext\.venv\Lib\site-packages\faster_whisper\transcribe.py", line 133, in __init__
    self.model = ctranslate2.models.Whisper(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA failed with error CUDA driver version is insufficient for CUDA runtime version
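The error above surfaces as a plain RuntimeError, so an application could also recover from it automatically by retrying on CPU. A hedged sketch of such a wrapper (load_with_fallback and its load_fn parameter are hypothetical illustrations, not part of the whisperx API):

```python
def load_with_fallback(load_fn, *args, device: str = "cuda", **kwargs):
    """Try loading on the requested device; retry on CPU if CUDA init fails.

    load_fn stands in for a model loader such as whisperx.load_model.
    """
    try:
        return load_fn(*args, device=device, **kwargs)
    except RuntimeError as err:
        # e.g. "CUDA failed with error CUDA driver version is insufficient ..."
        if "CUDA" in str(err) and device != "cpu":
            return load_fn(*args, device="cpu", **kwargs)
        raise
```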