I get the following error while trying to run a transcription:
Error transcribing file on line Requested float16 compute type, but the target device or backend do not support efficient float16 computation.
/root/VisualCode/Whisper-WebUI/venv/lib/python3.9/site-packages/torch/cuda/memory.py:330: FutureWarning: torch.cuda.reset_max_memory_allocated now calls torch.cuda.reset_peak_memory_stats, which resets /all/ peak memory stats.
warnings.warn(
Traceback (most recent call last):
File "/root/VisualCode/Whisper-WebUI/venv/lib/python3.9/site-packages/gradio/queueing.py", line 495, in call_prediction
output = await route_utils.call_process_api(
File "/root/VisualCode/Whisper-WebUI/venv/lib/python3.9/site-packages/gradio/route_utils.py", line 232, in call_process_api
output = await app.get_blocks().process_api(
File "/root/VisualCode/Whisper-WebUI/venv/lib/python3.9/site-packages/gradio/blocks.py", line 1570, in process_api
data = self.postprocess_data(fn_index, result["prediction"], state)
File "/root/VisualCode/Whisper-WebUI/venv/lib/python3.9/site-packages/gradio/blocks.py", line 1397, in postprocess_data
self.validate_outputs(fn_index, predictions) # type: ignore
File "/root/VisualCode/Whisper-WebUI/venv/lib/python3.9/site-packages/gradio/blocks.py", line 1371, in validate_outputs
raise ValueError(
ValueError: An event handler (transcribe_file) didn't receive enough output values (needed: 2, received: 1).
Wanted outputs:
[textbox, file]
Received outputs:
[None]
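For context on the second error in the traceback: the Gradio `ValueError` is a follow-on symptom, not the root cause. A minimal, hypothetical sketch of the failure mode (the helper `run_whisper` and the file name are stand-ins, not actual Whisper-WebUI code): the handler is wired to two outputs, `[textbox, file]`, but when transcription raises, the error path returns a single `None`, so Gradio's `validate_outputs` complains that it received 1 value instead of 2.

```python
def run_whisper(audio_path):
    # Stand-in for the real transcription call, which on this machine
    # raised the float16 error below.
    raise ValueError(
        "Requested float16 compute type, but the target device or backend "
        "do not support efficient float16 computation."
    )

def transcribe_file(audio_path):
    # Hypothetically wired in Gradio to two outputs: [textbox, file]
    try:
        text = run_whisper(audio_path)
        return text, "subtitle.srt"   # success path: 2 values
    except Exception as e:
        print("Error transcribing file on line", e)
        return None                   # error path: 1 value -> Gradio ValueError
```

So fixing the underlying compute-type error also makes the Gradio `ValueError` go away.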
Requested float16 compute type, but the target device or backend do not support efficient float16 computation.
It seems that your CPU doesn't support the float16 compute type.
I assume you're running on CPU rather than GPU, since float16 is usually not supported on CPUs.
You can change the compute type in the Advanced_Parameters tab.
If you're on CPU, use int8 or float32 instead.
Which OS are you using?
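The selection logic above can be sketched as a small helper (the function name `choose_compute_type` is hypothetical, not part of Whisper-WebUI): float16 is only efficient on CUDA GPUs, so on CPU we fall back to int8 (fastest) or float32 (full precision).

```python
def choose_compute_type(device: str, prefer_full_precision: bool = False) -> str:
    """Pick a compute type the target backend can run efficiently.

    float16 is only efficiently supported on CUDA devices; on CPU,
    int8 is the fast choice and float32 keeps full precision.
    """
    if device == "cuda":
        return "float16"
    return "float32" if prefer_full_precision else "int8"
```

With faster-whisper, the chosen value would then be passed as the `compute_type` argument, e.g. `WhisperModel("base", device="cpu", compute_type=choose_compute_type("cpu"))` — equivalent to picking int8 in the Advanced_Parameters tab.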