
problem with gradio #101

Open
DenysReryt opened this issue Feb 29, 2024 · 1 comment
Comments

@DenysReryt

Which OS are you using?

  • OS: Linux

I get the following error while trying to run a transcription:

```
Error transcribing file on line Requested float16 compute type, but the target device or backend do not support efficient float16 computation.
/root/VisualCode/Whisper-WebUI/venv/lib/python3.9/site-packages/torch/cuda/memory.py:330: FutureWarning: torch.cuda.reset_max_memory_allocated now calls torch.cuda.reset_peak_memory_stats, which resets /all/ peak memory stats.
  warnings.warn(
Traceback (most recent call last):
  File "/root/VisualCode/Whisper-WebUI/venv/lib/python3.9/site-packages/gradio/queueing.py", line 495, in call_prediction
    output = await route_utils.call_process_api(
  File "/root/VisualCode/Whisper-WebUI/venv/lib/python3.9/site-packages/gradio/route_utils.py", line 232, in call_process_api
    output = await app.get_blocks().process_api(
  File "/root/VisualCode/Whisper-WebUI/venv/lib/python3.9/site-packages/gradio/blocks.py", line 1570, in process_api
    data = self.postprocess_data(fn_index, result["prediction"], state)
  File "/root/VisualCode/Whisper-WebUI/venv/lib/python3.9/site-packages/gradio/blocks.py", line 1397, in postprocess_data
    self.validate_outputs(fn_index, predictions)  # type: ignore
  File "/root/VisualCode/Whisper-WebUI/venv/lib/python3.9/site-packages/gradio/blocks.py", line 1371, in validate_outputs
    raise ValueError(
ValueError: An event handler (transcribe_file) didn't receive enough output values (needed: 2, received: 1).
Wanted outputs:
    [textbox, file]
Received outputs:
    [None]
```
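The `ValueError` at the bottom is Gradio's complaint that the event handler returned a single `None` while two output components (`[textbox, file]`) were declared. As a minimal sketch (not Whisper-WebUI's actual code; the wrapper name is hypothetical), a handler can be wrapped so that a failure still yields one value per declared output:

```python
def safe_handler(fn, n_outputs):
    """Wrap a Gradio event handler so that an exception still
    produces one value per declared output component.

    Returning a single None from a handler bound to two outputs
    is exactly what raises "didn't receive enough output values".
    """
    def wrapped(*args):
        try:
            return fn(*args)
        except Exception as exc:
            # First output carries the error message; the rest are None.
            return (f"Error transcribing file: {exc}",) + (None,) * (n_outputs - 1)
    return wrapped
```

With this wrapper, the textbox would show the underlying compute-type error instead of Gradio raising a secondary `ValueError` about missing outputs.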
@jhj0517
Owner

jhj0517 commented Mar 1, 2024

Hi.

> Requested float16 compute type, but the target device or backend do not support efficient float16 computation.

It seems that your CPU doesn't support the float16 compute type.
I assume you're running on CPU rather than GPU, since float16 is usually not supported on CPU.
You can change the compute type in the Advanced_Parameters tab.

If you are using a CPU, you can use int8 or float32 instead.
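For illustration only (the helper name is hypothetical, not from Whisper-WebUI), the device-dependent fallback described above can be sketched as a small mapping; these compute-type strings are the ones CTranslate2-based backends such as faster-whisper accept:

```python
def pick_compute_type(device: str) -> str:
    # float16 is only efficiently supported on CUDA GPUs; on CPU,
    # int8 (fastest) or float32 (most accurate) are the safe choices.
    if device == "cuda":
        return "float16"
    return "int8"

# Assumed usage with faster-whisper (sketch, not the repo's code):
#   model = WhisperModel("large-v2", device=device,
#                        compute_type=pick_compute_type(device))
```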
