
ValueError("ERROR, something went wrong") while downloading model. maybe timeout? #88

Closed
pinsystem opened this issue Apr 24, 2023 · 3 comments


@pinsystem

Here's the whole error:

    C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\torchaudio\backend\utils.py:74: UserWarning: No audio backend is available.
      warnings.warn("No audio backend is available.")
    No GPU being used. Careful, inference might be extremely slow!
    found outdated text model, removing.
    100%|████████████████| 5.35G/5.35G [07:48<00:00, 11.4MiB/s]
    Downloading (…)solve/main/vocab.txt: 100%|████████████████| 996k/996k [00:00<00:00, 11.4MB/s]
    Downloading (…)okenizer_config.json: 100%|████████████████| 29.0/29.0 [00:00<00:00, 7.28kB/s]
    Downloading (…)lve/main/config.json: 100%|████████████████| 625/625 [00:00<00:00, 209kB/s]
    100%|████████████████| 100/100 [02:19<00:00, 1.39s/it]
    No GPU being used. Careful, inference might be extremely slow!
     69%|███████████     | 2.73G/3.93G [03:58<01:45, 11.4MiB/s]
    Traceback (most recent call last):
      File "C:\Users\Administrator\Desktop\bark_test.py", line 9, in <module>
        audio_array = generate_audio(text_prompt)
      File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\bark\api.py", line 113, in generate_audio
        out = semantic_to_waveform(
      File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\bark\api.py", line 54, in semantic_to_waveform
        coarse_tokens = generate_coarse(
      File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\bark\generation.py", line 580, in generate_coarse
        model = load_model(use_gpu=use_gpu, model_type="coarse")
      File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\bark\generation.py", line 296, in load_model
        model = _load_model_f(ckpt_path, device)
      File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\bark\generation.py", line 233, in _load_model
        _download(model_info["path"], ckpt_path)
      File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\bark\generation.py", line 165, in _download
        raise ValueError("ERROR, something went wrong")
    ValueError: ERROR, something went wrong

Maybe it's something about a connection timeout? If so, how can I increase it?

@BSalita

BSalita commented Apr 24, 2023

Tips:

  1. Install PyTorch from https://pytorch.org/get-started/locally/
  2. pip install simpleaudio
  3. The code below works on my Windows 11 system.
    from bark import SAMPLE_RATE, generate_audio, preload_models

    # download and load all models
    preload_models()

    # generate audio from text
    text_prompt = """
         Hello, my name is Suno. And, uh — and I like pizza. [laughs] 
         But I also have other interests such as playing tic tac toe.
    """

    audio_array = generate_audio(text_prompt, history_prompt="en_speaker_1") # history_prompt selects the speaker

    import simpleaudio as sa

    num_channels = 1
    bytes_per_sample = 4 # Bark outputs float32 samples
    sample_rate = SAMPLE_RATE
    play_obj = sa.play_buffer(audio_array, num_channels, bytes_per_sample, sample_rate)

    # Wait for playback to finish before exiting
    play_obj.wait_done()
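
If simpleaudio gives you trouble, a common alternative (it's the approach shown in the Bark README) is to write the result to a WAV file with scipy instead of playing it back directly:

    # fallback: save the generated audio to disk rather than playing it with simpleaudio
    from scipy.io.wavfile import write as write_wav

    write_wav("bark_generation.wav", SAMPLE_RATE, audio_array)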

@gkucsko
Contributor

gkucsko commented Apr 24, 2023

You could try manually downloading the models with reconnection enabled: #46 (comment)
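
For reference, here is a rough sketch of what a resumable download loop can look like. This is not the exact script from the linked comment; the URL and destination path below are placeholders, so take the real values from #46:

    # Sketch of a resumable download using HTTP Range requests (the server must
    # support ranges). MODEL_URL and DEST are placeholders, not Bark's real values.
    import os
    import requests

    MODEL_URL = "https://example.com/coarse_2.pt"                    # placeholder
    DEST = os.path.expanduser("~/.cache/suno/bark_v0/coarse_2.pt")   # placeholder

    os.makedirs(os.path.dirname(DEST), exist_ok=True)

    while True:
        # resume from however many bytes are already on disk
        done = os.path.getsize(DEST) if os.path.exists(DEST) else 0
        headers = {"Range": f"bytes={done}-"} if done else {}
        try:
            with requests.get(MODEL_URL, headers=headers, stream=True, timeout=60) as r:
                r.raise_for_status()
                with open(DEST, "ab" if done else "wb") as f:
                    for chunk in r.iter_content(chunk_size=1 << 20):
                        f.write(chunk)
            break  # download completed without the connection dropping
        except requests.RequestException as exc:
            print(f"connection dropped ({exc}), retrying from byte {os.path.getsize(DEST)}")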

@pinsystem
Author

@BSalita unfortunately pip install simpleaudio leads to another error :(
@gkucsko now it works, thank you!!
