GPU/CUDA available, but GPU not used? #552
Update: Unfortunately, it also required modifying the transformers/...bark code to get all tensors onto the GPU. Nevertheless, my output is now fast (not sure why, but it varies between, say, 17s and 50s):

```
$ time ./bark
{annoying tensorflow messages}
Loading autoprocessor...
Loading bark model...
{torch.nn.utils.weight_norm deprecation warnings}
Processor()ing...
Generate()ing...
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:10000 for open-end generation.
real 0m20.381s
user 0m17.566s
sys 0m2.988s
```

The new code, forcing everything onto the GPU:

```python
#!/usr/bin/env python3
from transformers import AutoProcessor, BarkModel
import torch
device = torch.device("cuda:0")
print("Loading autoprocessor...")
processor = AutoProcessor.from_pretrained("suno/bark")
print("Loading bark model...")
model = BarkModel.from_pretrained("suno/bark").to(device)
voice_preset = "v2/en_speaker_0"
print("Processor()ing...")
inputs = processor("Mazda alone is the adorable-most.",
                   voice_preset=voice_preset)
for key, value in inputs.items():
    if torch.is_tensor(value):
        inputs[key] = value.to(device)
print("Generate()ing...")
audio_array = model.generate(**inputs)
audio_array = audio_array.cpu().numpy().squeeze()
import scipy.io.wavfile  # plain `import scipy` may not expose scipy.io.wavfile on older SciPy
sample_rate = model.generation_config.sample_rate
scipy.io.wavfile.write("bark_out.wav", rate=sample_rate, data=audio_array)
```

And over in the transformers Bark processor code, around `if voice_preset is not None:`:

```diff
 if voice_preset is not None:
     self._validate_voice_preset_dict(voice_preset, **kwargs)
+    import torch
+    device = torch.device("cuda:0")
+    voice_preset_tensors = {
+        key: torch.from_numpy(value).to(device)
+        for key, value in voice_preset.items()
+    }
+    voice_preset = BatchFeature(data=voice_preset_tensors, tensor_type=return_tensors)
-    voice_preset = BatchFeature(data=voice_preset, tensor_type=return_tensors)
```
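One caveat with the patch above: hard-coding `cuda:0` inside library code will break on a machine where CUDA is unavailable. A guarded variant (a minimal sketch of the same idea) would be:

```python
import torch

# Fall back to the CPU when PyTorch cannot see a CUDA device.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
```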
Thank you for this! 🙇 I used this to monkey patch the processor.
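For reference, a minimal sketch of what such a monkey patch could look like (untested; it assumes the processor output is a mapping whose values are either tensors or nested mappings such as the `history_prompt`, and it wraps `BarkProcessor.__call__` instead of editing the installed package; the name `_call_and_move_to_gpu` is just illustrative):

```python
import torch
from collections.abc import Mapping
from transformers import BarkProcessor

_original_call = BarkProcessor.__call__

def _call_and_move_to_gpu(self, *args, **kwargs):
    """Run the stock processor, then move every tensor it returned to cuda:0."""
    output = _original_call(self, *args, **kwargs)
    device = torch.device("cuda:0")
    for key, value in output.items():
        if torch.is_tensor(value):
            output[key] = value.to(device)
        elif isinstance(value, Mapping):  # e.g. the nested history_prompt
            for sub_key, sub_value in value.items():
                if torch.is_tensor(sub_value):
                    value[sub_key] = sub_value.to(device)
    return output

BarkProcessor.__call__ = _call_and_move_to_gpu
```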
I'm on Linux, and I'm seeing lots of memory and CPU use but no GPU use at all (unless it suddenly uses it for a split second at the end).
Inference takes a long time -- like 6 minutes for a 24-word sentence.
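A quick way to check whether the model actually landed on the GPU (assuming `model` is the `BarkModel` from the script above) is to inspect it from Python, and to watch `nvidia-smi` while `generate()` runs:

```python
import torch

print(torch.cuda.is_available())        # True means PyTorch can see a CUDA device
print(next(model.parameters()).device)  # should print cuda:0, not cpu
```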