
speaker selection on inference on finetuned libritts #31

Closed · eschmidbauer opened this issue Nov 15, 2023 · 3 comments

@eschmidbauer

Hello- thanks again for sharing this project. The output quality is very impressive.
I was able to finetune the libritts model you shared with another voice for 199 steps.
Is there a way to select the speaker from the model? I'm getting a different speaker output each time I run inference. Also, is a reference clip required? I would like to run inference on the finetuned model without a reference clip to see how it performs.

@yl4579 (Owner) commented Nov 16, 2023

How many speakers do you have? If you are finetuning on a single-speaker dataset, you do not need any reference (you can even set the multispeaker flag to false and skip loading the pretrained diffusion model). Otherwise you do need a reference, in the same way as the base model, because the model needs to know which target speaker you want to synthesize. Alternatively, you can hard-code the speaker embedding as part of the model weights if you do not want any reference.
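A minimal sketch of that last option, assuming the compute_style and inference helpers from the repo's inference demo, with a hypothetical reference path and file name:

import torch

# Compute the speaker/style embedding once from any clip of the target speaker.
# compute_style is from the StyleTTS2 inference demo; the paths are illustrative.
ref_s = compute_style("data/my_speaker/sample.wav")

# Persist it next to the model weights...
torch.save(ref_s, "ref_s.pt")

# ...then at inference time, load the embedding instead of recomputing it from
# audio, so no reference clip is needed.
ref_s = torch.load("ref_s.pt")
wav = inference(text, ref_s, alpha=0.9, beta=0.9, diffusion_steps=10, embedding_scale=1)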

@eschmidbauer (Author)

My dataset is 1 speaker, but libritts has many speakers. I used the libritts model you shared for finetuning. I will try setting the multispeaker flag to false.
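For reference, a minimal sketch of where that flag sits, assuming the layout of the repo's training configs (e.g. Configs/config_ft.yml):

model_params:
  multispeaker: false   # true for the multispeaker base model; false for single-speaker finetuning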

@yl4579 (Owner) commented Nov 17, 2023

It should work even if you set the multispeaker flag to true. You just need an arbitrary reference audio from the training set, and you can save the resulting style as part of the parameters. For example,

import time

import IPython.display as ipd  # run in a notebook so that display() is available

text = '''Maltby and Company would issue warrants on them deliverable to the importer, and the goods were then passed to be stored in neighboring warehouses.
'''

# Map a display name to an arbitrary reference clip from the training set.
reference_dicts = {}
reference_dicts['LJSpeech'] = "data/LJSpeech-1.1/wavs/LJ001-0001.wav"

for k, path in reference_dicts.items():
    ref_s = compute_style(path)  # speaker/style embedding from the reference clip

    start = time.time()
    wav = inference(text, ref_s, alpha=0.9, beta=0.9, diffusion_steps=10, embedding_scale=1)
    rtf = (time.time() - start) / (len(wav) / 24000)  # real-time factor at 24 kHz
    print(f"RTF = {rtf:5f}")

    print(k + ' Synthesized:')
    display(ipd.Audio(wav, rate=24000, normalize=False))
    print('Reference:')
    display(ipd.Audio(path, rate=24000, normalize=False))
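Here RTF is the real-time factor: synthesis wall-clock time divided by the duration of the generated audio (len(wav) samples at 24 kHz), so values below 1 mean faster-than-real-time synthesis. And since ref_s depends only on the reference clip, it can be computed once and cached, which is what saving it as part of the parameters means above.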

yl4579 closed this as completed Nov 17, 2023