
Incredible! How to run inference on a custom file? #1

Closed
youssefavx opened this issue Apr 7, 2022 · 3 comments

Comments

@youssefavx

Super impressed by your results! Curious to know how I could run a sample audio file through your model to upsample it. It seems the code provided here simply evaluates the model: https://github.com/haoheliu/ssr_eval/tree/main/examples/NVSR

I'll try to figure it out from that, but I'd love any help. No pressure if you're busy, though!

@youssefavx
Author

Also, does this require fine-tuning on a custom voice for good results?

@youssefavx
Author

Okay, I think I figured it out — please let me know if I'm using it incorrectly, though:

import torch
import librosa
import soundfile as sf

device = "cuda" if torch.cuda.is_available() else "cpu"

testee = NVSRPostProcTestee(device)
x, _ = librosa.load("Sample.wav", sr=44100)  # load and resample input to 44.1 kHz
result = testee.infer(x)
sf.write("result.wav", result, 44100)
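One detail worth noting about the snippet above: `librosa.load(..., sr=44100)` resamples whatever the input file's native rate is up to 44.1 kHz before the model sees it. As a rough illustration of what that resampling step does (librosa itself uses a higher-quality polyphase/windowed-sinc resampler internally, not this), here is a minimal linear-interpolation sketch in plain NumPy — `resample_linear` is a hypothetical helper, not part of librosa or NVSR:

```python
import numpy as np

def resample_linear(x, sr_in, sr_out):
    """Naive linear-interpolation resampler, for illustration only.
    Real resamplers (like the one librosa uses) apply proper
    anti-aliasing filters instead of point-wise interpolation."""
    n_out = int(round(len(x) * sr_out / sr_in))
    t_in = np.arange(len(x)) / sr_in    # input sample times (seconds)
    t_out = np.arange(n_out) / sr_out   # output sample times (seconds)
    return np.interp(t_out, t_in, x)

# One second of a 440 Hz tone at 16 kHz, upsampled to 44.1 kHz
x = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
y = resample_linear(x, 16000, 44100)
print(len(y))
```

This only changes the sample rate of the container; it can't recover the high-frequency content above the original Nyquist limit — that's exactly the part the NVSR model is meant to hallucinate back in.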

@haoheliu
Owner

Yes, that's how it works :)
