
speed of LPCNet #1

Closed
attitudechunfeng opened this issue Oct 30, 2018 · 21 comments

@attitudechunfeng

Could I know about the speed of LPCNet?

@jmvalin
Member

jmvalin commented Nov 21, 2018

Right now the Python code is much slower than real-time, but once converted to C, it should be faster than real-time on both desktops and phones.

@jmvalin
Member

jmvalin commented Dec 2, 2018

If you look at the current master, there's now C code that runs in real-time with just 20% CPU, i.e. around 250 times faster than the Python code.

@gosha20777

Yes, and it can work even faster if you arrange the data in memory so that cache misses don't occur, and parallelize the execution of loops using, for example, OpenMP (#pragma omp parallel for).

@attitudechunfeng
Author

I have tested the C code version; the speed is about 1.15 times faster than real-time, really great performance. However, considering the cost of the mel prediction model, and that the current version doesn't seem to support streaming, there's still room for optimization. Looking forward to further progress.

@jmvalin
Member

jmvalin commented Dec 12, 2018

What architecture and what NN model are you finding 1.15x faster than real-time? If you're on x86, you should see it run much faster, though previous models didn't have the block sparseness done properly and were slower. I recommend using master along with this model: https://jmvalin.ca/misc_stuff/lpcnet_models/lpcnet15_384_10_G16_100.h5
Note that you'll need to re-compute your features file since the definition has changed.

@attitudechunfeng
Author

I tested on x86 and the model is lpcnet9_384_10_G16_120.h5. I'll try again following your recommendations.

@attitudechunfeng
Author

The new version is about 3.95 times faster than real-time. It's really quite fast.

@attitudechunfeng
Author

attitudechunfeng commented Dec 13, 2018

One more question: I've trained my own model. However, its synthesis speed is much slower than the model you provided. Is there something I overlooked? Thanks.

@jmvalin
Member

jmvalin commented Dec 13, 2018

Current master should generate models that will run just as fast as the one I linked to. If your model is slow, then maybe you're using a model trained with a version older than 7df3f9c. In any case, I'd recommend training with current master.

@attitudechunfeng
Author

Thanks for your quick reply. With more training epochs (around 100), the speed is normal. I guess it may be related to the weight sparsity?

@jmvalin
Member

jmvalin commented Dec 14, 2018

Indeed, if you don't let it train long enough, then the weights won't be sparse. If you look at the training code, you'll see something like:
Sparsify(2000, 40000, 400, (0.1, 0.1, 0.1))
It means: start the sparsification at batch (not epoch) 2000 and continue until batch 40000, setting weights to zero every 400 batches, with all 3 GRU matrices ending up with 10% non-zero weights.

@attitudechunfeng
Author

Thanks for your answer. I think it's practical now, and I'll try a whole TTS pipeline in the following experiments. Really great work.

@attitudechunfeng
Author

I've tried to use a prediction model to directly predict the 55-dimensional features. However, the quality is not as good as expected. Any suggestions about the predicted features?

@mrgloom

mrgloom commented May 16, 2019

Is a reference wav file available to test the performance of the master model? Should benchmarks be run using the lpcnet_demo binary?

Also related #56

@alokprasad

For me it took 6 seconds to convert features to a wav (audio file) of 20 seconds.
Is this expected, or what numbers are others seeing?

@SylviaZiyuZhang

For me it took 6 seconds to convert features to a wav (audio file) of 20 seconds.
Is this expected, or what numbers are others seeing?

It's expected.

@ZhaoZeqing

@jmvalin I use "./lpcnet_demo -synthesis x.lpc x.pcm" to generate a wav from features, but the speed is very slow: about 6 seconds to generate a 5-second wav. Any suggestions? Thanks!

@carlfm01

carlfm01 commented Sep 3, 2019

@ZhaoZeqing with AVX enabled?

@ZhaoZeqing

@ZhaoZeqing with AVX enabled?

@carlfm01 It worked! Thanks!

@xiaoyangnihao

@ZhaoZeqing with AVX enabled?

@carlfm01 It worked! Thanks!

How do I enable AVX?

@jmvalin
Member

jmvalin commented Jul 22, 2021

If your machine supports AVX, then just adding -march=native to the CFLAGS should be enough. See README.md for more on CFLAGS.

@jmvalin jmvalin closed this as completed Oct 14, 2021