
does TTSAndroid support int8 infer? #4026

Open
elliotzheng opened this issue Mar 20, 2025 · 1 comment

@elliotzheng

General Question

I quantized the speedyspeech and mb_melgan models using Paddle-Lite with the following command:
./opt_linux_x86 --model_file=speedyspeech_csmsc.pdmodel --param_file=speedyspeech_csmsc.pdiparams --valid_targets=arm --optimize_out_type=naive_buffer --optimize_out=./speedyspeech_csmsc_int8.nb --quant_model=true --quant_type=QUANT_INT8
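
The vocoder went through the same conversion; a parallel sketch (the mb_melgan filenames are my assumption, mirroring the speedyspeech ones):

# same opt flags, applied to the mb_melgan vocoder
./opt_linux_x86 --model_file=mb_melgan_csmsc.pdmodel --param_file=mb_melgan_csmsc.pdiparams --valid_targets=arm --optimize_out_type=naive_buffer --optimize_out=./mb_melgan_csmsc_int8.nb --quant_model=true --quant_type=QUANT_INT8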
Then I copied the two models to TTSAndroid/app/src/main/assets/models/cpu/, changed the model names in MainActivity.java, and finally changed the input tensor type from float to long.
But I see no decrease in CPU usage or inference time on my Armv8 phone.
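
One quick sanity check that quantization actually took effect is to compare file sizes, since int8 weights should be roughly a quarter the size of the fp32 parameters; a minimal sketch, using the file names from the opt command above:

# the int8 .nb should be noticeably smaller than the fp32 parameter file
ls -lh speedyspeech_csmsc.pdiparams speedyspeech_csmsc_int8.nb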
So I want to ask whether int8 inference is supported.
Any help is appreciated.

@zxcd zxcd self-assigned this Mar 21, 2025
@zxcd (Collaborator) commented Mar 21, 2025

Maybe you can try converting the model to a quantized model first, for example:

# PTQ_dynamic
if [ ${stage} -le 9 ] && [ ${stop_stage} -ge 9 ]; then
    ./local/PTQ_dynamic.sh ${train_output_path} fastspeech2_csmsc 8
    # ./local/PTQ_dynamic.sh ${train_output_path} pwgan_csmsc 8
    # ./local/PTQ_dynamic.sh ${train_output_path} mb_melgan_csmsc 8
    # ./local/PTQ_dynamic.sh ${train_output_path} hifigan_csmsc 8
fi
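
For the mb_melgan vocoder in question, the corresponding commented-out line above can be enabled, or the script invoked directly; a minimal sketch, assuming ${train_output_path} points at the example's training output directory (exp/default is a placeholder):

# run dynamic post-training quantization for the vocoder with 8-bit weights
train_output_path=exp/default   # placeholder: use your actual output dir
./local/PTQ_dynamic.sh ${train_output_path} mb_melgan_csmsc 8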
