General Question
I quantized the speedyspeech and mb_melgan models with Paddle-Lite using the following command:
./opt_linux_x86 --model_file=speedyspeech_csmsc.pdmodel --param_file=speedyspeech_csmsc.pdiparams --valid_targets=arm --optimize_out_type=naive_buffer --optimize_out=./speedyspeech_csmsc_int8.nb --quant_model=true --quant_type=QUANT_INT8
Then I copied the two models to TTSAndroid/app/src/main/assets/models/cpu/, changed the model names in MainActivity.java, and finally changed the input tensor type from float to long.
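For context, the change in MainActivity.java amounts to roughly the following. This is only a minimal sketch of how I load the optimized .nb file and feed int64 phone ids through the Paddle-Lite Java API; the class name, the input index 0, and the phone-id array are placeholders rather than the exact demo code, and speedyspeech also takes a tone-id input that I set the same way:

```java
import com.baidu.paddle.lite.MobileConfig;
import com.baidu.paddle.lite.PaddlePredictor;
import com.baidu.paddle.lite.Tensor;

public class QuantizedTtsCheck {
    // Runs one forward pass of the optimized acoustic model and reports the latency.
    // modelPath points at the .nb file copied into assets/models/cpu/.
    static void runOnce(String modelPath, long[] phoneIds) {
        MobileConfig config = new MobileConfig();
        config.setModelFromFile(modelPath);

        PaddlePredictor predictor = PaddlePredictor.createPaddlePredictor(config);

        // The exported speedyspeech model expects int64 phone ids, so the input
        // tensor is filled with long[] instead of float[].
        Tensor phones = predictor.getInput(0);
        phones.resize(new long[]{phoneIds.length});
        phones.setData(phoneIds);

        long start = System.nanoTime();
        predictor.run();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        float[] mel = predictor.getOutput(0).getFloatData();
        System.out.println("inference took " + elapsedMs + " ms, output length " + mel.length);
    }
}
```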
However, I see no decrease in CPU usage or inference time on my ARMv8 phone.
So I would like to ask: is INT8 inference supported?
Any help is appreciated.