Optimize further by using fake-quantize aware training and pruning #47
Comments
@gongchenghhu hi, it's not easy to convert Tacotron2 to tflite; you need to write custom C++ ops. Even if you can convert it and run it on a low-end device, it still can't run in real time :(. I suggest you use FastSpeech instead. For fake-quantize aware training and pruning, you can refer to the official doc here (https://www.tensorflow.org/model_optimization/guide/quantization/training_comprehensive_guide). I don't really know how much TensorFlow supports fake-quantize and pruning for LSTM, but if the model just consists of convolutions, everything should be ok, I think. BTW, how was the quality of your Tacotron?
@dathudeptrai Thanks for your reply. I will try fake-quantize aware training and pruning (https://www.tensorflow.org/model_optimization/guide/quantization/training_comprehensive_guide)
@gongchenghhu @dathudeptrai I am also trying to convert the FastSpeech model to TFLite, and it crashes in the length regulator.
@gongchenghhu @dathudeptrai you can follow this issue for TFLite conversion: tensorflow/tensorflow#40504. Can you also help me with this?
@manmay-nakhashi ok, I will see your issue. |
@gongchenghhu @manmay-nakhashi sorry :)). Hope Tensorflower can help you in this case :D |
@dathudeptrai haha :P |
@gongchenghhu I was able to convert MelGAN to tflite. I think we can try to convert FastSpeech2 to tflite; structurally, the model code seems to be simpler than Tacotron2 and FastSpeech.
@manmay-nakhashi Could you share how you managed to convert MelGAN to tflite? Is the tflite model capable of inference?
Thanks for this great job.
I have trained a Tacotron2 model with your repository.
Now I am trying to convert our model to an int8 quantized model with TensorFlow Lite.
But I encountered some errors when I use
converter = tf.lite.TFLiteConverter.from_saved_model("./test_saved")
or
converter = tf.lite.TFLiteConverter.from_keras_model(tacotron2)
And do you have any advice about optimizing further by using fake-quantize aware training and pruning?
Thank you very much.