
Optimize further by using fake-quantize-aware training and pruning #47

Closed
gongchenghhu opened this issue Jun 16, 2020 · 9 comments
Comments

@gongchenghhu commented Jun 16, 2020

Thanks for this great job.
I have trained a Tacotron2 model following your repository.
Now I am trying to convert the model to an int8-quantized model with TensorFlow Lite,
but I encountered some errors when I use
converter = tf.lite.TFLiteConverter.from_saved_model("./test_saved")
or
converter = tf.lite.TFLiteConverter.from_keras_model(tacotron2).
Also, do you have any advice on optimizing further by using fake-quantize-aware training and pruning?
Thank you very much.
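
For context, this is roughly the int8 post-training-quantization path I'm attempting (the saved-model path and the representative data below are placeholders, not real inputs):

```python
import numpy as np
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("./test_saved")
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_dataset():
    # Full int8 quantization needs a few representative inputs; these random
    # token-id sequences are placeholders and would have to match the model's
    # real input signature (Tacotron2 takes several inputs, not just one).
    for _ in range(10):
        yield [np.random.randint(0, 100, size=(1, 50)).astype(np.int32)]

converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_model = converter.convert()
open("tacotron2_int8.tflite", "wb").write(tflite_model)
```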

@dathudeptrai self-assigned this Jun 16, 2020
@dathudeptrai added the Discussion 😁 (Discuss new feature), question ❓ (Further information is requested), Tacotron (Tacotron related question), and TFLite (TFLite question) labels Jun 16, 2020
@dathudeptrai (Collaborator) commented Jun 16, 2020

@gongchenghhu Hi, it's not easy to convert Tacotron2 to TFLite; you would need to write custom C++ ops. Even if you can convert it and run it on a low-end device, it still can't run in real time :(. I suggest you use FastSpeech instead. For fake-quantize-aware training and pruning, you can refer to the official guide here (https://www.tensorflow.org/model_optimization/guide/quantization/training_comprehensive_guide). I don't really know how well TensorFlow supports fake quantization and pruning for LSTMs, but if the model consists only of convolutions, everything should be OK, I think. BTW, how was the quality of your Tacotron?
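
As a rough sketch of the quantization-aware training flow from that guide, applied to a small conv-only stand-in model (not the actual Tacotron2/FastSpeech code):

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Stand-in model: only Conv2D/Dense layers, which the QAT API supports well.
base_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu",
                           input_shape=(80, 80, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])

# Wrap the whole model with fake-quantize nodes; recurrent/attention layers
# may not be supported, which is why conv-only models are the easy case.
qat_model = tfmot.quantization.keras.quantize_model(base_model)
qat_model.compile(optimizer="adam", loss="mse")
# Train as usual with qat_model.fit(...), then convert with
# tf.lite.TFLiteConverter.from_keras_model(qat_model).
```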

@gongchenghhu (Author)

@dathudeptrai Thanks for your reply. I will try fake-quantize-aware training and pruning (https://www.tensorflow.org/model_optimization/guide/quantization/training_comprehensive_guide).
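
As a starting point, a minimal pruning sketch from the same guide (the stand-in model and schedule values are placeholders, not the repo's Tacotron2):

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

base_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu",
                           input_shape=(80, 80, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])

# Ramp sparsity from 0% to 50% over 10k steps (illustrative values only).
pruning_schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.0, final_sparsity=0.5, begin_step=0, end_step=10000)
pruned_model = tfmot.sparsity.keras.prune_low_magnitude(
    base_model, pruning_schedule=pruning_schedule)
pruned_model.compile(optimizer="adam", loss="mse")
# Training needs the pruning callback:
# pruned_model.fit(..., callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])
```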

@manmay-nakhashi

@gongchenghhu @dathudeptrai I am also trying to convert the FastSpeech model to TFLite, and it is crashing in the length regulator.

@manmay-nakhashi

@gongchenghhu @dathudeptrai You can follow this issue for the TFLite conversion: tensorflow/tensorflow#40504. Can you also help me with this?

@gongchenghhu (Author)

@manmay-nakhashi OK, I will take a look at your issue.

@dathudeptrai (Collaborator)

@gongchenghhu @manmay-nakhashi Sorry :)). Hope the TensorFlow team can help you in this case :D

@dathudeptrai added the conflict of interest 🤣 (Won't help because of conflict of interest) label Jun 19, 2020
@manmay-nakhashi

@dathudeptrai haha :P

@manmay-nakhashi

@gongchenghhu I was able to convert MelGAN to TFLite. I think we can try to convert FastSpeech2 to TFLite as well; structurally, the model code seems to be simpler than Tacotron2 and FastSpeech.
@dathudeptrai do you think that would be a good addition to the repo?
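
For reference, roughly how a MelGAN-style generator could be exported through a concrete function (the model loading call, path, and input shape below are placeholders, not the repo's actual export code):

```python
import tensorflow as tf

# Placeholder: load a trained MelGAN generator however it was saved.
generator = tf.keras.models.load_model("melgan_generator")

@tf.function(input_signature=[tf.TensorSpec([1, None, 80], tf.float32)])
def inference(mels):
    # Mel spectrogram in, waveform out.
    return generator(mels)

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [inference.get_concrete_function()])
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # dynamic-range quantization
tflite_model = converter.convert()
open("melgan.tflite", "wb").write(tflite_model)
```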

@sujeendran

@manmay-nakhashi Could you share how you managed to convert MelGAN to TFLite? Is the TFLite model capable of inference?
