Saving entire model #23
@anasvaf I wouldn't suggest saving the entire model as h5; it doesn't guarantee success. After you load the weights from the h5 file, you can save the model as a pb file and then do inference on the server. Or you can try save_format="tf". In TF 2 we no longer use h5 to save the entire model :)), see https://www.tensorflow.org/api_docs/python/tf/saved_model/save.
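A minimal sketch of that workflow (restore weights from h5, then export as a SavedModel/pb and reload it for inference). The model here is a tiny stand-in Keras model, since the actual Tacotron-2 class isn't reproduced in this thread, and the file and directory names are assumptions:

```python
import tensorflow as tf

# Tiny stand-in for the real Tacotron-2 model (which this thread assumes
# has already been built and its weights restored from an h5 checkpoint).
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])
# model.load_weights("tacotron2.h5")  # hypothetical checkpoint path

# Export as a SavedModel (the pb format) instead of saving to h5.
tf.saved_model.save(model, "exported_model")

# Server side: reload and run inference without the original Python class.
loaded = tf.saved_model.load("exported_model")
out = loaded(tf.zeros([1, 8]))
print(out.shape)  # (1, 4)
```

The reloaded object is callable because tf.saved_model.save traces the Keras model's call function when the input shape is known.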
@dathudeptrai Thank you for the prompt response!
@anasvaf I will do it for you tonight :))). You just want to know how to save to pb?
@dathudeptrai Yes, saving the model as pb would be really helpful, so I can use post-training quantization and try importing it on a Raspberry Pi to check the latency of mel-prediction inference :)
@anasvaf let's try :)). Somehow tacotron._build() makes it impossible to save to pb. :))
@dathudeptrai Thank you so much! :) It works like a charm :). I can get the pb file. Do you know the name of the mel_outputs tensor? I mean, in the variables.data file, what should the name be, as a string? Something like: "post_net/tf_tacotron_conv_batch_norm_9/batch_norm_._4/moving_variance"
@anasvaf Why do you need the name? You can use tf.saved_model.load and do inference as in the code above. You can print(mel_outputs) to get the name.
tensorflow.python.saved_model.nested_structure_coder.NotEncodableError: No encoder for object [tf.Tensor(2000, shape=(), dtype=int32)] of type [<class 'tensorflow.python.framework.ops.EagerTensor'>].
@manmay-nakhashi I fixed it today :)), pls git pull :D
ok thanks |
@manmay-nakhashi @anasvaf I think you guys should "watch" my repo to be sure you won't miss any updates. I will add multi-band MelGAN soon; it's 3x faster than MelGAN and the quality is better.
Sure @dathudeptrai :))
@dathudeptrai I will try printing the tf.Tensor to check its node name. The reason I asked is that if you build TF from source and deploy it on Android, my guess is that you need to specify the input/output node names for the .pb file (as in line 74 of https://github.com/googlecodelabs/tensorflow-for-poets-2/blob/master/android/tfmobile/src/org/tensorflow/demo/ClassifierActivity.java). Also, another question about the frozen file: when loading the model, it keeps the input_id length it was saved with and cannot accept smaller or larger sentences. I tried zero-padding the smaller ones, but I get a weird wav file. Any thoughts on that?
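On the zero-padding question, here is a generic right-padding sketch (the helper name and pad_id=0 are assumptions for illustration, not this repo's API). If the text processor reserves a different id for padding, filling with 0 would feed real symbols to the decoder, which could explain the weird wav:

```python
import numpy as np

def pad_input_ids(input_ids, target_len, pad_id=0):
    """Right-pad a sequence of token ids to a fixed length.

    pad_id=0 assumes id 0 is the padding symbol of the text-to-id
    mapping; if the processor uses a different pad id, the padded
    positions are treated as real input and the audio comes out garbled.
    """
    if len(input_ids) > target_len:
        raise ValueError("sequence longer than target length")
    padded = np.full(target_len, pad_id, dtype=np.int32)
    padded[: len(input_ids)] = input_ids
    return padded

print(pad_input_ids([5, 9, 2], 6))  # [5 9 2 0 0 0]
```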
Send me the code you are using.
@anasvaf @dathudeptrai I am trying to convert this model to a tflite model, but the saved_model doesn't have any signatures. Do you know why, and how can I add one?
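In TF 2, a SavedModel only gets signatures that are attached explicitly (or traced automatically from a Keras model with a known input shape). A sketch of the general mechanism, using a stand-in tf.Module rather than the actual Tacotron-2 code:

```python
import tensorflow as tf

class Demo(tf.Module):
    """Stand-in module; the real model would expose its inference function."""

    def __init__(self):
        self.w = tf.Variable(tf.ones([8, 4]))

    @tf.function(input_signature=[tf.TensorSpec([None, 8], tf.float32)])
    def infer(self, x):
        return tf.matmul(x, self.w)

m = Demo()
# Attaching the concrete function gives the SavedModel a serving signature,
# which converters such as the TFLite converter look for as an entry point.
tf.saved_model.save(
    m, "demo_saved_model",
    signatures={"serving_default": m.infer.get_concrete_function()},
)

loaded = tf.saved_model.load("demo_saved_model")
print(list(loaded.signatures.keys()))  # ['serving_default']
```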
@dathudeptrai This is the code for inference.
And since I saved the pb file with a larger sentence, the output I am getting is:
@manmay-nakhashi I am not sure you can get a tflite model from the pb file, since there are multiple @tf.function definitions in the model, e.g. the call and infer functions in models/tacotron2.
OK, I will try to fix those issues tonight. Maybe we should merge call and inference into the call function only, or call the inference function inside the call function.
@dathudeptrai @anasvaf I think it would be best if we could convert this to tflite, for faster inference on mobile and embedded devices.
@manmay-nakhashi You can still quantize the weights in the .pb file. At the moment it is only 2.6 MB (consisting of variable ops). If you build TensorFlow for mobile from source, you can still perform quite fast inference on mobile using only the CPU. I'm not sure how much you could speed up Tacotron-2 with TFLite using the GPU. Notice that the most computationally intensive operations, driven by the dynamic input, are entering and exiting the while loop in the encoder-decoder.
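For reference, the standard post-training (dynamic-range) quantization flow in TF 2 looks like the sketch below, demonstrated on a tiny stand-in model; whether Tacotron-2 itself survives this conversion is exactly what's in question in this thread:

```python
import tensorflow as tf

# Tiny stand-in model; Tacotron-2 reportedly does not convert cleanly.
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])
tf.saved_model.save(model, "tiny_saved_model")

converter = tf.lite.TFLiteConverter.from_saved_model("tiny_saved_model")
# Dynamic-range quantization: weights are stored as int8, shrinking the
# flatbuffer without needing a representative dataset.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_bytes = converter.convert()
```

The result is a FlatBuffer blob that can be written to disk and loaded with the TFLite interpreter.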
@anasvaf tflite works on FlatBuffers and the tensorflow pb file is protobuf; FlatBuffers is faster, especially on low-end devices.
@anasvaf @manmay-nakhashi Please close this if it solves your problem. I don't think we can convert Tacotron to tflite, and even if we could, there is no way to make it run in real time on mobile devices.
@dathudeptrai Thank you so much for all your help!! :))
@gongchenghhu Unfortunately I was not able to do it. There are also some missing ops for Tacotron2 that would need to be written in C++.
Hello, I tried to save the entire model for Tacotron-2, instead of just the weights, as an h5 file. However, I am getting the following error:
I used the following code to successfully load the weights:
Then I tried to call
tacotron2.save("full_tacotron2.h5")
and I got the aforementioned error. Should I modify trainers/base_trainer.py as follows and re-train, or is there another way to save the entire model as an h5 file?
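As a fallback (sketched on a stand-in model, since the error and the Tacotron-2 build code aren't reproduced above): saving only the weights to h5 usually works even when model.save(...) on a subclassed model fails, as long as the architecture is rebuilt in code before loading:

```python
import tensorflow as tf

def build_model():
    """Stand-in for building the Tacotron-2 architecture in code."""
    return tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])

tacotron2 = build_model()
# Weights-only h5 save; no graph serialization is attempted, which is
# what trips up full-model h5 saving for subclassed models.
tacotron2.save_weights("tacotron2_weights.h5")

# Later: rebuild the same architecture, then restore the weights.
restored = build_model()
restored.load_weights("tacotron2_weights.h5")
```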