
Difference between decode_tacotron2.py and tacotron2_inference.ipynb #72

Closed
jun-danieloh opened this issue Jul 1, 2020 · 6 comments
@jun-danieloh

The procedure for running Tacotron 2 inference is not clear. It appears that running only "tacotron2_inference.ipynb" is enough. What is "decode_tacotron2.py" for? Even running that script gives me the errors below.

2020-07-01 01:59:24.756772: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 299 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:08:00.0, compute capability: 6.1)
2020-07-01 01:59:43.496259: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-07-01 01:59:43.723366: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-07-01 01:59:44.157435: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
2020-07-01 01:59:44.167314: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
2020-07-01 01:59:44.172894: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
2020-07-01 01:59:44.175197: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
Traceback (most recent call last):
  File "examples_models/tacotron2/decode_tacotron2.py", line 136, in <module>
    main()
  File "examples_models/tacotron2/decode_tacotron2.py", line 103, in main
    tacotron2._build()  # build model to be able load_weights.
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_tts/models/tacotron2.py", line 677, in _build
    self(input_ids, input_lengths, speaker_ids, mel_outputs, mel_lengths, 10, training=True)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 968, in __call__
    outputs = self.call(cast_inputs, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 580, in __call__
    result = self._call(*args, **kwds)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 644, in _call
    return self._stateless_fn(*args, **kwds)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 2420, in __call__
    return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 1665, in _filtered_call
    self.captured_inputs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 1746, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager))
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 598, in call
    ctx=ctx)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
    inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
  (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
         [[node encoder/conv_batch_norm/tf_tacotron_conv_batch_norm/conv_._0/conv1d (defined at /usr/local/lib/python3.6/dist-packages/tensorflow_tts/models/tacotron2.py:86) ]]
         [[decoder/while/body/_1/decoder_cell/assert_positive/assert_less/Assert/AssertGuard/pivot_f/_265/_47]]
  (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
         [[node encoder/conv_batch_norm/tf_tacotron_conv_batch_norm/conv_._0/conv1d (defined at /usr/local/lib/python3.6/dist-packages/tensorflow_tts/models/tacotron2.py:86) ]]
0 successful operations.
0 derived errors ignored. [Op:__inference_call_8385]

Function call stack:
call -> call
@dathudeptrai dathudeptrai self-assigned this Jul 1, 2020
@dathudeptrai dathudeptrai added the question ❓ Further information is requested label Jul 1, 2020
@dathudeptrai dathudeptrai added this to In progress in Tacotron 2 Jul 1, 2020
@dathudeptrai (Collaborator)

@jun-danieloh The error may be because the GPU is being used by another process, I think. As for the difference: tacotron2_inference demonstrates real inference as you would deploy it, including saving the model to a .pb file, then loading it and synthesizing, while decode_tacotron2 takes a folder of ids as input and is used when you want to run inference on all samples in that folder :D.
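A note on the cuDNN error itself: a common workaround for CUDNN_STATUS_INTERNAL_ERROR when the GPU has little free memory (only 299 MB was available in the log above) is to let TensorFlow allocate GPU memory on demand instead of grabbing it all up front. This is a general TensorFlow setting, not something specific to this repo, so treat it as a sketch:

```python
import os

# Workaround for "Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR":
# grow GPU memory on demand rather than pre-allocating. This environment
# variable must be set BEFORE tensorflow is imported.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

# Equivalent in-process setting for TF 2.x, applied right after import:
# import tensorflow as tf
# for gpu in tf.config.experimental.list_physical_devices("GPU"):
#     tf.config.experimental.set_memory_growth(gpu, True)
```

If another process (e.g. a running notebook kernel) already holds the GPU, freeing it first is still the more likely fix.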

@jun-danieloh (Author)

@dathudeptrai

Thanks for your quick response. Now I understand the differences.

I have more questions. What is the output of "tacotron2_inference"? Is it a mel spectrogram? If so, do I need to feed that mel output into the Griffin-Lim inference code?

@dathudeptrai (Collaborator)

@jun-danieloh Yeah, you can pass that mel to Griffin-Lim inference to check the output. You can also train a vocoder such as MelGAN or Multi-band MelGAN :D. Griffin-Lim quality is poor and is only useful as a sanity check :D

@jun-danieloh (Author)

@dathudeptrai Got it! Thanks for clarifying.

Last question. Do you know which TensorFlow version works properly with this repo, https://github.com/Rayhane-mamah/Tacotron-2? I saw your posts in that repo, which is why I'm asking. I'd guess 1.15? :)

@dathudeptrai (Collaborator)

:))) At the time I used that repo, I was on TF 1.10.

@jun-danieloh (Author)

@dathudeptrai Thanks :)

@dathudeptrai dathudeptrai moved this from In progress to Done in Tacotron 2 Jul 1, 2020