Hi all,
I have trained a g2p model on my en-US dataset and I am trying to understand the internal details of model training. Could somebody clarify the questions below?
Are we using dense layers with ReLU as the activation function during training? If so, why does the graph show a "conv1" unit in the feed-forward network?
What is the "conv1" unit in "encoder/layer_0/ffn/conv1/kernel/Initializer/random_uniform"?
Can we use an RNN with GRU cells instead of the FFN? How much improvement can we expect?
Please reply to the above questions.
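For what it's worth, in Transformer-style graphs the "conv1" name usually denotes the first position-wise dense layer of the FFN, implemented as a kernel-size-1 convolution over the sequence; for width 1 the two are mathematically identical. Here is a minimal numpy sketch of that equivalence (the shapes and weight names are illustrative assumptions, not values from the actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shapes: 5 sequence positions, model width 8, FFN hidden width 16.
seq_len, d_model, d_ff = 5, 8, 16
x = rng.standard_normal((seq_len, d_model))
w1 = rng.standard_normal((d_model, d_ff))  # plays the role of the "conv1" kernel
b1 = rng.standard_normal(d_ff)

# Position-wise dense layer with ReLU: the same weights applied
# independently at every position in the sequence.
dense_out = np.maximum(x @ w1 + b1, 0.0)

# The same weights viewed as a kernel-size-1 1D convolution: sliding a
# width-1 window over the sequence reduces to the same matmul per position.
conv_out = np.stack([np.maximum(x[t] @ w1 + b1, 0.0) for t in range(seq_len)])

# Both formulations produce identical activations.
assert np.allclose(dense_out, conv_out)
```

So "dense with ReLU" and "conv1" are two names for the same computation here, which may explain the layer name in the checkpoint graph.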