How to use DynamicRNNLayer? #18
@narrator-wong If your number of steps is fixed, you can use a fixed-shape placeholder. For a dynamic RNN, the step dimension can be left as None. In your case, since your batch_size is 4, you can define your placeholder accordingly (the original snippet was stripped from this page).
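The code referenced above was stripped from this page. As a rough numpy sketch of what the per-sentence `sequence_length` means for a zero-padded batch of 4, which is what a dynamic RNN needs to stop at the real end of each sentence (all token ids here are hypothetical):

```python
import numpy as np

# Hypothetical zero-padded batch: batch_size=4, max_steps=5, integer token ids.
# A step is padding when its id is 0.
batch = np.array([
    [3, 1, 4, 0, 0],
    [2, 7, 0, 0, 0],
    [5, 5, 5, 5, 1],
    [9, 0, 0, 0, 0],
])

# Sequence length = number of non-zero steps per row.
seq_lengths = np.sum(batch != 0, axis=1)
print(seq_lengths)  # [3 2 5 1]
```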
But in your case it seems that every sentence has a single label (you don't have a target sequence), so I think you should use the last output for classification. If you use all the outputs (i.e. you have both an input sequence and a target sequence), you will need a mask to define the cost function; there is an example in Google's im2txt.
If you feel the code from Google is difficult, you can define the mask by yourself. More details about the dynamic_rnn ops: see "rnn vs dynamic_rnn". Feel free to let us know when you have a problem.
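A minimal numpy sketch of such a mask-based cost, assuming per-step cross-entropy losses and known sequence lengths are already available (all numbers here are hypothetical):

```python
import numpy as np

# Hypothetical per-step losses for a batch of 2 sequences padded to 4 steps;
# the real lengths are 3 and 2, so 9.9 entries are garbage from padded steps.
step_losses = np.array([
    [0.5, 0.2, 0.1, 9.9],
    [0.4, 0.3, 9.9, 9.9],
])
lengths = np.array([3, 2])

# Build a 0/1 mask that is 1 for real steps and 0 for padding.
steps = np.arange(step_losses.shape[1])               # [0 1 2 3]
mask = (steps[None, :] < lengths[:, None]).astype(float)

# Masked mean loss: padded steps contribute nothing.
loss = np.sum(step_losses * mask) / np.sum(mask)
print(round(loss, 2))  # 0.3
```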
@wagamamaz
@narrator-wong Hi, I have an Image Captioning example for TensorLayer; hope it helps: https://github.com/zsdonghao/Image-Captioning
@zsdonghao How do I write a stacked DynamicRNN?
@narrator-wong You don't need to stack DynamicRNNLayer like that; you can use … The dropout is implemented by …
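The specifics above were stripped from this page, but for intuition: stacking recurrent layers just means the second layer consumes the first layer's full output sequence. A minimal numpy sketch with random weights (not the TensorLayer API):

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_layer(xs, W_x, W_h):
    # Minimal tanh RNN unrolled over time: returns the full output sequence,
    # which is exactly what the next stacked layer consumes as its input.
    h = np.zeros(W_h.shape[0])
    outs = []
    for x in xs:
        h = np.tanh(x @ W_x + h @ W_h)
        outs.append(h)
    return np.stack(outs)

T, d_in, d_h = 5, 3, 4
xs = rng.normal(size=(T, d_in))

# Layer 1 reads the inputs; layer 2 reads layer 1's outputs.
out1 = rnn_layer(xs, rng.normal(size=(d_in, d_h)), rng.normal(size=(d_h, d_h)))
out2 = rnn_layer(out1, rng.normal(size=(d_h, d_h)), rng.normal(size=(d_h, d_h)))
print(out2.shape)  # (5, 4)
```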
@zsdonghao Thanks, I think I got it...
Hello, I have the same problem, but I use RNNLayer: `def define_layers(self, is_training=False):` … My input shape is also padded with zeros, like: … `File "main/lstm_units.py", line 90, in fit` … So I want to know where the problem is. Thank you very much.
@qjc937044867 Why do you have a …
@zsdonghao Because I had packaged the whole network and fit/predict into a class, so I can use them in the 'sklearn' way, as follows:
And I have found another problem in EmbeddingInputlayer: how can I use embeddings that have been pre-trained on wiki? Now I use it like this, where `self.embs.embs` is the embedding weights:
But I got the following problem, and I don't know why: … Thanks for your attention; looking forward to your reply.
@qjc937044867 You cannot use …
OK, I will try. And I have found another problem in EmbeddingInputlayer: how can I use embeddings that have been pre-trained on wiki? Now I use it like this, where `self.embs.embs` is the embedding weights: `network = tl.layers.EmbeddingInputlayer(` … But I got the following problem, and I don't know why: … Thank you very much!
You can find this example here as well.
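For the pre-trained embeddings question: conceptually, an embedding input layer is just a row lookup into a weight matrix, so reusing wiki-trained vectors means initializing (or assigning, via the session) that weight matrix with the pre-trained numpy array instead of a random initializer. A numpy sketch of the lookup itself (the matrix here is made up):

```python
import numpy as np

# Hypothetical pre-trained embedding matrix (vocab_size=5, embedding_dim=3),
# e.g. loaded from a wiki-trained word2vec file.
pretrained = np.arange(15, dtype=float).reshape(5, 3)

# Embedding lookup = selecting rows by token id.
ids = np.array([0, 3, 3, 1])
vectors = pretrained[ids]
print(vectors.shape)  # (4, 3)
```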
@zsdonghao Sorry, I haven't found the right way to use the pre-trained embeddings in those files. And when I try to use RNNLayer in the PTB way, I get another problem; my code is as follows:
The error report is: … I know I should try to solve this problem by myself, but I have to finish it in two days, so please help me. Thank you very much!
@zsdonghao OK, thank you.
tl.cost refactored, tl.initializers refactored, tested, documented.
Thanks for your fix, but the new code

```python
max_length = tf.shape(self.outputs)[1]
self.outputs = tf.reshape(tf.concat(1, outputs), [-1, max_length, n_hidden])
```

is also wrong: `self.outputs` does not exist yet when `max_length` is computed. It should probably be

```python
max_length = tf.shape(outputs)[1]
```
I can not find a DynamicRNNLayer example. In the examples, I can not understand `shape=[batch_size, None]`: do I just write None, or n_step (max)?
My data looks like this (Sentiment Analysis):
I pad with zeros.
How to use DynamicRNNLayer?
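For the one-label-per-sentence setup described above, the answer later in the thread is to classify from the last output. With zero padding, "last" must mean the last real step of each sequence, not the padded max step. A numpy sketch (all shapes and values hypothetical):

```python
import numpy as np

# Hypothetical RNN output: batch=3, max_steps=4, hidden=2,
# with real sequence lengths 2, 4, 1 (the rest is zero padding).
outputs = np.arange(24, dtype=float).reshape(3, 4, 2)
lengths = np.array([2, 4, 1])

# Take the output at the last *real* step of each sequence.
last = outputs[np.arange(3), lengths - 1]
print(last.shape)  # (3, 2)
```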