tf.transpose error when trying to transpose matrix of bools (workaround below), also recommend adding tf.repeat #542
Comments
Suggestions:
Thanks, I didn't know about tf.tile. I'll try that, and also try to get bool working for transpose. I think it makes sense for the transpose method to work for any type of matrix, not just numerical ones.
Yes, transpose should work for any dtype. The preferred solution is to use a suitable macro from register_types.h to make that happen.
Looks like
Fix in review. |
I'm trying to implement some conditioning flows when training RNNs. Basically, once an end-of-sequence event is detected, I reset the RNN state to zeros; I've been able to build this logic successfully, I think, with tf.greater() and tf.select().
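A minimal NumPy sketch of that reset logic (the original TensorFlow code is not shown in the issue; tf.greater and tf.select correspond to np.greater and np.where here, and the function name is illustrative):

```python
import numpy as np

def reset_finished_states(state, eos_flags):
    """Zero out the state rows of sequences that have ended.

    state:     (batch, state_size) float array, the current RNN state.
    eos_flags: (batch,) array of 0/1 end-of-sequence indicators.
    """
    mask = np.greater(eos_flags, 0)                           # True where the sequence ended
    mask = np.repeat(mask[:, None], state.shape[1], axis=1)   # repeat the flag across state dims
    # select zeros where the sequence ended, keep the state otherwise
    return np.where(mask, np.zeros_like(state), state)

state = np.array([[1.0, 2.0], [3.0, 4.0]])
flags = np.array([0, 1])
new_state = reset_finished_states(state, flags)  # second row is reset to zeros
```

Note the np.repeat step: the per-batch boolean flag has to be repeated across the state dimensions before selection, which is exactly the tf.repeat-style operation requested below.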
I implemented a numpy.repeat(a, repeats)-style equivalent in TensorFlow. I recommend that this also be added in the future as a tf.repeat() function, for repeating a set of boolean values across a tensor for control flows.
Here's my implementation of a repeater (it only works for the non-generalised case covered by my problem, so it will need to be generalised to higher-dimensional tensors in the future):
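The original snippet was not preserved above. A sketch of how such a 1-D repeater can be built from tile + reshape + transpose, shown in NumPy (the same ops exist in TensorFlow as tf.tile, tf.reshape, and tf.transpose; the function name is illustrative):

```python
import numpy as np

def repeat_1d(a, repeats):
    """Repeat each element of a 1-D array `repeats` times,
    like numpy.repeat(a, repeats), using only tile/reshape/transpose."""
    a = np.asarray(a)
    tiled = np.tile(a, repeats)                 # repeats copies: [a, a, ..., a]
    # Reshape to (repeats, n), then transpose so the copies of each
    # element become adjacent, and flatten back to 1-D.
    return tiled.reshape(repeats, a.shape[0]).T.reshape(-1)

repeat_1d([1, 2, 3], 2)  # → array([1, 1, 2, 2, 3, 3])
```

The transpose in the last step is the one that trips over bool inputs in tf.transpose, which is what motivates both the workaround below and the dtype fix discussed above.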
The issue is that I needed to transpose the result at the end to make the dimensions line up, and the results are all boolean values; currently, I've noticed that tf.transpose doesn't transpose a matrix of bools.
My workaround was to apply the functions to real numbers and afterwards cast the end result into a large bool matrix, although this isn't ideal.
Workaround:
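The workaround code itself was not preserved above. A minimal sketch of the described cast-around-transpose trick, in NumPy (in TensorFlow this would be tf.cast(tf.transpose(tf.cast(x, tf.float32)), tf.bool)):

```python
import numpy as np

# Work around the missing bool support in transpose: move to a numeric
# dtype, transpose there, then cast the result back to bool.
bool_mat = np.array([[True, False, True],
                     [False, True, False]])
as_float = bool_mat.astype(np.float32)   # bool -> float, since transpose handles floats
transposed = as_float.T                  # transpose on the numeric tensor
result = transposed.astype(bool)         # float -> bool, recovering the boolean matrix
```

This costs two extra casts and a float copy of the matrix, which is why it isn't ideal compared to transpose simply supporting the bool dtype.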
new_state is the LSTM state to be fed in next time. If the end-of-content state is detected during training, we reset it. This way, I can train batches of the same length.