In the example for converting a Tensor of rank 3 to rank 2, a combination of static and dynamic shapes is used (based on the get_shape function). Isn't it enough to use dynamic shapes alone for this purpose, as follows? What is the merit of using static shapes?
import tensorflow as tf  # TensorFlow 1.x API

b = tf.placeholder(tf.float32, [None, 10, 32])      # first dimension unknown at graph-construction time
shape = tf.shape(b)                                 # dynamic (runtime) shape, a rank-1 int32 tensor
b = tf.reshape(b, [shape[0], shape[1] * shape[2]])  # flatten the last two dimensions
It depends on the use case. Sometimes it's perfectly fine to use dynamic shapes. But some ops need certain dimensions of your tensors to be static. For example, tf.layers.dense needs to know the last dimension of your tensor statically so that it can allocate the weight variable. In those cases you'd want static shapes. I tend to use a custom get_shape function that prefers to return static shapes, as sketched below.
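Here is a minimal sketch of such a helper, assuming the TensorFlow 1.x API. The name get_shape matches the helper mentioned above, but this body is an illustration of the pattern, not necessarily the repo's exact implementation:

import tensorflow as tf  # TensorFlow 1.x API

def get_shape(tensor):
    """Return the shape as a list, preferring static dimensions.

    Each entry is a Python int when the dimension is known at
    graph-construction time, and a scalar tensor otherwise.
    """
    static = tensor.shape.as_list()  # e.g. [None, 10, 32]
    dynamic = tf.shape(tensor)       # runtime shape, a rank-1 tensor
    return [s if s is not None else dynamic[i] for i, s in enumerate(static)]

b = tf.placeholder(tf.float32, [None, 10, 32])
shape = get_shape(b)  # [<scalar tensor>, 10, 32]
b = tf.reshape(b, [shape[0], shape[1] * shape[2]])
# b now has static shape (?, 320): the last dimension is a known int,
# so ops like tf.layers.dense(b, units) can allocate their weight variable.

Compare this with the all-dynamic version in the question: there, shape[1] * shape[2] is a tensor, so the reshaped result has static shape (?, ?) and tf.layers.dense would fail because it cannot size its weights at graph-construction time.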