
Added Permute layer as suggested by loyeamen on #401 #409

Merged: 1 commit into keras-team:master on Jul 19, 2015
Conversation

@anayebi (Contributor) commented Jul 17, 2015

I added the Permute layer as a Core layer, as suggested by loyeamen on #401. It permutes the dimensions of the data according to the given tuple.

@fchollet (Member) commented:
Can you add documentation, including a very short example (as part of the documentation) of the use case of this layer?

@anayebi (Contributor, Author) commented Jul 18, 2015

Sure, I'd be happy to. Would you like me to add it to the keras.io documentation page?

Anyway, here it is below:

keras.layers.core.Permute(dims)

Permutes the dimensions of the data according to the given tuple.

Input shape: This layer does not assume a specific input shape.
Output shape: Same as the input shape, but with the dimensions re-ordered according to the given tuple.
Arguments:
dims: a tuple of integers specifying the ordering of the dimensions of the data.

Example:
# input shape: (nb_samples, 10)
model.add(Dense(10, 50))       # output shape: (nb_samples, 50)
model.add(Reshape(10, 5))      # output shape: (nb_samples, 10, 5)
model.add(Permute((0, 2, 1)))  # output shape: (nb_samples, 5, 10)
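The permutation in the example above can be sketched with NumPy's transpose standing in for the backend op (an assumption for illustration; the actual layer runs on the Keras backend of the time):

```python
import numpy as np

# Stand-in for the Permute example above: reorder the axes of a
# (nb_samples, 10, 5) batch according to the tuple (0, 2, 1).
x = np.zeros((32, 10, 5))        # (nb_samples, 10, 5)
y = np.transpose(x, (0, 2, 1))   # swap the last two dimensions
print(y.shape)                   # (32, 5, 10)
```

Note that the tuple here includes axis 0, the samples dimension, which is what the maintainer's follow-up below addresses.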

@fchollet fchollet merged commit fea9570 into keras-team:master Jul 19, 2015
@fchollet (Member) commented:
I modified it to exclude the samples dimension from the permutation, as permuting it would break training. I also added the documentation.
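The merged behaviour can be sketched as follows (a minimal sketch with a hypothetical `permute` helper, not the actual layer code): the dims tuple indexes only the non-sample axes, starting at 1, and axis 0 is always pinned first so batches stay aligned with their labels during training.

```python
import numpy as np

def permute(x, dims):
    # Axis 0 (the samples dimension) is always kept in place;
    # dims refers to the remaining axes, indexed from 1.
    return np.transpose(x, (0,) + tuple(dims))

x = np.zeros((32, 10, 5))
y = permute(x, (2, 1))   # swap the two feature axes only
print(y.shape)           # (32, 5, 10)
```

This matches the dims convention described in the merged documentation: `Permute((2, 1))` swaps the feature axes without ever touching the batch axis.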

fchollet pushed a commit that referenced this pull request Sep 22, 2023
This consolidates the use of the "meta" device to compute_output_shape,
and will fall back to eager execution if the "meta" device placement
fails.

Haven't run any benchmarking yet, let me know if I should.
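The idea in this commit message can be sketched in PyTorch (an illustrative assumption, not the Keras source; `infer_output_shape` is a hypothetical helper): tensors on the "meta" device carry shapes and dtypes but no storage, so output shapes can be computed without executing the real op, with eager execution as the fallback.

```python
import torch

def infer_output_shape(fn, input_shape):
    # Try shape inference on the "meta" device: no memory is
    # allocated, only shape/dtype propagation is performed.
    try:
        x = torch.empty(input_shape, device="meta")
        return tuple(fn(x).shape)
    except Exception:
        # Fall back to eager execution if meta placement fails.
        x = torch.zeros(input_shape)
        return tuple(fn(x).shape)

print(infer_output_shape(lambda t: t.permute(0, 2, 1), (4, 10, 5)))
# (4, 5, 10)
```

The fallback keeps correctness for ops that lack meta-device support, at the cost of actually running them.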
hubingallin pushed a commit to hubingallin/keras that referenced this pull request Sep 22, 2023