KnetArray transpose and permutedims missing. #39

Closed
denizyuret opened this issue Nov 12, 2016 · 7 comments · Fixed by #474

Comments

@denizyuret
Owner

No description provided.

@denizyuret
Owner Author

Is permutedims for 4D and 5D practical? Necessary?

@denizyuret
Copy link
Owner Author

I am leaving this issue open for the general N-D implementation.

This was referenced Feb 27, 2017
@denizyuret
Owner Author

@ilkerkesen do we actually have a general N-D implementation (beyond 5-D) that works? If so, let's close this.

@ilkerkesen
Collaborator

ilkerkesen commented Dec 24, 2017 via email

@ilkerkesen
Collaborator

So far, so good:

  1. The current implementation works correctly.
  2. Yes, it takes advantage of parallel programming, but it ignores some of its crucial core concepts.
  3. I did have a generic implementation but couldn't port it into Knet; let's discuss it later, since it would be about as slow as the current implementation.

I've looked at PyTorch's source code and I think this is the part where the transpose magic happens. Unlike Julia, PyTorch doesn't actually have a permutedims; its transpose also handles multi-dimensional arrays. Let's look at the differences:

  • The PyTorch equivalent of permutedims(a, (1, 3, 2)) is a.transpose(1, 2) (remember that Python indexing starts from zero).
  • The PyTorch equivalent of permutedims(a, (3, 1, 2)) is a.transpose(0, 2).transpose(1, 2), i.e. two cascaded transposes.
  • PyTorch arrays are row-major, so we need to translate those CUBLAS calls to column-major order.

As a first step, I'm going to implement a transposedim operation mimicking PyTorch's transpose method. We'll discuss the remaining features later.
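
For reference, here is a minimal CPU sketch (plain Julia Arrays, not KnetArray) of what such a transposedim would compute, and how cascading two pairwise swaps reproduces a general permutation; transposedim is the proposed name from above, everything else is illustrative only:

```julia
# Hypothetical transposedim: swap two dimensions, like PyTorch's transpose(dim0, dim1).
transposedim(a, d1, d2) =
    permutedims(a, ntuple(i -> i == d1 ? d2 : (i == d2 ? d1 : i), ndims(a)))

a = rand(2, 3, 4)
# A single swap covers permutations that exchange two dimensions:
@assert transposedim(a, 2, 3) == permutedims(a, (1, 3, 2))
# Cascaded swaps cover the general case, e.g. (3, 1, 2):
@assert transposedim(transposedim(a, 1, 3), 2, 3) == permutedims(a, (3, 1, 2))
```

These identities are stated in Julia's 1-based, column-major indexing; they mirror the cascaded 0-based PyTorch calls listed above.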

@ilkerkesen
Collaborator

Caffe2 has a multi-dimensional transpose operation of the kind we want. It takes advantage of cuDNN's cudnnTransformTensor kernel. I'm on it.
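
For context, here is a minimal CPU reference sketch (not the Knet or Caffe2 kernel) of what a general N-D permutation computes; a cudnnTransformTensor-based implementation expresses the same index reordering through the strides it puts in the source and destination tensor descriptors:

```julia
# Reference N-D permutation: the destination index is the source index
# reordered by `perm`, so size(b) == size(a)[perm] and b[I[perm]] == a[I].
function permutedims_ref(a::AbstractArray{T,N}, perm::NTuple{N,Int}) where {T,N}
    b = similar(a, ntuple(d -> size(a, perm[d]), N))
    for I in CartesianIndices(a)
        t = Tuple(I)
        b[ntuple(d -> t[perm[d]], N)...] = a[I]
    end
    return b
end

a = rand(Float32, 2, 3, 4, 5)
@assert permutedims_ref(a, (4, 2, 1, 3)) == permutedims(a, (4, 2, 1, 3))
```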

denizyuret pushed a commit that referenced this issue Aug 1, 2019
denizyuret mentioned this issue Aug 1, 2019
@denizyuret
Owner Author

Solved using CuArrays on the permutedims branch; please test.
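
A hedged usage sketch for testing, assuming the permutedims branch provides the CuArrays-backed method for KnetArray as described above (requires a GPU; exact API may differ):

```julia
using Knet
a = KnetArray(rand(Float32, 2, 3, 4))
b = permutedims(a, (3, 1, 2))          # general N-D permutation on the GPU
@assert Array(b) == permutedims(Array(a), (3, 1, 2))
```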
