
MergeLayer #50

Closed
Qwlouse opened this issue Oct 8, 2015 · 5 comments

Comments

@Qwlouse
Collaborator

Qwlouse commented Oct 8, 2015

We need a layer that takes multiple inputs and simply concatenates them along the last (feature) dimension.
On the CPU this layer could be omitted, because the NumpyHandler supports slicing along the feature dimension, but with the PyCudaHandler this is the only way to merge the outputs of two layers.
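The forward pass described above can be sketched in a few lines of NumPy. This is a hypothetical illustration, not brainstorm's actual layer API; `merge_forward` and the (time, batch, features) shape convention are assumptions for the example.

```python
import numpy as np

def merge_forward(inputs):
    """Concatenate a list of input arrays along the last (feature) axis.

    Hypothetical sketch of a MergeLayer forward pass; shapes are assumed
    to be (time, batch, features).
    """
    return np.concatenate(inputs, axis=-1)

a = np.ones((4, 2, 3))   # 3 features
b = np.zeros((4, 2, 5))  # 5 features
out = merge_forward([a, b])
print(out.shape)  # (4, 2, 8)
```

The corresponding backward pass would slice the output gradient back apart along the same axis, which is exactly the feature-dimension slicing that the PyCudaHandler discussion below is about.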

Qwlouse added a commit that referenced this issue Oct 12, 2015
It might not work with a PyCudaHandler because it needs some slicing in the feature dimension....

tackles #50
@flukeskywalker
Collaborator

@untom Can this work on the GPU at all or should it be removed?

@untom
Collaborator

untom commented Oct 18, 2015

It can work... I'll see if I can hack one up today.

@flukeskywalker
Collaborator

Great! Is this essentially the same issue that holds back the optimized LSTM implementation in LSTMOpt from working with PyCudaHandler?

@Qwlouse
Collaborator Author

Qwlouse commented Oct 19, 2015

By using some extra memory this could be accomplished with only a slice-enabled copy_to (no need for add_tt then). So if that is easier, we could do that. Actually, I haven't even checked whether copy_to works on slices...
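The trade-off suggested here can be sketched with NumPy standing in for the handler: preallocate one output buffer and fill it with plain slice copies (the role copy_to would play), so no elementwise add like add_tt is needed. The function name and buffer layout below are assumptions for illustration, not brainstorm's handler API.

```python
import numpy as np

def merge_with_copies(inputs, out):
    """Fill a preallocated buffer by copying each input into its
    feature slice; a stand-in for a slice-enabled copy_to."""
    offset = 0
    for x in inputs:
        n = x.shape[-1]
        out[..., offset:offset + n] = x  # pure copy, no add needed
        offset += n
    return out

a = np.full((4, 2, 3), 1.0)
b = np.full((4, 2, 5), 2.0)
buf = np.empty((4, 2, 8))
merge_with_copies([a, b], buf)
```

The extra memory is the `buf` allocation; the payoff is that only a copy kernel over a strided feature slice is required on the GPU, rather than an accumulating add.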

@untom
Collaborator

untom commented Oct 20, 2015

Implemented via f36b57d and f86bb99. (There might be faster ways to implement this on the GPU, so it might be worth revisiting if a profiling run shows that this is really a bottleneck.)

@untom untom closed this as completed Oct 20, 2015