
Multilinear map #26527

Open
santient opened this issue Sep 20, 2019 · 6 comments
Labels
function request A request for a new function or the addition of new arguments/modes to an existing function. · module: nn Related to torch.nn · triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments

santient commented Sep 20, 2019

🚀 Feature

Multilinear layer: a generalization of the Linear and Bilinear layers/functions to any configurable number of variables. This should be a learnable multilinear map that does the equivalent of this function, plus a bias (from the Multilinear map Wikipedia page):
f : V_1 × ⋯ × V_n → W, linear in each variable separately:
f(v_1, …, a·v_i + b·u_i, …, v_n) = a·f(v_1, …, v_i, …, v_n) + b·f(v_1, …, u_i, …, v_n)

Details

Implement the following:

torch.nn.functional.multilinear((input1, ..., inputN), weight, bias=None)
torch.nn.Multilinear((in1_features, ..., inN_features), out_features, bias=True)

I think the rank of the weight tensor would be the number of inputs plus one: one dimension per input's features, plus one for the output features.
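
For concreteness, here is a minimal sketch of what the functional form might compute, assuming unbatched 1-D inputs. The name and signature simply mirror the proposal above; this is not an existing PyTorch API.

```python
import torch

def multilinear(inputs, weight, bias=None):
    # Hypothetical sketch of the proposed functional form.
    # weight has shape (out_features, in1_features, ..., inN_features),
    # i.e. its rank is the number of inputs plus one, as noted above.
    out = weight
    for x in reversed(inputs):
        out = out @ x  # contract the trailing weight dim with one input
    if bias is not None:
        out = out + bias
    return out  # shape: (out_features,)
```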

Motivation

This is useful for situations where you want to combine more than two vectors using a learnable function that is linear in each variable separately. A good use case would be tensor fusion with more than two variables.

I plan to implement this for use in a project. It should be fairly easy to implement using something like torch.einsum, but a native C++ implementation could speed things up significantly.
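
For example, with three unbatched input vectors the einsum route is essentially a one-liner (all shapes and sizes below are illustrative assumptions):

```python
import torch

# Illustrative sizes: three input vectors and a rank-4 weight.
I, J, K, O = 4, 5, 6, 7
x1, x2, x3 = torch.randn(I), torch.randn(J), torch.randn(K)
weight = torch.randn(O, I, J, K)
bias = torch.randn(O)

# Each input contracts with its own weight dimension.
out = torch.einsum('i,j,k,oijk->o', x1, x2, x3, weight) + bias
print(out.shape)  # torch.Size([7])
```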

Should I submit a PR when this is done?

cc @albanD @mruberry

colesbury added the enhancement (Not as big of a feature, but technically not a bug. Should be easy to fix), module: operators, and triage review labels on Sep 20, 2019
ezyang (Contributor) commented Sep 23, 2019

FWIW, einsum is implemented in C++. (Tensor networks may be relevant.) Also cc @zou3519

colesbury (Member) commented

@ezyang and @fmassa suggest implementing this with einsum for now.

VitalyFedyunin added the triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) label on Sep 24, 2019
acturner commented

@santient Is this implemented? If so, could you post a link to your implementation or PR?

santient (Author) commented Oct 1, 2020

@acturner Forgot about this issue but I just wrote up a quick implementation here. I found no need to use einsum. Lmk if you notice a mistake or any bugs!

acturner commented Oct 3, 2020

> @acturner Forgot about this issue but I just wrote up a quick implementation here. I found no need to use einsum. Lmk if you notice a mistake or any bugs!

Thanks so much!

santient (Author) commented Oct 4, 2020

Just FYI, I updated my implementation to use einsum in order to support batched vectors.
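
A batched variant along those lines might build the einsum equation string dynamically; the sketch below is an assumed reconstruction, not the actual linked implementation:

```python
import torch

def multilinear_batched(inputs, weight, bias=None):
    # Hypothetical sketch: each input has shape (batch, inK_features) and
    # weight has shape (out_features, in1_features, ..., inN_features).
    # For N = 2 inputs this builds 'bi,bj,oij->bo', matching nn.Bilinear.
    letters = 'ijklmnpqrstuvwxyz'  # 'b' (batch) and 'o' (output) reserved
    in_subs = ','.join('b' + letters[k] for k in range(len(inputs)))
    eq = in_subs + ',o' + letters[:len(inputs)] + '->bo'
    out = torch.einsum(eq, *inputs, weight)
    return out if bias is None else out + bias
```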

mruberry added the function request (A request for a new function or the addition of new arguments/modes to an existing function.) and module: nn (Related to torch.nn) labels, and removed the enhancement and module: operators (deprecated) labels, on Oct 8, 2020