
[Feature suggestion] einsum with custom reductions and operations #151

Open
flawr opened this issue Nov 17, 2021 · 1 comment

Comments


flawr commented Nov 17, 2021

This is just an idea and I'd love to hear other opinions: I propose adding einsum to einops with additional arguments:

reduce() has an argument reduction that allows you to specify how a dimension is reduced to a singleton. I think the usefulness is clear to everyone here. However, the native einsum implementations in various libraries don't offer this. I propose going even a step further, which I'll elaborate here:

A matrix-vector multiplication like

 einsum('i j, j -> i', A, x)

has two parts: on the one hand the reduction via sum, and on the other hand a binary operation (the product) between the entries of the two operands. As mentioned above, while einops.reduce allows for custom reductions, none of the native einsum functions (afaik) allow for them, which would be very nice to have in einsum.
But my main point is that it would be very nice if the operation (by default the product) could also be customized.
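To make the decomposition concrete, here's a minimal NumPy sketch (arrays invented for illustration) showing that the ordinary einsum above is just an elementwise product followed by a sum over j:

```python
import numpy as np

# Decompose einsum('i j, j -> i', A, x) into its two parts:
# 1. the binary operation (product), applied via broadcasting
# 2. the reduction (sum) over the contracted index j
A = np.array([[1.0, 2.0], [3.0, 4.0]])
x = np.array([10.0, 100.0])

products = A * x               # "operation" step, shape (2, 2)
result = products.sum(axis=1)  # "reduction" step over j

assert np.allclose(result, A @ x)  # matches the ordinary matrix-vector product
```

The proposal is essentially to let users swap out both steps independently.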

It is maybe a far-fetched example, but let's consider tropical algebra:
In tropical algebra we replace the usual + operation by max (or equivalently min) and the * operation by +.
So I'd imagine a "tropical matrix multiplication" as something like

einsum('i j, j -> i', A, x, reduction='max', operation='add')

(Note that tropical algebra looks quite esoteric but it is not: If you consider a regular convolution and look at the tropical counterpart
you get the morphological operations.)
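A minimal NumPy sketch (values invented for illustration) of how such a tropical matrix-vector product can be emulated today with broadcasting, with 'add' playing the role of the operation and 'max' the reduction:

```python
import numpy as np

# Emulate the hypothetical
#   einsum('i j, j -> i', A, x, reduction='max', operation='add')
# via broadcasting: pairwise addition, then max over j.
A = np.array([[0.0, 3.0], [2.0, 1.0]])
x = np.array([1.0, 4.0])

tropical_result = (A + x).max(axis=1)
# row 0: max(0+1, 3+4) = 7; row 1: max(2+1, 1+4) = 5
```

The drawback of this emulation is that it materializes the full (i, j) intermediate, which a fused einsum could avoid.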

Another example: Let's say we have a vector primes and a matrix of exponents and would like to get the actual numbers these exponents represent:

einsum('i, j i -> j', primes, exponents, reduction='prod', operation='pow')
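Here's how that could be emulated with plain NumPy broadcasting today (example values invented):

```python
import numpy as np

# Emulate the hypothetical
#   einsum('i, j i -> j', primes, exponents, reduction='prod', operation='pow'):
# raise each prime to its exponent ('pow'), then multiply along i ('prod').
primes = np.array([2, 3, 5])
exponents = np.array([[2, 1, 0],   # 2^2 * 3^1 * 5^0 = 12
                      [0, 2, 1]])  # 2^0 * 3^2 * 5^1 = 45

numbers = (primes ** exponents).prod(axis=1)
# → array([12, 45])
```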

Now we could also apply this to boolean values:

accuracy = einsum('i, i ->', predictions, ground_truth, reduction='mean', operation='and')
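A sketch with plain NumPy (example arrays invented). Note that mean-of-AND measures the fraction of positions where both inputs are True; for accuracy in the usual sense one would want an equality operation instead:

```python
import numpy as np

# Emulate the hypothetical
#   einsum('i, i ->', predictions, ground_truth, reduction='mean', operation='and'):
# elementwise logical AND as the operation, mean as the reduction.
predictions = np.array([True, True, False, True])
ground_truth = np.array([True, False, False, True])

score = np.logical_and(predictions, ground_truth).mean()
# fraction of positions where both are True: 2/4 = 0.5
```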

So this (so far hypothetical) einsum would combine the power of the custom reductions that we already know and love with the power of custom operations, all executed in one go. And we would get all the advantages of the extended einops notation that the native einsums lack, too. (This has already been suggested in #73, but I wanted to make an argument for the custom operation.)

@Hprairie

If anyone is still interested in this, I created a package to do this with PyTorch. It's a little rough around the edges and currently only supports Python >= 3.11 and torch >= 2.0. I will work on expanding the project to more Python versions soon. https://github.com/Hprairie/einfunc
