
[feature request] Operator Overloading #12764

Open
Florian1990 opened this issue Oct 17, 2018 · 2 comments
Labels
feature A request for a proper, new feature. todo Not as important as medium or high priority tasks, but we will work on these. triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments

@Florian1990

I realized that PyTorch partly prevents operator overloading by raising a TypeError rather than returning NotImplemented. As a result, Python never checks methods implementing operations with reflected operands, e.g. __rmul__:

In[1]: import torch
In[2]: class Two:
           def __mul__(self, other):
               return other * 2
           def __rmul__(self, other):
               return self * other
           
In[3]: two = Two()
In[4]: two * 3
Out[4]: 6
In[5]: 3 * two
Out[5]: 6
In[6]: two * torch.tensor(3)
Out[6]: tensor(6)
In[7]: torch.tensor(3) * two
Traceback (most recent call last):
  File "/path/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2961, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-8-e59deefe7a13>", line 1, in <module>
    torch.tensor(3) * two
TypeError: mul() received an invalid combination of arguments - got (Two), but expected one of:
 * (Tensor other)
      didn't match because some of the arguments have invalid types: (!Two!)
 * (float other)
      didn't match because some of the arguments have invalid types: (!Two!)

Is this behavior intended (e.g. for performance reasons)? I would prefer PyTorch to return NotImplemented (which might still result in a TypeError) rather than raising a TypeError directly.
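For context, Python's fallback to the reflected method only happens when the left operand's __mul__ returns NotImplemented instead of raising. A minimal sketch of the cooperative protocol being asked for (the Scalar class here is a hypothetical stand-in, not PyTorch code):

```python
class Scalar:
    """Hypothetical left operand that cooperates with reflected methods."""
    def __init__(self, value):
        self.value = value

    def __mul__(self, other):
        if isinstance(other, (int, float)):
            return Scalar(self.value * other)
        # Returning NotImplemented tells Python to try other.__rmul__(self).
        return NotImplemented


class Two:
    def __rmul__(self, other):
        return other * 2


# Scalar.__mul__ returns NotImplemented, so Python calls Two.__rmul__,
# which re-enters Scalar.__mul__ with an int and succeeds.
print((Scalar(3) * Two()).value)
```

If neither __mul__ nor __rmul__ handles the operand, Python itself raises the TypeError, so the error message is not lost.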

@apaszke
Contributor

apaszke commented Oct 17, 2018

Yes, it's done for performance reasons. I guess we could wrap the implementation in a try: except: and change the exception into a return value.
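That wrapping idea can be sketched in pure Python with a hypothetical decorator (the names notimplemented_on_typeerror and Tensorish are illustrative, not PyTorch internals):

```python
import functools

def notimplemented_on_typeerror(op):
    """Hypothetical wrapper: convert a TypeError raised by the fast path
    into a NotImplemented return value, so Python falls back to the
    reflected method of the other operand."""
    @functools.wraps(op)
    def wrapped(self, other):
        try:
            return op(self, other)
        except TypeError:
            return NotImplemented
    return wrapped


class Tensorish:
    """Stand-in for a tensor whose fast path only understands numbers."""
    def __init__(self, value):
        self.value = value

    @notimplemented_on_typeerror
    def __mul__(self, other):
        if not isinstance(other, (int, float)):
            raise TypeError(f"unsupported operand type: {type(other).__name__}")
        return Tensorish(self.value * other)


class Two:
    def __rmul__(self, other):
        return Tensorish(other.value * 2)


print((Tensorish(3) * 5).value)      # fast path handles the number
print((Tensorish(3) * Two()).value)  # TypeError becomes NotImplemented,
                                     # so Two.__rmul__ runs instead
```

The cost is one try/except frame per operator call on the fast path, which is cheap when no exception is raised; the slow path (an actual TypeError plus the reflected lookup) only pays when the operand is unsupported anyway.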

@zou3519 zou3519 added the todo Not as important as medium or high priority tasks, but we will work on these. label Oct 22, 2018
@teymour-aldridge

teymour-aldridge commented Mar 28, 2020

Hi,
I saw this thread and was just wondering whether this has been implemented.
Thank you for building such an awesome machine learning framework!

@ailzhang ailzhang added feature A request for a proper, new feature. triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module labels Mar 30, 2020