Add backward support for rudimentary NestedTensor.sum(dim) #82625
Conversation
Dr. CI: ✅ No failures (0 pending) as of commit 7ef27d3. Looks good so far; there are no failures yet.
This was already discussed in https://docs.google.com/document/d/1dVMH33IK2C7n3Ee8LBtfWw2i8MzxkAIkvuUo_hZY63k/edit#
          
test/test_nestedtensor.py (Outdated)

    self.assertEqual(nt_sum, torch.nested_tensor(ub2_sum))
    # test backward
    if requires_grad:
        # generate gradient tensor that has the same size as the output
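The per-component semantics being exercised here can be sketched with plain tensors (a hedged illustration standing in for the unbound nested components, not the nested kernel added by this PR): the gradient passed to backward has the shape of the reduced output, and sum's backward expands it back along the reduced dimension.

```python
import torch

# Plain tensors standing in for the components of a nested tensor.
components = [torch.randn(2, 3, requires_grad=True),
              torch.randn(4, 3, requires_grad=True)]

# Per-component sum over dim 0; each output has shape (3,).
sums = [c.sum(dim=0) for c in components]

# Gradient tensors with the same size as each output (what a nested
# randn_like would provide, per the discussion below).
grads = [torch.randn_like(s) for s in sums]

for s, g in zip(sums, grads):
    s.backward(g)

# sum's backward is the output gradient expanded along the reduced dim.
for c, g in zip(components, grads):
    assert torch.equal(c.grad, g.unsqueeze(0).expand_as(c))
```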
It would be nice to have torch.randn_like(nt_sum) :D
Could I hold off on this until Driss implements empty_like in #83140? I think the implementation of randn_like might be similar.
Of course!
By the way, Driss is using nt = self.random_nt(device, dtype, 4, (4, 4)) in a few of his tests. Maybe that is a good replacement?
In any case, this is a small detail that can be done later.
Oh, you're already using that now :D
Per offline discussion, this will be updated to use expand once expand semantics for nested tensor have been fleshed out.
SGTM
I didn't look at the kernel in detail, though (but I guess that's fine, as it's going to be removed soon).
@pytorchbot merge

@pytorchbot successfully started a merge job. Check the current status here and land check progress here.

Merge failed due to: Failed to merge; some land checks failed: pull, pull / linux-bionic-py3.7-clang9 / test (default, 2, 2, linux.2xlarge), pull / linux-bionic-py3.7-clang9 / test (crossref, 2, 2, linux.2xlarge)

@pytorchbot merge

@pytorchbot successfully started a merge job. Check the current status here and land check progress here.

Merge failed

@pytorchbot rebase -s

@pytorchbot successfully started a rebase job. Check the current status here
Per offline discussion, this will be updated to use expand once expand semantics for nested tensor have been fleshed out. Next steps will be to add support for other features of forward sum mentioned in #82387 and likewise update the backward.
Successfully rebased

@pytorchbot merge

@pytorchbot successfully started a merge job. Check the current status here and land check progress here.
Per offline discussion, this will be updated to use expand once expand semantics for nested tensor have been fleshed out. Next steps will be to add support for other features of forward sum mentioned in #82387 and likewise update the backward.
Pull Request resolved: #82625
Approved by: https://github.com/albanD
@pytorchbot merge

@pytorchbot successfully started a merge job. Check the current status here and land check progress here.
…82625)
Summary: Per offline discussion, this will be updated to use expand once expand semantics for nested tensor have been fleshed out. Next steps will be to add support for other features of forward sum mentioned in #82387 and likewise update the backward.
Pull Request resolved: #82625
Approved by: https://github.com/albanD
Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/bd0ad7a84f125435f9e0685f86b1ca2efd2bd43b
Reviewed By: atalman
Differential Revision: D38830753
Pulled By: mikaylagawarecki
fbshipit-source-id: 6e4db6ab37714e2fa2637d7213a7cbe6dfadab06
Per offline discussion, this will be updated to use expand once expand semantics for nested tensor have been fleshed out.
Next steps will be to add support for other features for forward sum mentioned on #82387 and likewise update the backward
Stack from ghstack (oldest at bottom):
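As a rough illustration of the forward semantics this PR's tests compare against (a sketch over a plain list of tensors, not the actual nested kernel; nested_sum_ref is a hypothetical helper named here for illustration): summing a nested tensor along a non-batch dimension is equivalent to summing each unbound component.

```python
import torch

# Hypothetical helper mirroring the unbind-based reference path: sum each
# component over the given (component-relative) dim. The real nested op
# would count the batch (nesting) dimension as dim 0.
def nested_sum_ref(tensors, dim):
    return [t.sum(dim=dim) for t in tensors]

# Two "components" with different leading sizes, as in a nested tensor.
ts = [torch.arange(6.0).reshape(2, 3), torch.arange(12.0).reshape(4, 3)]
out = nested_sum_ref(ts, dim=0)
# component 0: [[0, 1, 2], [3, 4, 5]] summed over dim 0 gives [3., 5., 7.]
```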