Implement Tensor.new_empty_strided(sizes, strides, *, dtype, device, requires_grad) #47225
Commits on Nov 2, 2020
- Commit 3432e4f: Implement Tensor.new_empty_strided(sizes, strides, *, dtype, device, requires_grad)

  Summary
  -------
  This PR implements Tensor.new_empty_strided. Many of our torch.* factory
  functions have a corresponding new_* method (e.g., torch.empty and
  Tensor.new_empty), but torch.empty_strided has no such counterpart. This PR
  adds one.

  Motivation
  ----------
  The real motivation is to let vmap work through CopySlices. CopySlices shows
  up often in double backward because many view functions have backward
  formulas that perform a view followed by an in-place operation.
  https://github.com/pytorch/pytorch/blob/e0fd590ec950cb1e65ea0431c9e765f8cda27908/torch/csrc/autograd/functions/tensor.cpp#L78-L106

  To support vmap through CopySlices, the approach in this stack is to:
  - add `Tensor.new_empty_strided` and replace `empty_strided` in CopySlices
    with it, so that batch information can be propagated;
  - make some slight modifications to AsStridedBackward (and add an
    as_strided batching rule).

  Please let me know if it would be better to squash everything related to
  supporting vmap over CopySlices into a single big PR.

  Test Plan
  ---------
  - New tests.

  [ghstack-poisoned]
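A minimal usage sketch of the new method, assuming the defaults mirror the other new_* methods (dtype and device inherit from self unless overridden, as the PR summary implies):

```python
import torch

base = torch.ones(2, 3, dtype=torch.float64)

# dtype and device are inherited from `base`; like torch.empty_strided,
# the returned tensor's values are uninitialized.
t = base.new_empty_strided((2, 3), (3, 1))
assert t.dtype == torch.float64
assert t.stride() == (3, 1)

# Keyword arguments override the inherited properties, mirroring new_empty.
u = base.new_empty_strided((2, 3), (1, 2), dtype=torch.float32,
                           requires_grad=True)
assert u.dtype == torch.float32 and u.requires_grad
```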
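For context on the CopySlices motivation above, a small illustration (not from the PR): an in-place update through a view rewrites the base tensor's grad_fn to a CopySlices node, which autograd must replay during (double) backward.

```python
import torch

x = torch.randn(4, requires_grad=True)
y = x * 2     # y.grad_fn is MulBackward0
v = y[:2]     # v is a view into y
v.mul_(3)     # in-place op on the view...

# ...replaces y's grad_fn: autograd now records a CopySlices node that
# copies the slice's gradient back into the base during backward.
print(type(y.grad_fn).__name__)  # CopySlices
y.sum().backward()
print(x.grad)                    # tensor([6., 6., 2., 2.])
```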
Update on "Implement Tensor.new_empty_strided(sizes, strides, *, dtyp…
…e, device, requires_grad)" Summary ------- This PR implements Tensor.new_empty_strided. Many of our torch.* factory functions have a corresponding new_* method (e.g., torch.empty and torch.new_empty), but there is no corresponding method to torch.empty_strided. This PR adds one. Motivation ---------- The real motivation behind this is for vmap to be able to work through CopySlices. CopySlices shows up a lot in double backwards because a lot of view functions have backward formulas that perform view+inplace. https://github.com/pytorch/pytorch/blob/e0fd590ec950cb1e65ea0431c9e765f8cda27908/torch/csrc/autograd/functions/tensor.cpp#L78-L106 To support vmap through CopySlices, the approach in this stack is to: - add `Tensor.new_empty_strided` and replace `empty_strided` in CopySlices with that so that we can propagate batch information. - Make some slight modifications to AsStridedBackward (and add as_strided batching rule) Please let me know if it would be better if I squashed everything related to supporting vmap over CopySlices together into a single big PR. Test Plan --------- - New tests. [ghstack-poisoned]
Configuration menu - View commit details
-
Copy full SHA for 30ab22f - Browse repository at this point
Copy the full SHA 30ab22fView commit details
Commits on Nov 3, 2020
- Commit 5f1af8c: Update on "Implement Tensor.new_empty_strided(sizes, strides, *, dtype, device, requires_grad)" (same commit message as above)
- Commit 93b0c07: Update on "Implement Tensor.new_empty_strided(sizes, strides, *, dtype, device, requires_grad)" (same commit message as above)
- Commit 0c0548f: Update on "Implement Tensor.new_empty_strided(sizes, strides, *, dtype, device, requires_grad)" (same commit message as above)
Commits on Nov 4, 2020
- Commit c6e75ea: Update on "Implement Tensor.new_empty_strided(sizes, strides, *, dtype, device, requires_grad)" (same commit message as above)
- Commit efc78f0: Update on "Implement Tensor.new_empty_strided(sizes, strides, *, dtype, device, requires_grad)" (same commit message as above)
Commits on Nov 9, 2020
- Commit 7ea21b5: Update on "Implement Tensor.new_empty_strided(sizes, strides, *, dtype, device, requires_grad)" (same commit message as above, with one addition: Differential Revision: [D24741688](https://our.internmc.facebook.com/intern/diff/D24741688))