
Implement Tensor.new_empty_strided(sizes, strides, *, dtype, device, requires_grad) #47225

Closed
wants to merge 8 commits

Commits on Nov 2, 2020

  1. Implement Tensor.new_empty_strided(sizes, strides, *, dtype, device, requires_grad)
    
    Summary
    -------
    This PR implements Tensor.new_empty_strided. Many of our torch.*
    factory functions have a corresponding new_* method (e.g., torch.empty
    and Tensor.new_empty), but torch.empty_strided has no such method.
    This PR adds one.
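
    For context, a minimal usage sketch of the new method, assuming it
    follows the same conventions as the existing new_* methods (dtype and
    device are inherited from the source tensor unless overridden):

    ```python
    import torch

    base = torch.randn(2, 3, dtype=torch.float64)

    # Factory function: dtype/device must be spelled out explicitly.
    a = torch.empty_strided((2, 3), (3, 1), dtype=torch.float64)

    # New method added by this PR: inherits dtype/device from `base`,
    # matching the behavior of the other new_* methods.
    b = base.new_empty_strided((2, 3), (3, 1))

    assert b.dtype == base.dtype
    assert b.stride() == (3, 1)
    ```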
    
    Motivation
    ----------
    The real motivation is to let vmap work through CopySlices. CopySlices
    shows up a lot in double backward because many view functions have
    backward formulas that perform a view followed by an in-place write.
    
    https://github.com/pytorch/pytorch/blob/e0fd590ec950cb1e65ea0431c9e765f8cda27908/torch/csrc/autograd/functions/tensor.cpp#L78-L106
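
    As a quick, hedged repro of where a CopySlices node comes from (an
    in-place write into a slice of a non-leaf tensor rebases its grad_fn):

    ```python
    import torch

    x = torch.randn(3, requires_grad=True)
    y = x.clone()     # non-leaf tensor, grad_fn is CloneBackward
    y[0] = 2.0        # in-place write into a slice (a view of y)
    print(y.grad_fn)  # should now print a CopySlices node
    ```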
    
    To support vmap through CopySlices, the approach in this stack is to:
    - Add `Tensor.new_empty_strided` and replace the `empty_strided` call
    in CopySlices with it, so that batch information propagates (see the
    sketch after this list).
    - Make some slight modifications to AsStridedBackward (and add an
    as_strided batching rule).
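
    A hedged sketch of why the method form can propagate more information
    than the free function: methods dispatch through the tensor they are
    called on, so wrapper subclasses survive the call. The `Tagged` class
    below is purely illustrative, standing in for a batched tensor:

    ```python
    import torch

    class Tagged(torch.Tensor):
        """Trivial wrapper subclass, standing in for a batched tensor."""

    t = torch.randn(2, 3).as_subclass(Tagged)

    # Free-function factory: no tensor argument to dispatch on, so the
    # result is a plain Tensor.
    plain = torch.empty_strided((2, 3), (3, 1))
    print(type(plain))   # <class 'torch.Tensor'>

    # Method form: dispatches through `t`, so the subclass is preserved,
    # analogous to how a batched tensor can propagate its batch dims.
    tagged = t.new_empty_strided((2, 3), (3, 1))
    print(type(tagged))  # <class '__main__.Tagged'>
    ```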
    
    Please let me know if it would be better to squash everything related
    to supporting vmap over CopySlices into a single big PR.
    
    Test Plan
    ---------
    - New tests.
    
    [ghstack-poisoned]
    zou3519 committed Nov 2, 2020
    3432e4f
  2. Update on "Implement Tensor.new_empty_strided(sizes, strides, *, dtyp…

    …e, device, requires_grad)"
    
    Summary
    -------
    This PR implements Tensor.new_empty_strided. Many of our torch.* factory
    functions have a corresponding new_* method (e.g., torch.empty and
    torch.new_empty), but there is no corresponding method to
    torch.empty_strided. This PR adds one.
    
    Motivation
    ----------
    The real motivation behind this is for vmap to be able to work through
    CopySlices. CopySlices shows up a lot in double backwards because a lot
    of view functions have backward formulas that perform view+inplace.
    
    https://github.com/pytorch/pytorch/blob/e0fd590ec950cb1e65ea0431c9e765f8cda27908/torch/csrc/autograd/functions/tensor.cpp#L78-L106
    
    To support vmap through CopySlices, the approach in this stack is to:
    - add `Tensor.new_empty_strided` and replace `empty_strided` in
    CopySlices with that so that we can propagate batch information.
    - Make some slight modifications to AsStridedBackward (and add
    as_strided batching rule)
    
    Please let me know if it would be better if I squashed everything related to
    supporting vmap over CopySlices together into a single big PR.
    
    Test Plan
    ---------
    - New tests.
    
    [ghstack-poisoned]
    zou3519 committed Nov 2, 2020
    Configuration menu
    Copy the full SHA
    30ab22f View commit details
    Browse the repository at this point in the history

Commits on Nov 3, 2020

  1. Update on "Implement Tensor.new_empty_strided(sizes, strides, *, dtyp…

    …e, device, requires_grad)"
    
    Summary
    -------
    This PR implements Tensor.new_empty_strided. Many of our torch.* factory
    functions have a corresponding new_* method (e.g., torch.empty and
    torch.new_empty), but there is no corresponding method to
    torch.empty_strided. This PR adds one.
    
    Motivation
    ----------
    The real motivation behind this is for vmap to be able to work through
    CopySlices. CopySlices shows up a lot in double backwards because a lot
    of view functions have backward formulas that perform view+inplace.
    
    https://github.com/pytorch/pytorch/blob/e0fd590ec950cb1e65ea0431c9e765f8cda27908/torch/csrc/autograd/functions/tensor.cpp#L78-L106
    
    To support vmap through CopySlices, the approach in this stack is to:
    - add `Tensor.new_empty_strided` and replace `empty_strided` in
    CopySlices with that so that we can propagate batch information.
    - Make some slight modifications to AsStridedBackward (and add
    as_strided batching rule)
    
    Please let me know if it would be better if I squashed everything related to
    supporting vmap over CopySlices together into a single big PR.
    
    Test Plan
    ---------
    - New tests.
    
    [ghstack-poisoned]
    zou3519 committed Nov 3, 2020
    Configuration menu
    Copy the full SHA
    5f1af8c View commit details
    Browse the repository at this point in the history
  2. Update on "Implement Tensor.new_empty_strided(sizes, strides, *, dtyp…

    …e, device, requires_grad)"
    
    Summary
    -------
    This PR implements Tensor.new_empty_strided. Many of our torch.* factory
    functions have a corresponding new_* method (e.g., torch.empty and
    torch.new_empty), but there is no corresponding method to
    torch.empty_strided. This PR adds one.
    
    Motivation
    ----------
    The real motivation behind this is for vmap to be able to work through
    CopySlices. CopySlices shows up a lot in double backwards because a lot
    of view functions have backward formulas that perform view+inplace.
    
    https://github.com/pytorch/pytorch/blob/e0fd590ec950cb1e65ea0431c9e765f8cda27908/torch/csrc/autograd/functions/tensor.cpp#L78-L106
    
    To support vmap through CopySlices, the approach in this stack is to:
    - add `Tensor.new_empty_strided` and replace `empty_strided` in
    CopySlices with that so that we can propagate batch information.
    - Make some slight modifications to AsStridedBackward (and add
    as_strided batching rule)
    
    Please let me know if it would be better if I squashed everything related to
    supporting vmap over CopySlices together into a single big PR.
    
    Test Plan
    ---------
    - New tests.
    
    [ghstack-poisoned]
    zou3519 committed Nov 3, 2020
    Configuration menu
    Copy the full SHA
    93b0c07 View commit details
    Browse the repository at this point in the history
  3. Update on "Implement Tensor.new_empty_strided(sizes, strides, *, dtyp…

    …e, device, requires_grad)"
    
    Summary
    -------
    This PR implements Tensor.new_empty_strided. Many of our torch.* factory
    functions have a corresponding new_* method (e.g., torch.empty and
    torch.new_empty), but there is no corresponding method to
    torch.empty_strided. This PR adds one.
    
    Motivation
    ----------
    The real motivation behind this is for vmap to be able to work through
    CopySlices. CopySlices shows up a lot in double backwards because a lot
    of view functions have backward formulas that perform view+inplace.
    
    https://github.com/pytorch/pytorch/blob/e0fd590ec950cb1e65ea0431c9e765f8cda27908/torch/csrc/autograd/functions/tensor.cpp#L78-L106
    
    To support vmap through CopySlices, the approach in this stack is to:
    - add `Tensor.new_empty_strided` and replace `empty_strided` in
    CopySlices with that so that we can propagate batch information.
    - Make some slight modifications to AsStridedBackward (and add
    as_strided batching rule)
    
    Please let me know if it would be better if I squashed everything related to
    supporting vmap over CopySlices together into a single big PR.
    
    Test Plan
    ---------
    - New tests.
    
    [ghstack-poisoned]
    zou3519 committed Nov 3, 2020
    Configuration menu
    Copy the full SHA
    0c0548f View commit details
    Browse the repository at this point in the history

Commits on Nov 4, 2020

  1. Update on "Implement Tensor.new_empty_strided(sizes, strides, *, dtyp…

    …e, device, requires_grad)"
    
    Summary
    -------
    This PR implements Tensor.new_empty_strided. Many of our torch.* factory
    functions have a corresponding new_* method (e.g., torch.empty and
    torch.new_empty), but there is no corresponding method to
    torch.empty_strided. This PR adds one.
    
    Motivation
    ----------
    The real motivation behind this is for vmap to be able to work through
    CopySlices. CopySlices shows up a lot in double backwards because a lot
    of view functions have backward formulas that perform view+inplace.
    
    https://github.com/pytorch/pytorch/blob/e0fd590ec950cb1e65ea0431c9e765f8cda27908/torch/csrc/autograd/functions/tensor.cpp#L78-L106
    
    To support vmap through CopySlices, the approach in this stack is to:
    - add `Tensor.new_empty_strided` and replace `empty_strided` in
    CopySlices with that so that we can propagate batch information.
    - Make some slight modifications to AsStridedBackward (and add
    as_strided batching rule)
    
    Please let me know if it would be better if I squashed everything related to
    supporting vmap over CopySlices together into a single big PR.
    
    Test Plan
    ---------
    - New tests.
    
    [ghstack-poisoned]
    zou3519 committed Nov 4, 2020
    Configuration menu
    Copy the full SHA
    c6e75ea View commit details
    Browse the repository at this point in the history
  2. Update on "Implement Tensor.new_empty_strided(sizes, strides, *, dtyp…

    …e, device, requires_grad)"
    
    Summary
    -------
    This PR implements Tensor.new_empty_strided. Many of our torch.* factory
    functions have a corresponding new_* method (e.g., torch.empty and
    torch.new_empty), but there is no corresponding method to
    torch.empty_strided. This PR adds one.
    
    Motivation
    ----------
    The real motivation behind this is for vmap to be able to work through
    CopySlices. CopySlices shows up a lot in double backwards because a lot
    of view functions have backward formulas that perform view+inplace.
    
    https://github.com/pytorch/pytorch/blob/e0fd590ec950cb1e65ea0431c9e765f8cda27908/torch/csrc/autograd/functions/tensor.cpp#L78-L106
    
    To support vmap through CopySlices, the approach in this stack is to:
    - add `Tensor.new_empty_strided` and replace `empty_strided` in
    CopySlices with that so that we can propagate batch information.
    - Make some slight modifications to AsStridedBackward (and add
    as_strided batching rule)
    
    Please let me know if it would be better if I squashed everything related to
    supporting vmap over CopySlices together into a single big PR.
    
    Test Plan
    ---------
    - New tests.
    
    [ghstack-poisoned]
    zou3519 committed Nov 4, 2020
    Configuration menu
    Copy the full SHA
    efc78f0 View commit details
    Browse the repository at this point in the history

Commits on Nov 9, 2020

  1. Update on "Implement Tensor.new_empty_strided(sizes, strides, *, dtyp…

    …e, device, requires_grad)"
    
    Summary
    -------
    This PR implements Tensor.new_empty_strided. Many of our torch.* factory
    functions have a corresponding new_* method (e.g., torch.empty and
    torch.new_empty), but there is no corresponding method to
    torch.empty_strided. This PR adds one.
    
    Motivation
    ----------
    The real motivation behind this is for vmap to be able to work through
    CopySlices. CopySlices shows up a lot in double backwards because a lot
    of view functions have backward formulas that perform view+inplace.
    
    https://github.com/pytorch/pytorch/blob/e0fd590ec950cb1e65ea0431c9e765f8cda27908/torch/csrc/autograd/functions/tensor.cpp#L78-L106
    
    To support vmap through CopySlices, the approach in this stack is to:
    - add `Tensor.new_empty_strided` and replace `empty_strided` in
    CopySlices with that so that we can propagate batch information.
    - Make some slight modifications to AsStridedBackward (and add
    as_strided batching rule)
    
    Please let me know if it would be better if I squashed everything related to
    supporting vmap over CopySlices together into a single big PR.
    
    Test Plan
    ---------
    - New tests.
    
    Differential Revision: [D24741688](https://our.internmc.facebook.com/intern/diff/D24741688)
    
    [ghstack-poisoned]
    zou3519 committed Nov 9, 2020
    Configuration menu
    Copy the full SHA
    7ea21b5 View commit details
    Browse the repository at this point in the history