fix issue with lift_fresh_copy when using export + compile #108243

Closed
wants to merge 4 commits

Commits on Aug 30, 2023

  1. fix issue with lift_fresh_copy when using export + compile

    [ghstack-poisoned]
    bdhirsh committed Aug 30, 2023
    8c691f8
  2. Update on "fix issue with lift_fresh_copy when using export + compile"

    Fixes #105327. The problem is that `lift_fresh_copy()`'s functionalization implementation currently assumes that its input is never a functional tensor. That assumption is too limiting: "user" code like the following (which can come from exporting a model and then running compile on the resulting graph) violates it:
    ```python
    tensor_constant0 = torch.tensor(2)
    lift_fresh = torch.ops.aten.lift_fresh_copy.default(tensor_constant0)
    ```

    When we run this through AOTAutograd, the first call (`torch.tensor(2)`) will **already** have been lifted into a functional tensor wrapper, so the `lift_fresh_copy` call doesn't need to do any "lifting" anymore; it just needs to do a clone. (A sketch of this export + compile scenario follows the commit list below.)

    [ghstack-poisoned]
    bdhirsh committed Aug 30, 2023
    550018a
  3. Update on "fix issue with lift_fresh_copy when using export + compile"

    [ghstack-poisoned]
    bdhirsh committed Aug 30, 2023
    5bf740b
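
Below is a rough, hypothetical sketch of the export-then-compile scenario described in the commit message above. It assumes the `torch.export.export` / `ExportedProgram.module()` APIs from recent PyTorch releases and a made-up module `M`; the actual reproducer lives in #105327 and may differ in details.

```python
import torch

class M(torch.nn.Module):
    def forward(self, x):
        # torch.tensor(2) becomes a constant in the exported graph; export
        # represents it with an aten.lift_fresh_copy call on the lifted constant.
        return x + torch.tensor(2)

# Export first, then compile the exported graph module. When torch.compile
# re-traces the exported graph through AOTAutograd, the constant has already
# been wrapped in a functional tensor, so lift_fresh_copy's functionalization
# kernel sees a functional input and only needs to clone it (the case this PR
# adds support for).
ep = torch.export.export(M(), (torch.ones(3),))
compiled = torch.compile(ep.module())
print(compiled(torch.ones(3)))
```
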

Commits on Sep 5, 2023

  1. Update on "fix issue with lift_fresh_copy when using export + compile"

    [ghstack-poisoned]
    bdhirsh committed Sep 5, 2023
    7235651