
fix aminmax output resize issue when input is a zero dimension tensor #96171

Closed
wants to merge 7 commits

Commits on Mar 7, 2023

  1. e76bc4e
  2. Update on "fix aminmax output resize issue when input is a zero dimension tensor"

    Fix #96042
    
    ### before
    ```
    >>> torch.aminmax(torch.tensor(1, device='cpu'), dim=0, keepdim=True)
    __main__:1: UserWarning: An output with one or more elements was resized since it had shape [], which does not match the required output shape [1]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at ../aten/src/ATen/native/Resize.cpp:24.)
    torch.return_types.aminmax(
    min=tensor([1]),
    max=tensor([1]))
    >>> torch.aminmax(torch.tensor(1, device='cpu'), dim=0, keepdim=False)
    torch.return_types.aminmax(
    min=tensor(1),
    max=tensor(1))
    ```
    ### after
    ```
    >>> torch.aminmax(torch.tensor(1, device='cpu'), dim=0, keepdim=True)
    torch.return_types.aminmax(
    min=tensor(1),
    max=tensor(1))
    >>> torch.aminmax(torch.tensor(1, device='cpu'), dim=0, keepdim=False)
    torch.return_types.aminmax(
    min=tensor(1),
    max=tensor(1))
    
    ```
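The shape rule behind this change can be sketched in plain Python (a hypothetical helper for illustration, not the actual ATen code): reducing over dim 0 of a 0-dim tensor now yields a 0-dim result whether or not `keepdim` is set.

```python
def reduced_shape(shape, dim, keepdim):
    """Hypothetical sketch of the output-shape rule this fix enforces
    (not the real ATen implementation)."""
    if len(shape) == 0:
        # 0-dim input: the reduction is a no-op shape-wise, so the
        # output stays 0-dim regardless of keepdim (this is the fix).
        return ()
    out = list(shape)
    if keepdim:
        out[dim] = 1   # keep the reduced dim with size 1
    else:
        del out[dim]   # drop the reduced dim
    return tuple(out)
```

With this rule, `keepdim=True` and `keepdim=False` agree on a 0-dim input, matching the "after" output above.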
    cc jgong5 XiaobingSuper sanchitintel ashokei jingxu10
    
    [ghstack-poisoned]
    mingfeima committed Mar 7, 2023 (2e66173)
  3. Update on "fix aminmax output resize issue when input is a zero dimension tensor"

    (Commit message identical to the previous commit.)

    mingfeima committed Mar 7, 2023 (7048311)

Commits on Mar 8, 2023

  1. Update on "fix aminmax output resize issue when input is a zero dimension tensor"

    (Commit message identical to the previous commit.)

    mingfeima committed Mar 8, 2023 (7a8f81e)

Commits on Mar 9, 2023

  1. Update on "fix aminmax output resize issue when input is a zero dimension tensor"

    Fix #96042 (before/after output identical to the previous commit message).

    Marked the following test as expected_fail:
    `test_vmap.py TestVmapOperatorsOpInfoCPU.test_op_has_batch_rule_aminmax_cpu_float32`

    Given an input of shape (2,), the loop output has shape (2,) while the batched vmap output has shape (2, 1), which do not match. The loop path computes aminmax twice on a tensor of shape (): without this patch each result has shape (1,), and the results stack into (2, 1); with this patch each result has shape (), and they stack into (2,).
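The stacking behavior described above can be illustrated with a tiny shape helper (hypothetical, for illustration only): stacking per-slice results prepends the batch dimension, the way `torch.stack` does.

```python
def stacked_shape(batch, per_slice_shape):
    # Hypothetical illustration: stacking `batch` results of a given
    # per-slice shape prepends the batch dimension (as torch.stack does).
    return (batch,) + per_slice_shape

# Without the patch, each 0-dim slice produced a (1,)-shaped result;
# with the patch, it produces a ()-shaped result:
without_patch = stacked_shape(2, (1,))  # (2, 1) -- mismatches the loop out
with_patch = stacked_shape(2, ())       # (2,)   -- matches the loop out
```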
    
    cc jgong5 XiaobingSuper sanchitintel ashokei jingxu10
    
    [ghstack-poisoned]
    mingfeima committed Mar 9, 2023 (94dae64)
  2. Update on "fix aminmax output resize issue when input is a zero dimension tensor"

    (Commit message identical to the previous commit.)

    mingfeima committed Mar 9, 2023 (dc73df2)

Commits on Mar 15, 2023

  1. Update on "fix aminmax output resize issue when input is a zero dimension tensor"

    (Commit message identical to the previous commit.)

    mingfeima committed Mar 15, 2023 (efc5fa6)