
jacrev and jacfwd fail for torch.roll when input is a scalar #94925

Closed
cafffeeee opened this issue Feb 15, 2023 · 0 comments
Assignees
Labels
module: functorch Pertaining to torch.func or pytorch/functorch triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments


cafffeeee commented Feb 15, 2023

🐛 Describe the bug

jacrev and jacfwd fail for torch.roll when the input is a scalar (0-dim) tensor:

import torch
from torch.func import jacrev

x = torch.tensor(0.0)

def func(x):
    y = torch.roll(x, 1)
    return y

print(jacrev(func)(x))
# IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
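Not part of the original report, but as a possible workaround sketch: lifting the 0-dim input to 1-D before rolling gives torch.roll a dimension to operate over, after which jacrev appears to behave as expected.

```python
import torch
from torch.func import jacrev

x = torch.tensor(0.0)

def func(x):
    # Hypothetical workaround (not from the original report): reshape the
    # 0-dim input to shape (1,) so torch.roll has an element dimension,
    # then squeeze back to a scalar.
    y = torch.roll(x.reshape(1), 1).squeeze(0)
    return y

print(jacrev(func)(x))
# tensor(1.)
```

Rolling a length-1 tensor is the identity, so the Jacobian of this scalar-to-scalar function is 1, matching the jacobian result below.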

However, torch.autograd.functional.jacobian succeeds and returns the correct gradient:

import torch
from torch.autograd.functional import jacobian

x = torch.tensor(0.0)

def func(x):
    y = torch.roll(x, 1)
    return y

print(jacobian(func, x))
# tensor(1.)

Versions

PyTorch version: 2.0.0.dev20230105
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35

Python version: 3.9.15 (main, Nov 24 2022, 14:31:59)  [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090

Nvidia driver version: 515.86.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.0.0.dev20230105
[pip3] torchaudio==2.0.0.dev20230105
[pip3] torchvision==0.15.0.dev20230105
[conda] blas                      1.0                         mkl
[conda] mkl                       2021.4.0           h06a4308_640
[conda] mkl-service               2.4.0            py39h7f8727e_0
[conda] mkl_fft                   1.3.1            py39hd3c417c_0
[conda] mkl_random                1.2.2            py39h51133e4_0
[conda] numpy                     1.23.5           py39h14f4228_0
[conda] numpy-base                1.23.5           py39h31eccc5_0
[conda] pytorch                   2.0.0.dev20230105 py3.9_cuda11.7_cudnn8.5.0_0    pytorch-nightly
[conda] pytorch-cuda              11.7                 h67b0de4_2    pytorch-nightly
[conda] pytorch-mutex             1.0                        cuda    pytorch-nightly
[conda] torchaudio                2.0.0.dev20230105      py39_cu117    pytorch-nightly
[conda] torchtriton               2.0.0+0d7e753227            py39    pytorch-nightly
[conda] torchvision               0.15.0.dev20230105      py39_cu117    pytorch-nightly

cc @zou3519 @Chillee @samdow @soumith @kshitij12345 @janeyx99 @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @lezcano @Varal7

@ngimel ngimel added module: autograd Related to torch.autograd, and the autograd engine in general triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module labels Feb 16, 2023
@albanD albanD added module: functorch Pertaining to torch.func or pytorch/functorch and removed module: autograd Related to torch.autograd, and the autograd engine in general labels Feb 16, 2023
@kshitij12345 kshitij12345 self-assigned this Feb 17, 2023