Unexpected behavior from torchscript (mixing trace with script) #89483

Open
priyamtejaswin opened this issue Nov 22, 2022 · 1 comment
Labels
oncall: jit Add this issue/PR to JIT oncall triage queue

Comments


priyamtejaswin commented Nov 22, 2022

🐛 Describe the bug

Hi,

I have encountered some unexpected behavior when mixing torch.jit.script and torch.jit.trace. Here's an example to reproduce.

import torch
import numpy as np

@torch.jit.script
def select_rows(
    nums: int,
    data: torch.Tensor,
    size: int
):
    valid_choice = torch.multinomial(torch.ones(nums).float(), size)
    return data[valid_choice]

def do_selection(x):
    return select_rows(x.shape[0], x, x.shape[0])

t_4 = torch.tensor(np.array([1, 2, 3, 4]))
t_7 = torch.tensor(np.array([1, 2, 3, 4, 5, 6, 7]))

traced_selection = torch.jit.trace(do_selection, t_4)

print(traced_selection(t_4))
>>> tensor([3, 1, 2, 4])  # A random arrangement of the input data.

print(traced_selection(t_7))
>>> tensor([1, 3, 2, 4])  
# Another random arrangement, but of the TRACED EXAMPLE!
# Expected a random arrangement of the current input of size 7!

In my actual example, do_selection() is extremely complicated and cannot be scripted with torch.jit.script. What are my options here? Is this the expected behavior?

Thanks.

Versions

PyTorch version: 1.10.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: macOS 12.4 (x86_64)
GCC version: Could not collect
Clang version: 13.0.0 (clang-1300.0.27.3)
CMake version: Could not collect
Libc version: N/A

Python version: 3.9.13 (main, Aug 25 2022, 18:29:29) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

Versions of relevant libraries:
[pip3] numpy==1.19.5
[pip3] pytorch-lightning==1.1.4
[pip3] torch==1.10.0
[pip3] torchaudio==0.10.0
[pip3] torchmetrics==0.10.0rc0
[pip3] torchvision==0.11.0
[conda] numpy 1.19.5 pypi_0 pypi
[conda] pytorch-lightning 1.1.4 pypi_0 pypi
[conda] torch 1.10.0 pypi_0 pypi
[conda] torchaudio 0.10.0 pypi_0 pypi
[conda] torchmetrics 0.10.0rc0 pypi_0 pypi
[conda] torchvision 0.11.0 pypi_0 pypi

cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel

@sanchitintel
Collaborator

This is expected: do_selection was JIT-traced, so the values of nums and size passed to select_rows were baked in as constants taken from the example input used for tracing.
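A possible workaround (a sketch, not from this thread): read the shape inside the scripted function instead of passing it in from traced code, so it is evaluated at run time rather than captured as a constant during tracing. The function name and signature below are illustrative, not the reporter's actual code.

```python
import torch

@torch.jit.script
def select_rows(data: torch.Tensor) -> torch.Tensor:
    # Shape is read here, inside script, so it follows the actual input
    # instead of being frozen to the tracing example's shape.
    nums = data.shape[0]
    valid_choice = torch.multinomial(torch.ones(nums).float(), nums)
    return data[valid_choice]

t_7 = torch.tensor([1, 2, 3, 4, 5, 6, 7])
print(select_rows(t_7))  # a random arrangement of all 7 elements
```

Any traced wrapper that calls this scripted function no longer needs to forward x.shape[0], so the trace has no shape constant to freeze.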

@H-Huang H-Huang added the oncall: jit Add this issue/PR to JIT oncall triage queue label Nov 22, 2022