This repository was archived by the owner on Aug 21, 2025. It is now read-only.

Description
It's that autograd assert that we run into often:
```python
import torch
from functorch import make_fx
from functorch.compile import nnc_jit

def f(x, y):
    return torch.broadcast_tensors(x, y)

inp1 = torch.rand(())
inp2 = torch.rand(3)
print(f(inp1, inp2))  # without nnc compile everything works fine
print(make_fx(f)(inp1, inp2))  # fails
print(nnc_jit(f)(inp1, inp2))
# RuntimeError: self__storage_saved.value().is_alias_of(result.storage())INTERNAL ASSERT FAILED at "autograd/generated/VariableType_3.cpp":3899, please report a bug to PyTorch.
```
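For context on why this particular assert fires (my understanding, not verified against the autograd codegen): `torch.broadcast_tensors` returns expanded *views* that share storage with their inputs, and the generated autograd code asserts that the saved input storage still aliases the output storage. A minimal sketch of the aliasing behavior in plain eager mode:

```python
import torch

x = torch.rand(())   # 0-dim tensor
y = torch.rand(3)

# broadcast_tensors expands both inputs to the common shape (3,).
# expand() produces views, so the outputs alias the inputs' storage.
bx, by = torch.broadcast_tensors(x, y)

print(bx.data_ptr() == x.data_ptr())  # True: bx is a view of x
print(by.data_ptr() == y.data_ptr())  # True: by is a view of y
```

Under `make_fx` tracing the outputs are proxy/functionalized tensors, so this storage-aliasing invariant presumably no longer holds, which would trip the internal assert.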
cc @albanD @soulitzer — what's the chance we can add an option to turn these asserts off? They've been more harmful for us (e.g. they prevent debugging in debug mode) than useful.