🐛 Describe the bug
    return self._op(*args, **kwargs or {})
  File "/opt/conda/lib/python3.9/site-packages/torch/_prims/__init__.py", line 292, in _backend_select_impl
    return meta(*args, **kwargs)
  File "/opt/conda/lib/python3.9/site-packages/torch/_prims/__init__.py", line 2410, in _iota_meta
    return torch.empty(
  File "/opt/conda/lib/python3.9/site-packages/colossalai/lazy/lazy_init.py", line 473, in wrapper
    return self.tensor_cls(target, *args, **kwargs)
  File "/opt/conda/lib/python3.9/site-packages/colossalai/lazy/lazy_init.py", line 162, in __new__
    meta_data = MetaTensor(elem, device=device)
  File "/opt/conda/lib/python3.9/site-packages/colossalai/_analyzer/_subclasses/meta_tensor.py", line 60, in __new__
    r = torch.Tensor._make_wrapper_subclass(
RuntimeError: !check_has_torch_dispatch(obj) INTERNAL ASSERT FAILED at "../torch/csrc/autograd/python_variable.cpp":1934, please report a bug to PyTorch. While HermeticPyObject was enabled, we attempted to create a tensor subclass with torch_dispatch. This violates the invariant that operations in HermeticPyObject have equivalent C++ implementations. If your operator registered from Python operator registration isn't doing anything strange, there may be an internal PyTorch bug involving not appropriately disabling TorchDispatchMode before executing Python op registration.
Environment
No response
From the error log, it seems that you were using torch>=2.0 with colossalai<=0.3.2. Since torch 2.0, torch dispatch for tensor subclasses is disabled in this context, so MetaTensor cannot be used in lazy init in ColossalAI. I would recommend installing the latest version of colossalai; any version higher than v0.3.2 should fix the reported runtime error.
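To make the version advice above concrete, here is a minimal sketch of a pre-flight check you could run before calling lazy init. The helper name and the idea of checking versions up front are assumptions for illustration, not part of ColossalAI's API; the version thresholds come from the comment above.

```python
def needs_colossalai_upgrade(torch_version: str, colossalai_version: str) -> bool:
    """Return True if this torch/colossalai pairing hits the reported assert.

    Hypothetical helper: torch>=2.0 combined with colossalai<=0.3.2 triggers
    the HermeticPyObject RuntimeError during lazy init, per the comment above.
    """
    def parse(v: str) -> tuple:
        # Drop local build tags like "+cu117" and compare numeric components.
        return tuple(int(p) for p in v.split("+")[0].split(".")[:3])

    return parse(torch_version) >= (2, 0, 0) and parse(colossalai_version) <= (0, 3, 2)


# In practice you would feed in torch.__version__ and colossalai.__version__.
print(needs_colossalai_upgrade("2.0.1+cu117", "0.3.2"))  # True: upgrade needed
print(needs_colossalai_upgrade("2.0.1+cu117", "0.3.3"))  # False: fixed version
```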
Thanks @yuanheng-zhao
There seems to be a duplicate issue #5673 even with the newest version, though we haven't been developing the auto-parallel API for a while 😂