[dynamo] Error "Inference tensors do not track version counter" in inference_mode w/ llama7b #101151
Inference mode is known to be pretty broken with torch.compile today, and we should fix it. The easiest thing to do is to make inference mode a no-op inside of torch.compile: inference_mode is supposed to make your hot-path code cheaper at runtime by not storing autograd metadata, but that is something we should always be able to avoid when using torch.compile (since we traced autograd ahead of time). I think inference_mode today just breaks during compilation.
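For concreteness, here is a minimal sketch of the failing pattern described above — an illustration, not the original llama repro: compile a function, create its input outside inference mode, then call the compiled function under `torch.inference_mode()`.

```python
import torch

def f(x):
    # simple pointwise ops are enough to exercise functionalization
    return x.sin().cos()

compiled_f = torch.compile(f)
x = torch.randn(4)  # created outside inference mode: a normal tensor

with torch.inference_mode():
    # On affected 2.1.0 dev builds this raised
    # "Inference tensors do not track version counter".
    out = compiled_f(x)
```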
It looks like inference_mode wasn't playing well with functionalization. If you run torch.compile on a function, and the inputs to the function are tensors created outside of inference mode, then we need to make sure that when we create functional tensor wrappers for those inputs during compilation, those functional wrappers properly mirror whether or not the original tensor is an inference tensor. Hopefully fixes #101151

Pull Request resolved: #101219
Approved by: https://github.com/albanD, https://github.com/ezyang
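The "inference tensor" property the PR refers to is observable from Python via `Tensor.is_inference()`. A short illustration (not from the issue) of how the flag depends on where a tensor is created:

```python
import torch

x = torch.ones(3)            # normal tensor: tracks a version counter
print(x.is_inference())      # False

with torch.inference_mode():
    y = torch.ones(3)        # inference tensor: no version counter
    print(y.is_inference())  # True
```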
Hi @bdhirsh, I compiled PyTorch from your PR branch at commit a469373. Here is the output:

While executing %l__self___layers_0_self_attn_q_proj : [#users=1] = call_module[target=L__self___layers_0_self_attn_q_proj](args = (%to_1,), kwargs = {})
You can suppress this exception and fall back to eager by setting:

I also tried to repro the similar issue #103132:

While executing %matmul : [#users=1] = call_function[target=torch.matmul](args = (%transpose, %transpose_3), kwargs = {})
You can suppress this exception and fall back to eager by setting:
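The suppression hint in the truncated log above is dynamo's standard fallback flag; assuming the usual PyTorch 2.x error message (the rest of the log was not captured here), it refers to the setting below. Note this only silences the exception by falling back to eager; it does not fix the underlying issue.

```python
import torch._dynamo

# Fall back to eager execution on dynamo errors instead of raising.
torch._dynamo.config.suppress_errors = True
```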
Thanks for the callout @ZhaoqiongZ - I have another fix that should address this one (also linked in the new issue): #103275
I just recently faced this issue in
Add a test case for passing llama through the `turbine_cpu` backend. This replaces all fairscale layers with corresponding vanilla torch layers for simplicity, but we can add these back later once we have llama working. Also removes the `@torch.inference_mode()` decorator to avoid the issue documented [here](pytorch/pytorch#101151), which is not necessarily relevant to the quality of our pipeline.
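For anyone needing a stopgap before the fix landed, one commonly used workaround (a sketch, not the fix from #101219) was to swap `@torch.inference_mode()` for `@torch.no_grad()`:

```python
import torch

# Workaround sketch: no_grad instead of inference_mode. Gradients are still
# disabled, but the tensors involved are ordinary tensors with version
# counters, so compiled functionalization can wrap them without the error.
@torch.no_grad()
def run(model, x):
    return model(x)
```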
@cs-mshah did you find a working solution?
🐛 Describe the bug
Transformers: 4.29.0
Repro:
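The repro script itself did not survive this page snapshot. Below is a hypothetical reconstruction of the pattern the title describes (llama-7b via transformers under torch.compile plus inference_mode); the checkpoint path and prompt are placeholders, not from the issue.

```python
# Hypothetical reconstruction -- the original repro script was not captured
# in this snapshot. The checkpoint path and prompt are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/llama-7b-hf"  # placeholder, not from the issue
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)
model = torch.compile(model)

with torch.inference_mode():
    inputs = tokenizer("Hello, world", return_tensors="pt")
    # Raised "Inference tensors do not track version counter" on the
    # 2.1.0a0 build reported below.
    out = model(**inputs)
```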
Error log:
Versions
PyTorch version: 2.1.0a0+gitb536c40
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.25.2
Libc version: glibc-2.31
Python version: 3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-71-generic-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 52 bits physical, 57 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz
Stepping: 6
CPU MHz: 2600.000
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
L1d cache: 3 MiB
L1i cache: 2 MiB
L2 cache: 80 MiB
L3 cache: 96 MiB
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.12.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] intel-extension-for-pytorch==2.1.0+git3642f0c
[pip3] mypy==0.960
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.1
[pip3] torch==2.1.0a0+gitd1f0c8e
[conda] intel-extension-for-pytorch 2.1.0+git3642f0c dev_0
[conda] mkl 2023.0.0 pypi_0 pypi
[conda] mkl-devel 2023.0.0 pypi_0 pypi
[conda] mkl-include 2023.0.0 pypi_0 pypi
[conda] mkl-static 2023.0.0 pypi_0 pypi
[conda] numpy 1.23.1 pypi_0 pypi
[conda] torch 2.1.0a0+gitd1f0c8e dev_0
cc @ezyang @soumith @msaroufim @wconstab @ngimel @bdhirsh @anijain2305