Higher train loss and worse evaluation metrics when using torch.compile()
#113180
Comments
Hey, is there anything else I can provide to help solve this? This is a major issue we're seeing for many of our models at this point. Appreciate your help, thank you!
The minifier script is not helpful. Are you able to run some ablation experiments? E.g., can you try with
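For reference, a backend ablation along these lines might look roughly like the sketch below. The `"eager"`, `"aot_eager"`, and `"inductor"` backends and `torch._dynamo.reset()` are standard `torch.compile` machinery; the toy model, data, and step count are placeholders standing in for the real training setup from this issue.

```python
# Minimal sketch of a backend ablation: train the same toy model for a few
# steps under each TorchDynamo backend and compare the final losses.
import torch
import torch._dynamo


def run_steps(backend, steps=50, seed=0):
    torch.manual_seed(seed)  # identical init and data across backends
    model = torch.nn.Sequential(
        torch.nn.Linear(64, 64), torch.nn.GELU(), torch.nn.Linear(64, 1)
    )
    if backend is not None:
        model = torch.compile(model, backend=backend)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
    x, y = torch.randn(256, 64), torch.randn(256, 1)
    for _ in range(steps):
        opt.zero_grad(set_to_none=True)
        loss = torch.nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()


for backend in (None, "eager", "aot_eager", "inductor"):
    torch._dynamo.reset()  # clear compiled state between ablation runs
    print(backend or "uncompiled", run_steps(backend))
```

If the loss only drifts once the `"inductor"` backend is in the loop, that points at codegen rather than at tracing.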
@snarayan21 please try as directed.
Hey @ezyang @voznesenskym, apologies for the delay in getting around to this. I just ran with the suggested backend; according to this page the issue is with TorchInductor, but how would I go about root-causing this? Thank you for your help!
You could try running the accuracy minifier; chances are it's not going to work, but sometimes you get lucky: https://pytorch.org/docs/stable/torch.compiler_troubleshooting.html. A full set of debug logs, a la TORCH_LOGS=+dynamo,+aot,+inductor, may help. If you have instructions to reproduce the training, that might help too. It converging on aot_eager is a clear indication that it's an Inductor problem. If you can try
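For reference, a minimal sketch of enabling those debug logs plus the accuracy minifier described in the linked troubleshooting doc is below; the `TORCHDYNAMO_REPRO_*` variables come from that doc, and the tiny linear model is just a placeholder for the real training step.

```python
# Sketch: enable full debug logs and the accuracy minifier. The env vars are
# set before torch is imported; exporting them in the shell before launching
# the real training script works equally well.
import os

os.environ["TORCH_LOGS"] = "+dynamo,+aot,+inductor"  # full dynamo/aot/inductor debug logs
os.environ["TORCHDYNAMO_REPRO_AFTER"] = "aot"        # minify the post-AOTAutograd graph
os.environ["TORCHDYNAMO_REPRO_LEVEL"] = "4"          # level 4 = accuracy minification

import torch

# Tiny placeholder standing in for the real model / training step.
model = torch.compile(torch.nn.Linear(32, 32))
out = model(torch.randn(8, 32))
out.sum().backward()
# If the accuracy checker flags a divergence, it dumps a standalone
# minifier_launcher.py that can then be rerun on its own.
```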
Okay, I confirmed the error does still happen with
@Skylion007 were you able to repro locally? The repro given above looks like a failure in the minifier; a local repro would help a bunch with narrowing down further. A few things I would try next if I could repro:
(1) also run with
(1a) if it passes, then there are still a couple of options, but one likely culprit is one of the Inductor decomps, which are run in
(1b) if it fails, but
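Independent of the exact ablation meant above, one generic way to localize this kind of divergence is a single-step numerical comparison between eager and compiled execution of the same weights. A rough sketch is below; the toy model and the tolerances are arbitrary placeholders, not values from this issue.

```python
# Rough sketch: run one forward/backward with identical weights and inputs
# under eager and under torch.compile, then check whether the loss and the
# gradients already diverge beyond (arbitrary) tolerances.
import copy

import torch
import torch.testing

torch.manual_seed(0)
eager = torch.nn.Sequential(
    torch.nn.Linear(64, 64), torch.nn.GELU(), torch.nn.Linear(64, 1)
)
compiled = torch.compile(copy.deepcopy(eager))  # same weights, default Inductor backend

x = torch.randn(128, 64)
loss_eager = eager(x).pow(2).mean()
loss_compiled = compiled(x).pow(2).mean()
loss_eager.backward()
loss_compiled.backward()

torch.testing.assert_close(loss_compiled, loss_eager, rtol=1e-4, atol=1e-4)
for p_e, p_c in zip(eager.parameters(), compiled.parameters()):
    torch.testing.assert_close(p_c.grad, p_e.grad, rtol=1e-4, atol=1e-4)
print("single-step loss and gradients match within tolerance")
```

If a single step already fails the gradient check, the problem can usually be narrowed to one op or decomposition; if single steps match, the drift is more likely accumulating over many steps.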
@bdhirsh I was able to repro locally, and I have some rough repro instructions now; it should repro in as little as half an hour of training:
Let me know if you need more details to repro.
@Skylion007 I haven't been able to fully repro. Pasting what I've done so far + current issues below:

**Stuff I did so far**

Mostly just followed the readme steps + hit a few snags (jotting them down here)
**Current issue**
I'm not really sure how to interpret that error message. At one point I added a
To be clear, I wasn't seeing that error message a few days ago, but I am now. I tried this:
But I'm getting the same error.
Okay, yeah, so you need to delete your
I started a new PR here that removes the Triton dependency and other problematic dependencies; feel free to give it a whirl: mosaicml/examples#440. You can also train without the FlashAttention dependencies (on a way, way smaller batch size) and I suspect you will run into the same issue. You also do not need to install apex anymore; if you are on PyTorch >= 2.0 you can switch the algorithm in the YAML to LowPrecisionLayerNorm instead. I will update the YAML in the PR to use that.
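A hedged sketch of the equivalent Python wiring is below, assuming a recent MosaicML Composer release where the LowPrecisionLayerNorm algorithm is available; the exact YAML key name and Trainer wiring in the mosaicml/examples config may differ.

```python
# Hedged sketch (assumes a recent MosaicML Composer with LowPrecisionLayerNorm);
# illustrative only, not the exact entrypoint used by the repro training script.
from composer.algorithms import LowPrecisionLayerNorm

algorithms = [LowPrecisionLayerNorm()]
# Pass `algorithms=algorithms` to the Composer Trainer (or enable the
# corresponding algorithm entry in the run YAML) instead of installing apex.
```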
I just realized Stable Diffusion and BERT are both skipped in the latest benchmark tests, so it's possible the issue could be more widespread: pytorch/benchmarks/dynamo/torchbench.py, line 237 in 139c4ab
@bdhirsh's fix here might have also fixed this issue: #116935 (comment). Fingers crossed.
@Skylion007 @snarayan21 Can you please test with the latest nightlies and see if the issue has been resolved?
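For anyone retesting, a quick sanity check that the nightly build is actually the one being imported is sketched below; how the nightly is installed (pip nightly index, source build, etc.) is left to whatever channel you normally use.

```python
# Confirm which PyTorch build is picked up before rerunning the repro.
import torch

print(torch.__version__)          # a nightly should report a .dev version string
print(torch.version.git_version)  # commit the build came from
print(torch.version.cuda)         # CUDA version the build targets
```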
🐛 Describe the bug
We are facing issues with loss curves and reproducibility when using `torch.compile()` with our models. Attached below is a graph of train loss for runs with `torch.compile()` (higher loss) and runs without (lower loss). This model is an MPT-style transformer, but we've also seen the issue occur with evaluation for an autoencoder setup (also shown below). Would love to address this issue as soon as possible!

Higher train loss:
Worse eval scores (orange and turquoise are with `torch.compile()`):

Error logs
Here's the error log we get from running `python minifier_launcher.py`:

Minified repro
Versions
PyTorch version: 2.2.0a0+git21b6030
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.31
Python version: 3.10.13 (main, Aug 25 2023, 13:20:03) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-137-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
Nvidia driver version: 515.48.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 1
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 8
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7513 32-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 1435.899
CPU max MHz: 2600.0000
CPU min MHz: 1500.0000
BogoMIPS: 5199.66
Virtualization: AMD-V
L1d cache: 2 MiB
L1i cache: 2 MiB
L2 cache: 32 MiB
L3 cache: 256 MiB
NUMA node0 CPU(s): 0-7
NUMA node1 CPU(s): 8-15
NUMA node2 CPU(s): 16-23
NUMA node3 CPU(s): 24-31
NUMA node4 CPU(s): 32-39
NUMA node5 CPU(s): 40-47
NUMA node6 CPU(s): 48-55
NUMA node7 CPU(s): 56-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] numpy==1.26.0
[pip3] onnx==1.14.0
[pip3] onnxruntime==1.15.1
[pip3] optree==0.9.2
[pip3] pytorch-ranger==0.1.1
[pip3] pytorch-triton==2.1.0+6e4932cda8
[pip3] torch==2.2.0a0+git21b6030
[pip3] torch-optimizer==0.3.0
[pip3] torchmetrics==1.0.3
[pip3] torchvision==0.16.0+cu121
[pip3] triton-nightly==2.1.0.dev20230726014945
[pip3] triton-pre-mlir==2.0.0
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @wconstab @bdhirsh @anijain2305