
Higher train loss and worse evaluation metrics when using torch.compile() #113180

Open

snarayan21 opened this issue Nov 7, 2023 · 14 comments

Labels: high priority, module: pt2 accuracy, needs reproduction, oncall: pt2, triaged

Comments

snarayan21 (Contributor) commented Nov 7, 2023

🐛 Describe the bug

We are facing issues with loss curves and reproducibility when using torch.compile() with our models. Attached below is a graph of train loss for runs with torch.compile() (higher loss) and runs without (lower loss). This model is an MPT-style transformer, but we've also seen the issue occur with evaluation for an autoencoder setup (also shown below). We'd love to address this issue as soon as possible!

Higher train loss:
Screenshot 2023-11-02 at 8 28 39 AM

Worse eval scores (orange and turquoise are with torch.compile()):
Screenshot 2023-11-02 at 10 15 57 PM (1)

Error logs

Here's the error log we get from running python minifier_launcher.py:

File "/mnt/workdisk/saaketh/mpi_stuff/pytorch/torch/_prims/__init__.py", line 2366, in _xor_sum_aten
    raise NotImplementedError("xor_sum only implemented with inductor")
NotImplementedError: xor_sum only implemented with inductor

Minified repro

import torch
from torch import tensor, device
import torch.fx as fx
from torch._dynamo.testing import rand_strided
from math import inf
import torch._inductor.inductor_prims

import torch._dynamo.config
import torch._inductor.config
import torch._functorch.config
torch._dynamo.config.automatic_dynamic_shapes = False
torch._dynamo.config.suppress_errors = True
torch._inductor.config.fallback_random = True
torch._inductor.config.generate_intermediate_hooks = True



isolate_fails_code_str = None



# torch version: 2.1.0+cu121
# torch cuda version: 12.1
# torch git version: 7bcf7da3a268b435777fe87c7794c382f444e86d


# CUDA Info: 
# nvcc: NVIDIA (R) Cuda compiler driver 
# Copyright (c) 2005-2023 NVIDIA Corporation 
# Built on Mon_Apr__3_17:16:06_PDT_2023 
# Cuda compilation tools, release 12.1, V12.1.105 
# Build cuda_12.1.r12.1/compiler.32688072_0 

# GPU Hardware Info: 
# NVIDIA A100-SXM4-40GB : 4 


from torch.nn import *
class Repro(torch.nn.Module):
    def __init__(self):
        super().__init__()

    
    
    def forward(self, permute):
        return [permute]
        
def load_args(reader):
    buf0 = reader.storage('c836a62b3bfef0f83a406d6327f6af0cf3833814', 50331648, device=device(type='cuda', index=1), dtype_hint=torch.bfloat16)
    reader.tensor(buf0, (2, 2048, 48, 128), dtype=torch.bfloat16, is_leaf=True)  # permute
load_args._version = 0
mod = Repro()
if __name__ == '__main__':
    from torch._dynamo.repro.after_aot import run_repro
    run_repro(mod, load_args, accuracy=True, command='run', save_dir='/mnt/workdisk/saaketh/torch_compile_debug/run_2023_11_06_01_50_22_131795-pid_97282/minifier/checkpoints', tracing_mode='real', check_str=None)

Versions

PyTorch version: 2.2.0a0+git21b6030
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.31

Python version: 3.10.13 (main, Aug 25 2023, 13:20:03) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-137-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB

Nvidia driver version: 515.48.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 1
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 8
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7513 32-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 1435.899
CPU max MHz: 2600.0000
CPU min MHz: 1500.0000
BogoMIPS: 5199.66
Virtualization: AMD-V
L1d cache: 2 MiB
L1i cache: 2 MiB
L2 cache: 32 MiB
L3 cache: 256 MiB
NUMA node0 CPU(s): 0-7
NUMA node1 CPU(s): 8-15
NUMA node2 CPU(s): 16-23
NUMA node3 CPU(s): 24-31
NUMA node4 CPU(s): 32-39
NUMA node5 CPU(s): 40-47
NUMA node6 CPU(s): 48-55
NUMA node7 CPU(s): 56-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca

Versions of relevant libraries:
[pip3] numpy==1.26.0
[pip3] onnx==1.14.0
[pip3] onnxruntime==1.15.1
[pip3] optree==0.9.2
[pip3] pytorch-ranger==0.1.1
[pip3] pytorch-triton==2.1.0+6e4932cda8
[pip3] torch==2.2.0a0+git21b6030
[pip3] torch-optimizer==0.3.0
[pip3] torchmetrics==1.0.3
[pip3] torchvision==0.16.0+cu121
[pip3] triton-nightly==2.1.0.dev20230726014945
[pip3] triton-pre-mlir==2.0.0
[conda] Could not collect

cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @wconstab @bdhirsh @anijain2305

snarayan21 (Contributor, Author) commented Nov 8, 2023

Hey, is there anything else I can provide to help solve this? This is a major issue we're seeing for many of our models at this point. Appreciate your help, thank you!

ezyang (Contributor) commented Nov 9, 2023

The minifier script is not helpful. Are you able to run some ablation experiments? E.g., can you try with backend="aot_eager" and see if it converges that way?
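
For reference, a minimal sketch of the kind of ablation being suggested, using a toy stand-in model rather than the actual MPT-style transformer; the point is to train otherwise-identical runs that differ only in the backend passed to torch.compile():

import torch
import torch.nn as nn

# Toy stand-in model; the real ablation would wrap the MPT-style transformer instead.
model = nn.Sequential(nn.Linear(16, 16), nn.GELU(), nn.Linear(16, 1))

# The three ablation points: Dynamo only, Dynamo + AOTAutograd, and the full Inductor stack.
compiled_eager = torch.compile(model, backend="eager")
compiled_aot_eager = torch.compile(model, backend="aot_eager")
compiled_inductor = torch.compile(model)  # default backend="inductor"

# Exercise both forward and backward so the AOTAutograd-traced backward path is covered too.
x = torch.randn(8, 16)
compiled_aot_eager(x).sum().backward()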

voznesenskym (Contributor) commented:

@snarayan21 please try as directed

snarayan21 (Contributor, Author) commented:

Hey @ezyang @voznesenskym, apologies for the delay in getting around to this. I just ran with backend="aot_eager" on a smaller model and this does converge (the no-compile run in orange overlaps with the aot_eager run in blue, i.e. aot_eager shows no change in train loss):
Screenshot 2023-11-20 at 12 36 32 PM

According to this page, the issue is with TorchInductor, but how would I go about root-causing this?

Thank you for your help!

ezyang (Contributor) commented Nov 21, 2023

You could try running the accuracy minifier; chances are it's not going to work, but sometimes you get lucky. https://pytorch.org/docs/stable/torch.compiler_troubleshooting.html

A full set of debug logs (e.g., TORCH_LOGS=+dynamo,+aot,+inductor) may help. If you have instructions to reproduce the training, that might help too. The fact that it converges on aot_eager is a clear indication that it's an Inductor problem. If you can try aot_eager_decomp_partition, that will also give more signal on whether it's a decomp problem.
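
For concreteness, a sketch of how those knobs can be set; the environment-variable spellings follow the troubleshooting doc, and torch._logging.set_logs is the in-process equivalent (its accepted keywords may vary by version):

# Shell side: full debug logs plus the after-AOT accuracy minifier, e.g.
#   TORCH_LOGS="+dynamo,+aot,+inductor" \
#   TORCHDYNAMO_REPRO_AFTER="aot" TORCHDYNAMO_REPRO_LEVEL=4 \
#   python train.py            # "train.py" stands in for the actual training entrypoint

# Roughly equivalent in-process logging configuration:
import logging
import torch._logging

torch._logging.set_logs(dynamo=logging.DEBUG, aot=logging.DEBUG, inductor=logging.DEBUG)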

ezyang added the needs reproduction label Nov 21, 2023
ezyang added the triaged label Nov 21, 2023
snarayan21 (Contributor, Author) commented:

I just tested this with torch 2.2.0 and the issue persists (see below). I previously did use the accuracy minifier -- would you recommend using that with backend="aot_eager"? Or is there another way to diagnose what's being compiled wrong?
Screenshot 2023-12-15 at 5 09 04 PM

Skylion007 (Collaborator) commented:

Okay, I confirmed the error does still happen with aot_eager_decomp_partition, suggesting that it may be a decomp problem. How do we debug further?

bdhirsh (Contributor) commented Dec 29, 2023

@Skylion007 were you able to repro locally? The repro given above looks like a failure in the minifier.

A repro would help a bunch with narrowing down further. A few things I would try next if I could repro are:

(1) Also run with backend="aot_eager":

(1a) If it passes, then... there are still a couple of options, but one likely culprit is one of the Inductor decomps, which are run in aot_eager_decomp_partition but not in aot_eager. You could bisect them by removing decomps from here (a rough sketch follows this list).

(1b) If it fails, but compile(backend="eager") passes, then there are also a few options: an AOTAutograd bug, a functionalization bug, custom-ops issues, and a few others. In this case, one useful thing to check would be whether there are any (non-ATen) custom operators in your model.
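
A rough sketch of the bisection idea in (1a); the decomposition table lives in a private Inductor module, so the import path and the strategy of dropping half the entries at a time are assumptions here, not a documented workflow:

import torch
from torch._inductor import decomposition  # private module; layout may change across versions

decomp_table = decomposition.decompositions  # dict mapping ATen ops to decomposition functions

# Drop half of the registered decompositions, then recompile and re-run training;
# repeat the halving on whichever half still reproduces the bad loss curve.
for op in list(decomp_table)[: len(decomp_table) // 2]:
    decomp_table.pop(op)

print(f"{len(decomp_table)} decompositions remain registered")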

Skylion007 (Collaborator) commented Dec 29, 2023

@bdhirsh I was able to repro locally, and I have some rough repro instructions now; it should repro in as little as half an hour of training:

  1. Follow the instructions here to train the MosaicML BERT model on C4: https://github.com/mosaicml/examples/tree/main/examples/benchmarks/bert
  2. Add the kwarg compile_config = {"backend": "aot_eager_decomp_partition"} to this trainer: https://github.com/mosaicml/examples/blob/7003793b15ad0ee28bc09d0fceb91eb2d0104961/examples/benchmarks/bert/main.py#L248
  3. Compare results for BERT with compile_config set to None versus the compile_config above. They behave very differently; the divergence should be obvious in as little as 1000 batches (approx. 15 minutes on an 8xA100 machine). A sketch of what this amounts to follows this list.
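
For reference, a sketch of roughly what that compile_config amounts to, assuming Composer's Trainer forwards it to torch.compile (the stand-in model here replaces the benchmark's actual ComposerModel):

import torch
import torch.nn as nn

model = nn.Linear(8, 8)  # stand-in for the MosaicML BERT model that main.py builds

# compile_config=None   -> the model is left in eager mode
# compile_config={"backend": "aot_eager_decomp_partition"}
#                       -> roughly torch.compile(model, backend="aot_eager_decomp_partition")
compiled = torch.compile(model, backend="aot_eager_decomp_partition")
out = compiled(torch.randn(2, 8))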

Let me know if you need more details to repro.

bdhirsh (Contributor) commented Jan 2, 2024

@Skylion007 I haven't been able to fully repro. Pasting what I've done so far + current issues below:

**Stuff I did so far**

Mostly just followed the readme steps + hit a few snags (jotting them down here).

**Current issue**

composer main.py yamls/main/mosaic-bert-base-uncased.yaml is no longer running properly for me - I now get this issue:

Initializing model...
n_params=1.3740e+08
Building train loader...
Traceback (most recent call last):
  File "/data/users/hirsheybar/b/pytorch/examples/examples/benchmarks/bert/main.py", line 272, in <module>
    main(cfg)
  File "/data/users/hirsheybar/b/pytorch/examples/examples/benchmarks/bert/main.py", line 177, in main
    train_loader = build_dataloader(
  File "/data/users/hirsheybar/b/pytorch/examples/examples/benchmarks/bert/main.py", line 134, in build_dataloader
    return text_data_module.build_text_dataloader(cfg, tokenizer,
  File "/data/users/hirsheybar/b/pytorch/examples/examples/benchmarks/bert/src/text_data.py", line 274, in build_text_dataloader
    dataset = StreamingTextDataset(
  File "/data/users/hirsheybar/b/pytorch/examples/examples/benchmarks/bert/src/text_data.py", line 134, in __init__
    super().__init__(
  File "/home/hirsheybar/local/b/pytorch-env/lib/python3.10/site-packages/streaming/base/dataset.py", line 325, in __init__
    self._shm_prefix, self._locals_shm = get_shm_prefix(my_locals, world)
  File "/home/hirsheybar/local/b/pytorch-env/lib/python3.10/site-packages/streaming/base/shared.py", line 340, in get_shm_prefix
    raise ValueError(f'Reused local directory: {sorted(my_locals_set)} vs ' +
ValueError: Reused local directory: ['/data/users/hirsheybar/b/pytorch/examples/examples/benchmarks/bert/my-copy-c4/train_small'] vs ['/data/users/hirsheybar/b/pytorch/examples/examples/benchmarks/bert/my-copy-c4/train_small']. Provide a different one.

I'm not really sure how to interpret that error message. At one point I added a breakpoint() inside dynamo, which was probably a mistake since the repro is running a distributed harness (no idea if that's related to the issue that I'm now seeing though).

To be clear, I wasn't seeing that error message a few days ago but I am now. I tried this:

rm -rf ./my-copy-c4
python src/convert_dataset.py --dataset c4 --data_subset en --out_root ./my-copy-c4 --splits train_small val

But I'm getting the same error.

Skylion007 (Collaborator) commented Jan 4, 2024

Okay, yeah, so you need to delete your local directory if you change the dataset at all or if your previous convert_dataset run failed for any reason. So you probably just need to delete /data/users/hirsheybar/b/pytorch/examples/examples/benchmarks/bert/my-copy-c4/train_small and regenerate the dataset. Essentially, the streaming dataset will not overwrite a local directory that already has a cached dataset, as this is usually an error, so you will need to regenerate it.

I started a new PR here that removes the triton dependency and other problematic dependencies; feel free to give it a whirl: mosaicml/examples#440. You can also train without the FlashAttention dependencies (on a way, way smaller batch size) and I suspect you will run into the same issue. You also do not need to install apex anymore; if you are on PyTorch >= 2.0 you can switch the algorithm in the yaml to LowPrecisionLayerNorm instead. I will update the YAML in the PR to use that.

@bdhirsh

Skylion007 (Collaborator) commented:

I just realized Stable Diffusion and BERT are both skipped in the latest benchmark tests, so it's possible the issue could be more widespread:

"stable_diffusion_unet",

soumith (Member) commented Feb 13, 2024

@bdhirsh's fix here might have also fixed this issue: #116935 (comment)

fingers crossed.

chauhang (Contributor) commented:

@Skylion007 @snarayan21 Can you please test with the latest nightlies and see if the issue has been resolved?
