Description
Your current environment
The output of `python collect_env.py`:
Collecting environment information...
PyTorch version: 2.8.0+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (aarch64)
GCC version: (Ubuntu 12.3.0-1ubuntu1~22.04.2) 12.3.0
Clang version: 16.0.6 (++20231112100510+7cbf1a259152-1~exp1~20231112100554.106)
CMake version: version 4.1.0
Libc version: glibc-2.35
Python version: 3.10.12 (main, Aug 15 2025, 14:32:43) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-1040-aws-aarch64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 64-bit
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: ARM
Model name: Neoverse-V2
Model: 1
Thread(s) per core: 1
Core(s) per socket: 96
Socket(s): 1
Stepping: r0p1
BogoMIPS: 2000.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp sve2 sveaes svepmull svebitperm svesha3 flagm2 frint svei8mm svebf16 i8mm bf16 dgh rng bti
L1d cache: 6 MiB (96 instances)
L1i cache: 6 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-95
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.6
[pip3] torch==2.8.0+cpu
[pip3] torchaudio==2.8.0
[pip3] torchvision==0.23.0
[conda] Could not collect
🐛 Describe the bug
On AArch64 (e.g. Neoverse-V2), when `VLLM_CPU_OMP_THREADS_BIND` is set, only the first core is utilized at 100% while the other cores show very low utilization.
- This behavior only happens when `VLLM_CPU_OMP_THREADS_BIND` is set.
- The low utilization goes away if we preload libgomp, i.e. `LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1`.
Example reproducer:
`VLLM_CPU_OMP_THREADS_BIND=0-63 VLLM_TARGET_DEVICE=cpu VLLM_CPU_KVCACHE_SPACE=32 vllm bench throughput --num-prompts 64 --seed 0 --dataset-name sharegpt --max-model-len 4096 --dataset-path ShareGPT_V3_unfiltered_cleaned_split_no_imsorry.json --model meta-llama/Llama-3.1-8B-Instruct --load-format dummy`
Root cause(s):
Running the reproducer above with `LD_DEBUG=libs,files` shows that there are two libgomp runtimes in use (a quick way to confirm this is sketched after the list):
- `libtorch.so` loads `libgomp-947d5fa1.so.1.0.0` (this is the libgomp that ships with the PyTorch wheel)
- `libarm_compute.so` (now built with vLLM since [cpu] Dispatch un-quantized linear to oneDNN/ACL by default for AArch64 #27183) loads the system libgomp: `/usr/lib/aarch64-linux-gnu/libgomp.so.1`
- `/tmp/torchinductor_fadara01/t3/ct3zsq772eznxbeuvpmce7lpbxrem55qxducdrdo2io7itgal3sq.main.so` (from Inductor, the `torch.compile` backend for CPU) loads the system libgomp: `/usr/lib/aarch64-linux-gnu/libgomp.so.1`
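To confirm the dual-runtime state without wading through `LD_DEBUG` output, one can scan the process's memory maps for libgomp. A minimal diagnostic sketch (Linux-only, not part of vLLM; the `loaded_libgomps` helper is hypothetical):

```python
# Hypothetical diagnostic (not part of vLLM): list every libgomp copy mapped
# into the current process by scanning /proc/self/maps (Linux only).
# Two distinct paths in the output means two OpenMP runtimes are active.
import torch  # noqa: F401  -- pulls in the wheel's bundled libgomp-947d5fa1.so.1.0.0


def loaded_libgomps() -> list[str]:
    paths = set()
    with open("/proc/self/maps") as maps:
        for line in maps:
            fields = line.split()
            # Lines backed by a file have the pathname as the last field.
            if len(fields) >= 6 and "libgomp" in fields[-1]:
                paths.add(fields[-1])
    return sorted(paths)


if __name__ == "__main__":
    for path in loaded_libgomps():
        print(path)
```

Running this after loading a model (or after the first `torch.compile` call) should print both the wheel's `libgomp-947d5fa1.so.1.0.0` and `/usr/lib/aarch64-linux-gnu/libgomp.so.1` when the bug is active.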
Suggested Fix:
All `.so` libraries should link against PyTorch's libgomp (`libgomp-947d5fa1.so.1.0.0`):
- Modify the Arm Compute Library (ACL) build so that `libarm_compute.so` links against PyTorch's libgomp (`libgomp-947d5fa1.so.1.0.0`): this fixes the issue when running in eager mode (`enforce_eager=True`).
- Make sure that Inductor links against PyTorch's libgomp: this is a deeper issue (see [cpu][aarch64] Dual libgomp runtimes with torch.compile pytorch/pytorch#166087). The right fix for this belongs in PyTorch. However, given the severity of the issue, we should hot-fix it in vLLM. I suggest we `LD_PRELOAD` PyTorch's libgomp in vllm/platforms/cpu.py when the arch is AArch64, the OS is Linux, and neither libgomp nor libomp was preloaded by the user; a sketch of this follows after the list. I tried this and it fixed the low-utilization issue.
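A minimal sketch of what that hot-fix could look like, assuming it runs early during CPU-platform setup. The function name `maybe_preload_torch_libgomp`, the glob for the wheel's bundled libgomp, and the re-exec strategy are illustrative assumptions, not the actual vLLM change:

```python
# Illustrative sketch, not the actual vllm/platforms/cpu.py change: preload
# PyTorch's bundled libgomp on Linux/AArch64 unless the user already pinned
# an OpenMP runtime via LD_PRELOAD.
import glob
import os
import platform
import sys


def maybe_preload_torch_libgomp() -> None:  # hypothetical helper name
    if platform.system() != "Linux" or platform.machine() != "aarch64":
        return
    preload = os.environ.get("LD_PRELOAD", "")
    if "libgomp" in preload or "libomp" in preload:
        return  # the user already chose an OpenMP runtime; respect it
    import torch

    torch_lib = os.path.join(os.path.dirname(torch.__file__), "lib")
    # The PyTorch wheel ships its libgomp under a hashed name,
    # e.g. libgomp-947d5fa1.so.1.0.0; this glob pattern is an assumption.
    candidates = sorted(glob.glob(os.path.join(torch_lib, "libgomp*.so*")))
    if not candidates:
        return  # this torch build links the system libgomp; nothing to do
    os.environ["LD_PRELOAD"] = ":".join(filter(None, [candidates[0], preload]))
    # LD_PRELOAD is only honored by the dynamic loader at process startup,
    # so re-exec the interpreter for the preload to take effect.
    os.execv(sys.executable, [sys.executable] + sys.argv)
```

Re-exec'ing keeps the fix transparent to the user, and the early return when libgomp/libomp is already in `LD_PRELOAD` both respects a user override and prevents an exec loop after the restart.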
Before submitting a new issue...
- Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.