Description
Hi,
I'm trying to build from source on an NVIDIA GB200 system (aarch64, SBSA). I followed the steps in the README and in related issues, but the build is consistently killed after ~20 minutes due to OOM. Here's a summary of my setup and the steps I'm using:
System Setup:
- CUDA: 12.8
- cuDNN: 9.8
- TransformerEngine: release_v2.3
I don't have sudo access on the system, and Docker support is unreliable, so I can't increase the swap space as suggested in one of the related issues.
I am running the following snippet:

```shell
git clone --branch release_v2.3 --recursive https://github.com/NVIDIA/TransformerEngine.git transformer_engine
cd transformer_engine
git submodule update --init --recursive
MAX_JOBS=1 \
NVTE_BUILD_THREADS_PER_JOB=1 \
NVTE_FRAMEWORK=pytorch \
python3 setup.py bdist_wheel --dist-dir=$HOME/transformer_engine/wheels
pip3 install --no-cache-dir --verbose $HOME/transformer_engine/wheels/transformer_engine*.whl
```
The build runs out of memory even with MAX_JOBS=1 and NVTE_BUILD_THREADS_PER_JOB=1. It often fails around step 36/45 while compiling one of the transpose files, though the exact failure point varies. What I noticed is that memory usage keeps climbing during the step where the build gets stuck, until the OOM killer terminates the process. The output is similar to what you can see here: NVIDIA-NeMo/NeMo#10131.
I've also tried setting CMAKE_BUILD_PARALLEL_LEVEL=1 as mentioned here, but with no success.
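For reference, a sketch of the fully serialized invocation I ended up with, combining all three variables (the dist-dir path is specific to my setup, not canonical):

```shell
# Sketch: pin every layer of the build to a single job.
# MAX_JOBS                   - caps Ninja parallelism in PyTorch's extension build
# NVTE_BUILD_THREADS_PER_JOB - caps threads per TransformerEngine build job
# CMAKE_BUILD_PARALLEL_LEVEL - caps CMake's default --parallel level
export NVTE_FRAMEWORK=pytorch
export MAX_JOBS=1
export NVTE_BUILD_THREADS_PER_JOB=1
export CMAKE_BUILD_PARALLEL_LEVEL=1
# Run from the repository root (guarded so the snippet is a no-op elsewhere).
if [ -f setup.py ]; then
    python3 setup.py bdist_wheel --dist-dir="$HOME/transformer_engine/wheels"
fi
```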
I am aware of the flash-attention-related memory issues, but setting the recommended environment variables (from other issues) did not help, even though the GB200 should have ample memory.
Is there any additional configuration or workaround that would allow me to build successfully in this environment?
Any suggestions or guidance would be greatly appreciated. Let me know if you need additional logs or traces.
Thanks!