Uneven GPU memory usage and CUDA OOM during multi-GPU inference with Qwen3-VL-30B-A3B-Thinking #41570

@Xqle

Description

When running inference on a ~3-minute video with the Qwen3-VL-30B-A3B-Thinking model from transformers, on 8×RTX 3090 (24 GB) GPUs and with fps=1, a CUDA Out-of-Memory (OOM) error occurs.

  Traceback (most recent call last):
      File "/data01/xuqile/code/Inference/transformers/infer_qwen3vl_transformers.py", line 94, in <module>
        main()
      File "/data01/xuqile/code/Inference/transformers/infer_qwen3vl_transformers.py", line 78, in main
        generated_ids = model.generate(**inputs, max_new_tokens=args.max_tokens)
      File "/data01/xuqile/miniforge3/envs/qwen3vl/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
        return func(*args, **kwargs)
      File "/data01/xuqile/miniforge3/envs/qwen3vl/lib/python3.10/site-packages/transformers/generation/utils.py", line 2564, in generate
        result = decoding_method(
      File "/data01/xuqile/miniforge3/envs/qwen3vl/lib/python3.10/site-packages/transformers/generation/utils.py", line 2784, in _sample
        outputs = self(**model_inputs, return_dict=True)
      File "/data01/xuqile/miniforge3/envs/qwen3vl/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "/data01/xuqile/miniforge3/envs/qwen3vl/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
        return forward_call(*args, **kwargs)
      File "/data01/xuqile/miniforge3/envs/qwen3vl/lib/python3.10/site-packages/accelerate/hooks.py", line 175, in new_forward
        output = module._old_forward(*args, **kwargs)
      File "/data01/xuqile/miniforge3/envs/qwen3vl/lib/python3.10/site-packages/transformers/utils/generic.py", line 1064, in wrapper
        outputs = func(self, *args, **kwargs)
      File "/data01/xuqile/miniforge3/envs/qwen3vl/lib/python3.10/site-packages/transformers/models/qwen3_vl_moe/modeling_qwen3_vl_moe.py", line 1601, in forward
        outputs = self.model(
      File "/data01/xuqile/miniforge3/envs/qwen3vl/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "/data01/xuqile/miniforge3/envs/qwen3vl/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
        return forward_call(*args, **kwargs)
      File "/data01/xuqile/miniforge3/envs/qwen3vl/lib/python3.10/site-packages/transformers/utils/generic.py", line 1064, in wrapper
        outputs = func(self, *args, **kwargs)
      File "/data01/xuqile/miniforge3/envs/qwen3vl/lib/python3.10/site-packages/transformers/models/qwen3_vl_moe/modeling_qwen3_vl_moe.py", line 1389, in forward
        outputs = self.language_model(
      File "/data01/xuqile/miniforge3/envs/qwen3vl/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "/data01/xuqile/miniforge3/envs/qwen3vl/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
        return forward_call(*args, **kwargs)
      File "/data01/xuqile/miniforge3/envs/qwen3vl/lib/python3.10/site-packages/transformers/utils/generic.py", line 1064, in wrapper
        outputs = func(self, *args, **kwargs)
      File "/data01/xuqile/miniforge3/envs/qwen3vl/lib/python3.10/site-packages/transformers/models/qwen3_vl_moe/modeling_qwen3_vl_moe.py", line 962, in forward
        layer_outputs = decoder_layer(
      File "/data01/xuqile/miniforge3/envs/qwen3vl/lib/python3.10/site-packages/transformers/modeling_layers.py", line 94, in __call__
        return super().__call__(*args, **kwargs)
      File "/data01/xuqile/miniforge3/envs/qwen3vl/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "/data01/xuqile/miniforge3/envs/qwen3vl/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
        return forward_call(*args, **kwargs)
      File "/data01/xuqile/miniforge3/envs/qwen3vl/lib/python3.10/site-packages/accelerate/hooks.py", line 175, in new_forward
        output = module._old_forward(*args, **kwargs)
      File "/data01/xuqile/miniforge3/envs/qwen3vl/lib/python3.10/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
        return func(*args, **kwargs)
      File "/data01/xuqile/miniforge3/envs/qwen3vl/lib/python3.10/site-packages/transformers/models/qwen3_vl_moe/modeling_qwen3_vl_moe.py", line 391, in forward
        hidden_states = self.mlp(hidden_states)
      File "/data01/xuqile/miniforge3/envs/qwen3vl/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "/data01/xuqile/miniforge3/envs/qwen3vl/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
        return forward_call(*args, **kwargs)
      File "/data01/xuqile/miniforge3/envs/qwen3vl/lib/python3.10/site-packages/accelerate/hooks.py", line 175, in new_forward
        output = module._old_forward(*args, **kwargs)
      File "/data01/xuqile/miniforge3/envs/qwen3vl/lib/python3.10/site-packages/transformers/models/qwen3_vl_moe/modeling_qwen3_vl_moe.py", line 151, in forward
        routed_out = self.experts(hidden_states, router_weights, router_indices)
      File "/data01/xuqile/miniforge3/envs/qwen3vl/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "/data01/xuqile/miniforge3/envs/qwen3vl/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
        return forward_call(*args, **kwargs)
      File "/data01/xuqile/miniforge3/envs/qwen3vl/lib/python3.10/site-packages/accelerate/hooks.py", line 175, in new_forward
        output = module._old_forward(*args, **kwargs)
      File "/data01/xuqile/miniforge3/envs/qwen3vl/lib/python3.10/site-packages/transformers/models/qwen3_vl_moe/modeling_qwen3_vl_moe.py", line 120, in forward
        next_states = torch.bmm((up * self.act_fn(gate)), self.down_proj)
      torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 5.06 GiB. GPU 0 has a total capacity of 23.68 GiB of which 4.40 GiB is free. Including non-PyTorch memory, this process has 19.27 GiB memory in use. Of the allocated memory 18.67 GiB is allocated by PyTorch, and 304.64 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

However, GPU monitoring shows that only one GPU has extremely high memory usage, while the others remain underutilized (less than 50% of their memory in use).


After investigation, the memory surge appears to originate inside `Qwen3VLMoeTextExperts.forward`.
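The batched-expert formulation helps explain the spike: the `torch.bmm((up * self.act_fn(gate)), self.down_proj)` call in the traceback materializes a per-expert slice of the token activations, so the transient tensor scales with the *total* expert count, not the number of active experts per token. A back-of-envelope estimate; the config values are assumptions based on the published Qwen3-30B-A3B text config (128 experts, hidden size 2048, bf16), and the ~10.4k token count is reverse-engineered from the reported 5.06 GiB allocation, which is plausible for a 3-minute video at fps=1:

```python
def bmm_activation_bytes(num_experts: int, num_tokens: int,
                         hidden_size: int, dtype_bytes: int = 2) -> int:
    """Bytes of the transient (num_experts, num_tokens, hidden_size) tensor
    produced by the batched expert matmul. Every expert contributes a full
    (num_tokens, hidden_size) slice, even experts that receive few tokens."""
    return num_experts * num_tokens * hidden_size * dtype_bytes

# Assumed Qwen3-30B-A3B-like config: 128 experts, hidden 2048, bf16 (2 bytes).
gib = bmm_activation_bytes(128, 10_362, 2048) / 2**30
print(f"{gib:.2f} GiB")  # ≈ the 5.06 GiB allocation reported in the traceback
```

Because the allocation lands on whichever single GPU hosts that decoder layer, this matches the observed pattern of one GPU filling up while the others stay underutilized.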


Is there a known workaround for this, or will future releases include optimizations (e.g. DTensor support) for the MoE expert modules to improve memory distribution during multi-GPU inference?
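For what it's worth, two mitigations are commonly suggested for this pattern, though neither shrinks the batched-expert allocation itself: setting `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True` as the traceback's OOM message recommends, and passing a `max_memory` map to `from_pretrained` so `device_map="auto"` leaves headroom on GPU 0 for transient activations. A minimal sketch of the latter; the GiB values are illustrative assumptions, not tuned recommendations:

```python
def build_max_memory(num_gpus: int, per_gpu: str = "20GiB",
                     gpu0: str = "12GiB", cpu: str = "64GiB") -> dict:
    """Build a max_memory map for accelerate's device_map="auto" placement,
    capping GPU 0 lower so it keeps headroom for transient MoE activations."""
    max_memory = {i: per_gpu for i in range(num_gpus)}
    max_memory[0] = gpu0      # GPU 0 also holds inputs and sees the bmm spike
    max_memory["cpu"] = cpu   # permit CPU offload if the weights don't fit
    return max_memory

max_memory = build_max_memory(8)
# then, when loading the model:
#   from_pretrained(model_id, device_map="auto", max_memory=max_memory, ...)
```

This shifts more of the weights off GPU 0, so the ~5 GiB transient allocation has room to land there; it does not reduce the allocation's size, so very long videos may still OOM.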
