
feat: add OOM pre-check for vision models and fix InternVL image dimension handling #1253

Open
sufubao wants to merge 3 commits into main from fix_mm_check

Conversation

@sufubao
Collaborator

sufubao commented Apr 2, 2026

…nsion handling

Contributor

gemini-code-assist bot left a comment


Code Review

This pull request introduces an OOM (Out of Memory) pre-check mechanism for Qwen-family vision models by performing a dummy forward pass with worst-case image dimensions during initialization. It updates several model implementations to support this check and refactors the ViT model to derive inference parameters from the configuration instead of environment variables. Feedback focuses on improving memory management within the pre-check function by explicitly deleting tensors and clearing the CUDA cache, as well as refining exception handling and logging practices.
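For orientation, a minimal sketch of what such a dummy-forward OOM pre-check can look like. The helper name, the `encode()` entry point, and the worst-case dimensions are assumptions for illustration, not the PR's actual code; whether to clear the CUDA cache afterwards is exactly the point on which the review feedback and a later follow-up commit take different positions.

```python
# Illustrative sketch only: the helper name, visual_model.encode(), and the
# worst-case dimensions are assumptions, not the PR's actual implementation.
import torch


def check_max_image_infer(visual_model, max_h=3584, max_w=3584, device="cuda"):
    """Run one dummy forward pass at worst-case image size so an OOM surfaces
    at startup instead of on the first oversized request."""
    dummy = torch.zeros((1, 3, max_h, max_w), dtype=torch.float16, device=device)
    try:
        with torch.no_grad():
            visual_model.encode(dummy)
    except torch.cuda.OutOfMemoryError as err:
        raise RuntimeError(
            f"vision model cannot process a {max_h}x{max_w} image on this GPU; "
            "reduce the maximum image resolution or free GPU memory"
        ) from err
    finally:
        del dummy
        # The review suggests freeing the probe allocation here; a later commit
        # deliberately skips empty_cache() so the measured peak stays reserved.
        torch.cuda.empty_cache()
```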

Comment thread lightllm/models/qwen2_vl/vision_process.py Outdated
Comment thread lightllm/models/qwen2_vl/vision_process.py Outdated
Comment thread lightllm/models/qwen2_vl/vision_process.py Outdated
sufubao and others added 2 commits April 3, 2026 12:08
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
sufubao added a commit that referenced this pull request Apr 16, 2026
Design spec for eliminating the multimodal OOM class that surfaced with
Qwen3.5-VL. Replaces PR #1253 in full: absorbs its Qwen stress helpers
(minus the empty_cache call that released the measured peak), adds the
min-max bug fix at visualserver/manager.py:87, tightens visual+audio
concurrency semaphores from x8 to x1, ports _check_decode_infer from
origin/qw35_stable, and re-shapes the LLM init into a two-pass
probe-measure-rebuild-validate auto-profile that eliminates --mem_fraction
as a tuning knob.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
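The min-max fix at visualserver/manager.py:87 referenced here (and detailed in the commit below) is the usual clamp-direction slip. A hypothetical reconstruction of that bug class, with made-up variable names rather than the file's real contents:

```python
# Hypothetical reconstruction of the bug class described above; variable names
# are made up and do not match visualserver/manager.py.
visual_infer_batch_size = 8  # e.g. from --visual_infer_batch_size
dp_size = 2

# Buggy: min() returns 1 whenever the quotient is >= 1, so the per-DP batch
# size is silently capped at 1 regardless of the requested value.
per_dp_batch = min(1, visual_infer_batch_size // dp_size)  # -> 1

# Fixed: max() keeps a floor of 1 while honoring the requested size.
per_dp_batch = max(1, visual_infer_batch_size // dp_size)  # -> 4
```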
sufubao added a commit that referenced this pull request Apr 16, 2026
…ening

- fix min/max bug at visualserver/manager.py:87 — was silently capping
  per-DP visual batch size to 1 regardless of --visual_infer_batch_size
- tighten visual and audio runtime semaphores from *8 to *1 so runtime
  concurrency never exceeds the stress-tested peak
- add per-model _check_max_len_infer for Qwen2_VL, Qwen2_5_VL, Qwen3_VL,
  Qwen3_omni_moe (absorbed from PR #1253)
- qwen_vl_check_max_len_infer deliberately omits torch.cuda.empty_cache
  so the stress peak stays pinned in the caching allocator at the driver
  level for the rest of process lifetime — this is the reserve-then-yield
  contract that lets the LLM's later profile_size see peer reservations
- wire _check_max_len_infer call site into visual model_rpc.py::exposed_init_model
  with hasattr gate and warning log for uncovered model types
- absorb PR #1253's config-driven worst-case derivation for InternVL
- port _check_decode_infer helper from origin/qw35_stable into basemodel.py
  (not yet called from __init__ — Commit 2 wires it in)

Part of the multimodal OOM fix. See
docs/superpowers/specs/2026-04-16-multimodal-oom-fix-design.md for rationale.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
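A sketch of the hasattr-gated call-site wiring this commit describes for visual model_rpc.py::exposed_init_model; the surrounding method body and logger setup are placeholders, not the repository's actual code:

```python
# Placeholder sketch: the real exposed_init_model body and logger differ.
import logging

logger = logging.getLogger(__name__)


def exposed_init_model(self, kvargs):
    self.model = self._build_visual_model(kvargs)  # hypothetical init step
    # Only model classes that implement the per-model stress check run it;
    # uncovered model types get a warning so the gap is visible in the logs.
    if hasattr(self.model, "_check_max_len_infer"):
        self.model._check_max_len_infer()
    else:
        logger.warning(
            "model type %s has no _check_max_len_infer; skipping OOM pre-check",
            type(self.model).__name__,
        )
```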
