
Update NIXL Dockerfile + vLLM patch with variable TP#9

Merged
nnshah1 merged 9 commits into main from
ptarasiewicz/nixl-update-dockerfile
Mar 4, 2025

Conversation

@ptarasiewiczNV
Contributor

@ptarasiewiczNV ptarasiewiczNV commented Mar 4, 2025

What does the PR do?

Update the NIXL Dockerfile to work with NIXL top-of-tree (TOT). Also includes a few minor fixes to the vLLM engine args, and runs vLLM from a checkpoint rather than the HF Hub.

Update vLLM patch with variable TP changes.

Checklist

  • PR title reflects the change and is of format <commit_type>: <Title>
  • Changes are described in the pull request.
  • Related issues are referenced.
  • Populated github labels field
  • Added test plan and verified test passes.
  • Verified that the PR passes existing CI.
  • Verified copyright is correct on all changed files.
  • Added succinct git squash message before merging ref.
  • All template sections are filled out.
  • Optional: Additional screenshots for behavior/output changes with before/after.

Commit Type:

Check the conventional commit type
box here and add the label to the github PR.

  • build
  • ci
  • docs
  • feat
  • fix
  • perf
  • refactor
  • revert
  • style
  • test

Related PRs:

Where should the reviewer start?

Test plan:

  • CI Pipeline ID:

Caveats:

Background

Related Issues: (use one of the action keywords Closes / Fixes / Resolves / Relates to)

  • closes GitHub issue: #xxx

@ptarasiewiczNV changed the title from "Update NIXL Dockerfile" to "Update NIXL Dockerfile + vLLM patch with variable TP" on Mar 4, 2025
Contributor

@ishandhanani ishandhanani left a comment


🚢

@ishandhanani
Contributor

Let's get these changes in and follow up with discovery via ETCD.

@piotrm-nvidia
Contributor

piotrm-nvidia commented Mar 4, 2025

The executed pipeline doesn't build a new container, so even if all tests pass, that doesn't mean anything works.

@saturley-hall
Member

The executed pipeline doesn't build a new container, so even if all tests pass, that doesn't mean anything works.

Do we need to add an additional framework target and/or specific testing step here?

@nnshah1
Contributor

nnshah1 commented Mar 4, 2025

The executed pipeline doesn't build a new container, so even if all tests pass, that doesn't mean anything works.

Do we need to add an additional framework target and/or specific testing step here?

we don't currently have tests for this - but WIP

@nnshah1 nnshah1 merged commit 6d6a551 into main Mar 4, 2025
@nnshah1 nnshah1 deleted the ptarasiewicz/nixl-update-dockerfile branch March 4, 2025 18:37
kylehh pushed a commit to kylehh/dynamo that referenced this pull request Apr 11, 2025
Co-authored-by: Piotr Tarasiewicz Nvidia <ptarasiewicznv@Piotrs-MacBook-Pro.local>
ranrubin added a commit that referenced this pull request Apr 20, 2026
Fixes all actionable items from the second review:

Bug fixes:
- #1: Change returncode=4 → returncode=2 in pytest_configure exit
  (4 is reserved by pytest for EXIT_USAGEERROR)
- #2: Add comment clarifying HF_HUB_OFFLINE double-clear is safe
  (already in _MODELS_DIR_ENV_KEYS; loop correctly restores original)

Test quality:
- #7: Add missing assertions to test_apply_hf_home_layout
  (HF_HUB_OFFLINE, TRANSFORMERS_OFFLINE, DYNAMO_MODELS_DIR, TRANSFORMERS_CACHE)
- #8: Use monkeypatch in tests 3 & 4 for proper env isolation
  (prevents pre-existing env vars from leaking on test failure)

Design / correctness:
- #3: Fix _models_dir_env docstring ("exactly once" → "once per worker")
- #4: Add comment noting TRANSFORMERS_CACHE deprecation
- #5: Update --models-dir help text and docs to reflect both supported
  layouts (bare HF_HUB_CACHE and HF_HOME), not just bare
- #10: Restore pytest.skip() in download_lora() (test-only infra);
  remove now-redundant guard from minio_lora_service fixture
- #11: Raise hub/ detection log to WARNING with guidance
- #12: Replace shutil.rmtree(ignore_errors=True) with try/except
  so cleanup failures are logged rather than silently swallowed

Not addressed: #6 (keep gpu_0 per project marker policy), #9 (pytester
test deferred — complex due to conftest dependencies, low severity)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

Signed-off-by: rrubin <rrubin@nvidia.com>
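The env-isolation pattern from item #8 above can be sketched with pytest's public `MonkeyPatch` API. This is an illustrative standalone sketch, not the project's actual test code; in a real test you would request the `monkeypatch` fixture instead of constructing `MonkeyPatch` yourself, and pytest would call `undo()` for you at teardown.

```python
import os

import pytest


def demo_env_isolation() -> tuple:
    # pytest.MonkeyPatch records each change and undoes all of them in
    # reverse order, restoring pre-existing values (or absence) even if
    # the code under test fails midway. That is what keeps a failing
    # test from leaking env vars like DYNAMO_MODELS_DIR into later tests.
    original = os.environ.get("DYNAMO_MODELS_DIR")
    mp = pytest.MonkeyPatch()
    try:
        mp.setenv("DYNAMO_MODELS_DIR", "/models")
        value_inside = os.environ["DYNAMO_MODELS_DIR"]
    finally:
        mp.undo()
    restored = os.environ.get("DYNAMO_MODELS_DIR") == original
    return value_inside, restored
```

Compared with hand-rolled `os.environ` save/restore, the fixture version also survives assertion failures mid-test, which is exactly the leak item #8 closes.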
ranrubin added a commit that referenced this pull request Apr 20, 2026
Bug fixes:
- #1: Add monkeypatch env isolation to test_apply_sets_writable_transformers_cache
- #2: Add TRANSFORMERS_CACHE assertion to test_apply_bare_cache_layout
  (bare-layout path was missing the writable-dir check present in HF_HOME test)

Minor cleanups:
- #4: Move `import pytest` from inside download_lora() to module top-level
  (lora_utils.py is test-only infra; pytest is always available)
- #5: Replace pytestconfig.getoption("--models-dir") re-checks in
  predownload_models/predownload_tokenizers with os.environ.get("DYNAMO_MODELS_DIR")
  (_models_dir_env runs first and sets the var; single source of truth)

New coverage tests:
- #7: test_models_dir_nonexistent_exits_with_code_2 — subprocess test
  verifying pytest_configure exits with returncode=2 on bad path
- #8: test_download_lora_skips_in_models_dir_mode — verifies download_lora()
  raises pytest.skip.Exception when DYNAMO_MODELS_DIR is set

Not addressed: #3 (keep gpu_0 per project guidelines and previous review
retraction), #6 (hook ordering is guaranteed), #9 (complex, low priority)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

Signed-off-by: rrubin <rrubin@nvidia.com>
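The subprocess approach behind test #7, asserting on a child interpreter's return code so a configure-time `sys.exit` cannot tear down the parent test session, reduces to a small generic sketch. The real test launches pytest with a bad `--models-dir`; here the child snippet is a stand-in.

```python
import subprocess
import sys


def child_exit_code(snippet: str) -> int:
    # Run the snippet in a fresh interpreter. A SystemExit raised there
    # becomes the child's return code, which the parent can assert on
    # without its own pytest run being interrupted.
    result = subprocess.run([sys.executable, "-c", snippet])
    return result.returncode
```

For example, `child_exit_code("import sys; sys.exit(2)")` returns 2, which is the same observation the new test makes about `pytest_configure` on a bad path.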
biswapanda added a commit that referenced this pull request May 8, 2026
Quick-win review fixes from PR #9131. Heavy-lift items (#9
prompt_token_ids env-gate, #11 update_weights atomicity, #13
per-choice completion_token_ids) tracked separately as follow-ups.

handlers.py
  - Catch EngineDeadError before the generic except in all 8 RL handlers
    (pause/resume/liveness_probe/get_state/flush_cache/update_weights_from_path/
    load_lora_adapter/unload_lora_adapter): match the existing shutdown
    pattern in this file so admin calls also surface engine death instead
    of leaving a broken worker alive.
  - get_state: fall back to a no-op collective_rpc when check_health is
    absent — same fallback liveness_probe already uses, otherwise older
    engines without check_health always look alive.
  - load_lora_adapter hot-swap path: a remove_lora() failure now returns
    a 400-style error response (was: silent log warn + continue, leaving
    add_lora to no-op against the still-registered ID); a
    reset_prefix_cache() failure after add_lora succeeds also returns
    error (was: log error and continue, leaving stale KV from the old
    adapter routable).
  - unload_lora_adapter: an unregister_model() failure after engine
    remove_lora succeeds now returns error (was: log warn and report
    success, leaving model=<lora_name> still routed to this worker even
    though _resolve_lora_request would now fall back to the base model).

container/deps/vllm/install_vllm.sh
  - Pin prime-rl install to an immutable commit SHA
    (d49f3939e7dca29bceb9ed515cc1782497b67e81 ↔ tag v0.5.1.dev101) so a
    re-pointed tag upstream can't change what we ship. PRIME_RL_REF kept
    in build logs for human readability; PRIME_RL_COMMIT is the
    authoritative pin.
  - Replace `echo "\n=== ..."` with `printf '\n=== ...\n'` (shellcheck SC2028).

lib/llm/src/http/service/openai.rs
  - Force `request.inner.logprobs = Some(true)` unconditionally in both
    RL token-id promotion blocks (was: only when None). RL extraction of
    completion_token_ids depends on logprobs being on at the engine; an
    explicit logprobs=false would otherwise silently drop them.
  - Bound `/v1/rl/ready` per-worker probes with a 5s timeout (override
    via DYN_RL_LIVENESS_TIMEOUT_MS). Was reusing the shared 600s
    http_client, so one wedged worker could block readiness for 10
    minutes instead of failing fast as 503.
  - Tokenize Chat handler: call `request.validate()?` before
    `merged_chat_template_kwargs()` so the
    continue_final_message + add_generation_prompt mutual-exclusion
    constraint is enforced (validate() existed but was never invoked).

lib/llm/src/protocols/openai/chat_completions.rs
  - Update stale doc comments on the legacy `tokens` and
    `return_token_ids` fields: they pointed callers at the now-404
    `/v1/chat/completions/tokens` URI. Direct callers to the canonical
    top-level `prompt_token_ids` extension and `nvext.extra_fields`
    instead.

cargo check -p dynamo-llm: clean (1 pre-existing benign warning).
cargo test -p dynamo-llm --test test_common_ext: 15 passed.
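The ordering constraint in the handlers.py change, catching `EngineDeadError` before the generic handler, is a plain Python rule: `except` clauses are tried top to bottom. A minimal sketch of the shape (here `EngineDeadError` is a stand-in class, not an import from vLLM, and the handler body is hypothetical):

```python
import asyncio


class EngineDeadError(Exception):
    """Stand-in for vLLM's engine-death exception."""


async def admin_handler(call):
    try:
        return await call()
    except EngineDeadError:
        # Listed before the generic clause on purpose: `except Exception`
        # would otherwise swallow engine death and report an ordinary
        # error, leaving a broken worker that still looks alive.
        raise
    except Exception as e:
        return {"status": "error", "message": str(e)}
```

With only the generic clause, an engine-death exception would be converted into a normal error response, which is precisely the "leaving a broken worker alive" failure mode the commit fixes across the 8 RL handlers.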
MatejKosec added a commit that referenced this pull request May 10, 2026
…1 + 888

Sibling read-side fix to Bug B (cd20043, which closed the write-side
race in `coordinator.release()` via atomic remove). Two read-side
lookups in `commit_usaa1` could observe the wrapper-side
`cd_request_state` or coordinator-side `coordinator.states` going empty
between the `is_active` check at the top of `decode_usaa` (line 614)
and the actual lookup, under `kv_load_failure_policy=recompute`.

Run #9 cascade (decode.log on workstation 10.57.202.146):
  decode_commits_closed_short=7
  Recovered from KV load failure=8
  CD USAA-1 fatal=3
  EngineDeadError=1

Two sites fixed:

Site B — line 761 (`cd_request_state.get(rid)`): converts the bail to
a soft-skip with `crate::audit!("commit_usaa1_state_gone", ...)`. If
the wrapper state has been cleaned up between is_active and now, the
cleanup chain has already run — there's nothing for us to do, and
bailing would propagate worker-fatal to vLLM's EngineCore for a race
vLLM already owns recovery for.

Site A — line 888 (`coordinator.session_for(rid)`): replaces the late
coordinator lookup with `updated.session.lock().clone()` — read from
the held `Arc<CdRequestState>` whose `session` field was set
synchronously inside `commit_gnmt_remote`'s Ok arm. Carrying the
`Arc<dyn Session>` on the wrapper's per-request state means the
spawn site uses the same session that was opened for this request;
even if cleanup concurrently removes from `coordinator.states`, the
held Arc keeps the session alive. The None branch is unreachable on
documented orderings, but routes to `cleanup_failed_request` (the
documented async-failure path) rather than bail! so an unusual
ordering surfaces as a graceful per-request failure to vLLM, not a
worker-fatal up the EngineCore stack.

Matches Graham's Rule 13 (graceful spawned-task error handling) and
the `feedback-bail-vs-softskip-callbacks` memory's Tier-2 ranking.

Signed-off-by: Matej Kosec <mkosec@nvidia.com>
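The Site A fix is an instance of a general pattern: capture a strong reference to the shared object when the request is admitted, instead of re-querying a shared map from a spawned task that may race with cleanup. A loose Python sketch of that pattern (names modeled on the commit message; the real code holds an `Arc<dyn Session>` inside `CdRequestState` in Rust):

```python
import threading


class Coordinator:
    """Shared request-id -> per-request-state map with atomic removal."""

    def __init__(self):
        self._lock = threading.Lock()
        self.states = {}

    def admit(self, rid, session):
        with self._lock:
            state = {"session": session}
            self.states[rid] = state
            return state  # caller keeps this reference for later use

    def release(self, rid):
        # Atomic remove under the lock: concurrent readers either see
        # the entry or see it gone, never a torn state.
        with self._lock:
            return self.states.pop(rid, None)


def spawn_site(held_state):
    # Read the session from the reference captured at admission. Even if
    # a concurrent release() has already emptied coordinator.states, the
    # held reference keeps the session reachable, so this cannot come up
    # empty the way a late map lookup can.
    return held_state["session"]
```

The late-lookup version (`coordinator.states[rid]` inside `spawn_site`) is the race the commit removes: cleanup can win between the `is_active` check and the lookup, turning a per-request failure into a worker-fatal error.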