fix(dtensor): clear error when train/get_logprobs/score run offloaded #2392
Open
lonexreb wants to merge 1 commit into NVIDIA-NeMo:main from
Conversation
…le offloaded (NVIDIA-NeMo#1141)

Custom training loops sometimes skip ``prepare_for_training()`` / ``prepare_for_lp_inference()`` after ``offload_after_refit()`` has been called. The model is still on CPU, so the next ``train()``, ``get_logprobs()``, or ``score()`` call drives a CUDA op against CPU-resident weights and crashes with the opaque

    CUDA error: an illegal memory access was encountered

There is no signal in that error pointing at the missing prepare step, so debugging it is painful — especially for users writing their first custom loop.

Track the offload state on the worker and surface a clear ``RuntimeError`` at the entry of each GPU-bound public method that points the caller at the exact prepare step they skipped:

    train() was called while the policy weights are offloaded to CPU. This usually means a custom training loop forgot to call prepare_for_training() after offload_after_refit(). Call the appropriate prepare step before invoking train(); otherwise the underlying CUDA op will fail with "illegal memory access". See issue NVIDIA-NeMo#1141.

Implementation:

* Add ``self._weights_offloaded: bool`` (initialized False) to ``DTensorPolicyWorkerImpl``.
* ``offload_after_refit()`` flips it to True.
* ``prepare_for_training()`` and ``prepare_for_lp_inference()`` flip it back to False (and gain docstrings making the contract explicit).
* New ``_assert_weights_on_device(method_name)`` helper raises the detailed ``RuntimeError`` when the flag is set.
* ``train()``, ``get_logprobs()``, ``score()`` call the helper as the first line of their body.

The flag is *only* about the explicit phase-level offload performed by ``offload_after_refit()``. The ``dtensor_cfg.cpu_offload`` mode (which keeps weights on CPU but transparently shuttles them per-layer) is unaffected — the flag stays False under that mode and existing behavior is preserved.

Coverage:

* New ``tests/unit/models/policy/test_dtensor_worker_offload_guard.py`` exercises the on-device pass-through, the train()-side error message, and the inference-side error messages (parametrized over get_logprobs / score). The tests build a bare worker via ``__new__`` and patch the flag, so they run at the L0 tier without needing Ray, distributed init, or a real model.

Out of scope (intentionally): the same guard pattern would be useful in ``megatron_policy_worker.py`` and ``dtensor_policy_worker_v2.py``. Those workers have a slightly different offload state model and merit their own focused PRs.

Refs: NVIDIA-NeMo#1141

Signed-off-by: lonexreb <reach2shubhankar@gmail.com>
lonexreb added a commit to lonexreb/RL that referenced this pull request on May 4, 2026:
…VIDIA-NeMo#1141)

Address review feedback on PRs NVIDIA-NeMo#2392 / NVIDIA-NeMo#2393: the guard pattern was applied to ``train``, ``get_logprobs``, and ``score`` but missed two other GPU-bound public APIs that exhibit the same opaque-CUDA-crash failure mode if called while weights are offloaded:

* ``get_topk_logits`` — exists on all three workers (dtensor v1/v2 and Megatron) and runs a forward pass; exposed via ``lm_policy.py:get_topk_logits``.
* ``generate`` — Megatron-only public API for HuggingFace-framework generation; exposed via ``lm_policy.py:generate``.

Add ``self._assert_weights_on_device(...)`` as the first line of each. The hint resolves to ``prepare_for_lp_inference()`` for all four inference-side methods (``get_logprobs``, ``score``, ``get_topk_logits``, ``generate``) — same as before — since users of these APIs typically run them after refit / offload.

Tests in ``test_dtensor_worker_offload_guard.py`` and ``test_worker_offload_guard_v2_mcore.py`` are extended to parametrize over the new method names so the inference-side hint is locked in end-to-end.

Refs: NVIDIA-NeMo#1141

Signed-off-by: lonexreb <reach2shubhankar@gmail.com>
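A minimal sketch of where this follow-up commit places the new guards, assuming simplified method signatures; the real worker methods take more parameters and do real work in their bodies:

```python
# Sketch only: class name, signatures, and bodies are placeholders, not the
# actual worker code changed in this commit.
class PolicyWorkerSketch:
    def get_topk_logits(self, data, k: int):
        # Guard runs before any CUDA op can touch possibly-offloaded weights.
        self._assert_weights_on_device("get_topk_logits()")
        ...  # forward pass + top-k over logits

    def generate(self, data):
        # The Megatron-only HuggingFace-framework generation path gets the same guard.
        self._assert_weights_on_device("generate()")
        ...  # generation loop
```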
Summary
Closes #1141 (dtensor path; mcore + dtensor-v2 are tracked as out-of-scope below).
Custom training loops sometimes skip `prepare_for_training()` /
`prepare_for_lp_inference()` after `offload_after_refit()` has been
called. The model is still on CPU, so the next `train()`,
`get_logprobs()`, or `score()` call drives a CUDA op against
CPU-resident weights and crashes with the opaque
```
CUDA error: an illegal memory access was encountered
```
There is no signal in that error pointing at the missing prepare step,
so debugging it is painful — especially for users writing their first
custom loop.
Behavioural change
Track the offload state on the worker and surface a clear
`RuntimeError` at the entry of each GPU-bound public method:
```
RuntimeError: train() was called while the policy weights are offloaded
to CPU. This usually means a custom training loop forgot to call
prepare_for_training() after offload_after_refit(). Call the
appropriate prepare step before invoking train(); otherwise the
underlying CUDA op will fail with "illegal memory access". See
issue #1141.
```
The same shape of message is emitted for `get_logprobs()` and
`score()`, with the prepare step in the hint adjusted accordingly
(`prepare_for_lp_inference()` for both).
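For context, a hedged sketch of the call order a custom loop should follow; the `policy` handle and `batch` are placeholder names, not identifiers from the repository:

```python
def custom_loop_step(policy, batch):
    # After a refit, offload_after_refit() leaves the policy weights on CPU.
    policy.offload_after_refit()

    # Skipping this prepare step previously surfaced as the opaque CUDA crash;
    # with this PR, calling train() without it raises the RuntimeError above.
    policy.prepare_for_training()

    return policy.train(batch)
```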
Implementation
- Add `self._weights_offloaded: bool` (initialized `False`) to
  `DTensorPolicyWorkerImpl`.
- `offload_after_refit()` flips it to `True`.
- `prepare_for_training()` and `prepare_for_lp_inference()` flip it
  back to `False` (and pick up docstrings that document the contract).
- New `_assert_weights_on_device(method_name)` helper raises the
  detailed `RuntimeError` when the flag is set.
- `train()`, `get_logprobs()`, and `score()` call the helper as the
  first line of their body.
The flag is only about the explicit phase-level offload performed
by `offload_after_refit()`. The `dtensor_cfg.cpu_offload` mode
(which keeps weights on CPU but transparently shuttles them per-layer)
is unaffected — the flag stays `False` under that mode and the
existing behaviour is preserved exactly.
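A self-contained sketch of the mechanism the bullets above describe. Only the flag, the prepare/offload methods, and `_assert_weights_on_device()` come from this PR; the class name, method bodies, and abridged error text are illustrative:

```python
class OffloadGuardSketch:
    """Illustrative stand-in for DTensorPolicyWorkerImpl; not the real class."""

    def __init__(self):
        self._weights_offloaded = False

    def offload_after_refit(self):
        # ... move weights/optimizer state to CPU ...
        self._weights_offloaded = True

    def prepare_for_training(self):
        # ... move weights back to GPU for training ...
        self._weights_offloaded = False

    def prepare_for_lp_inference(self):
        # ... move weights back to GPU for logprob/score inference ...
        self._weights_offloaded = False

    def _assert_weights_on_device(self, method_name: str) -> None:
        # Abridged version of the error text quoted above.
        if self._weights_offloaded:
            raise RuntimeError(
                f"{method_name} was called while the policy weights are "
                "offloaded to CPU; call the appropriate prepare step after "
                "offload_after_refit(). See issue #1141."
            )

    def train(self, batch):
        self._assert_weights_on_device("train()")
        # ... normal training step ...

    def get_logprobs(self, batch):
        self._assert_weights_on_device("get_logprobs()")
        # ... forward pass ...

    def score(self, batch):
        self._assert_weights_on_device("score()")
        # ... reward/score forward pass ...
```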
Coverage
New `tests/unit/models/policy/test_dtensor_worker_offload_guard.py`:

- On-device pass-through: the guard is a no-op for all three callers
  when weights are on GPU.
- Train-side error: validates the message names `train()`,
  `prepare_for_training()`, the underlying CUDA failure mode, and
  the issue number.
- Inference-side error: parametrized over `get_logprobs` / `score`,
  validates the inference-side message points at
  `prepare_for_lp_inference()`.

The tests build a bare worker via `__new__` and patch the flag, so
they run at the L0 tier without Ray, distributed init, or a real
model.
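A sketch of the test shape described above; the import path, test names, and match patterns are assumptions rather than copies from the new test file:

```python
import pytest

# Assumed import path; the real test imports the worker from wherever
# DTensorPolicyWorkerImpl lives in the repository.
from nemo_rl.models.policy.dtensor_policy_worker import DTensorPolicyWorkerImpl


def _bare_worker(offloaded: bool) -> DTensorPolicyWorkerImpl:
    # __new__ skips __init__, so no Ray, distributed init, or model is needed.
    worker = DTensorPolicyWorkerImpl.__new__(DTensorPolicyWorkerImpl)
    worker._weights_offloaded = offloaded
    return worker


@pytest.mark.parametrize("method", ["get_logprobs", "score"])
def test_inference_guard_points_at_prepare_for_lp_inference(method):
    worker = _bare_worker(offloaded=True)
    with pytest.raises(RuntimeError, match="prepare_for_lp_inference"):
        worker._assert_weights_on_device(f"{method}()")


def test_guard_is_noop_when_weights_on_device():
    # Should not raise when the flag is clear.
    _bare_worker(offloaded=False)._assert_weights_on_device("train()")
```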
Out of scope
The same guard pattern would be useful in
`megatron_policy_worker.py` and `dtensor_policy_worker_v2.py`.
Those workers have a slightly different offload state model
(MegatronCore's own offload semantics, FSDP2 vs FSDP1, etc.) and
merit their own focused PRs. Documentation of the contract via
docstrings on the prepare/offload methods covers the most common path
in the meantime.
Test plan
Refs: #1141