Add tools/prepare_cache.py for offline GPT dataset cache preparation #4080

asolergi-nv merged 19 commits into NVIDIA:main
Conversation
BlendedDataset.__init__ called torch.distributed.get_rank() without first checking torch.distributed.is_initialized(), which crashes when running without distributed (e.g. tools/prepare_cache.py on a cold cache with blended datasets). Add the same is_initialized() guard already used in GPTDataset.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
```diff
  cache_hit = False
- if not path_to_cache or (not cache_hit and torch.distributed.get_rank() == 0):
+ if not path_to_cache or (
```
Reviewer: Why do we need `and torch.distributed.get_rank() == 0` when we're already using `log_single_rank`?

Author: I think you're overlooking the following lines of code. Could you expand them and take a look? I'm just adding the `torch.distributed.is_initialized()` check for protection.
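For reference, a minimal sketch of the guard pattern this exchange describes, factored into a hypothetical helper (the PR applies the check inline in BlendedDataset.__init__):

```python
import torch

def _should_build_on_this_rank(path_to_cache, cache_hit):
    # Without an initialized process group (e.g. a single-process run of
    # tools/prepare_cache.py), torch.distributed.get_rank() raises, so the
    # lone process is treated as rank 0 and builds the cache itself.
    rank = torch.distributed.get_rank() if torch.distributed.is_initialized() else 0
    return not path_to_cache or (not cache_hit and rank == 0)
```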
- Call update_train_iters() in build_dataset_caches so jobs using
--train-samples (without --train-iters) no longer trip the
eval-samples assertion in get_train_valid_test_num_samples().
- Reject --step-batch-size-schedule in _validate_prepare_cache_args;
  the tool does not initialize the microbatch calculator and cannot
  support its dynamic batch-size path (sketched after this list).
- Document the full unsupported-flags set (--mock-data, --sft,
--fim-data, --step-batch-size-schedule) in the module docstring,
docs/user-guide/data-loading.md, and megatron/core/datasets/readme.md.
- Tests: add coverage for --train-samples-only runs and extend the
  unsupported-modes parametrization with --step-batch-size-schedule.
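A rough sketch of that validation, assuming the rejected flags map to same-named attributes on args; the error type and message are illustrative, not the PR's actual code:

```python
def _validate_prepare_cache_args(args):
    # Modes the offline tool cannot reproduce; fail fast instead of writing
    # a cache that will not match the eventual training configuration.
    unsupported = [
        ("mock_data", "--mock-data"),
        ("sft", "--sft"),
        ("fim_data", "--fim-data"),
        ("step_batch_size_schedule", "--step-batch-size-schedule"),
    ]
    for attr, flag in unsupported:
        if getattr(args, attr, None):
            raise ValueError(f"tools/prepare_cache.py does not support {flag}")
```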
🔄 Merge queue validation started! You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/25124057142
What does this PR do?
This PR adds a new offline cache-preparation entrypoint for GPTDataset-based training so dataset caches can be built ahead of time on a CPU-only node instead of forcing rank 0 to build them during training startup.
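Conceptually, the offline build boils down to constructing the datasets once against a shared cache directory. The sketch below uses Megatron-Core's public builder API; the config values, sample counts, and tokenizer are placeholders, and the tool's real entrypoint differs:

```python
from megatron.core.datasets.blended_megatron_dataset_builder import (
    BlendedMegatronDatasetBuilder,
)
from megatron.core.datasets.gpt_dataset import GPTDataset, GPTDatasetConfig

config = GPTDatasetConfig(
    random_seed=1234,
    sequence_length=4096,
    blend=(["/data/corpus_text_document"], None),  # placeholder dataset prefix
    split="99,1,0",
    path_to_cache="/shared/gpt-dataset-cache",  # later reused by training jobs
    tokenizer=tokenizer,  # placeholder; assumed to be constructed elsewhere
    reset_position_ids=False,
    reset_attention_mask=False,
    eod_mask_loss=False,
)

# A single CPU-only process builds everything, so is_built_on_rank is
# unconditionally True; at training time the warm cache is simply read back.
train_ds, valid_ds, test_ds = BlendedMegatronDatasetBuilder(
    GPTDataset, [10_000, 1_000, 0], lambda: True, config
).build()
```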
What this introduces

tools/prepare_cache.py

Tests

Adds tests/unit_tests/data/test_prepare_cache.py covering, among other cases, --train-samples-only runs and the unsupported-flag rejections described in the commit notes above.
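One plausible shape for the unsupported-mode tests; run_prepare_cache is a hypothetical helper and the raised exception type is an assumption:

```python
import pytest

@pytest.mark.parametrize(
    "flag", ["--mock-data", "--sft", "--fim-data", "--step-batch-size-schedule"]
)
def test_prepare_cache_rejects_unsupported_flag(flag):
    # run_prepare_cache is a hypothetical wrapper around the tool's argument
    # parsing/validation; the PR's actual tests may drive the CLI differently.
    with pytest.raises(ValueError):
        run_prepare_cache(["--data-path", "dummy", flag])
```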