
Add tools/prepare_cache.py for offline GPT dataset cache preparation#4080

Merged
asolergi-nv merged 19 commits into NVIDIA:main from asolergi-nv:prepare_cache
Apr 29, 2026

Conversation

@asolergi-nv
Contributor

@asolergi-nv asolergi-nv commented Mar 31, 2026

What does this PR do?

This PR adds a new offline cache-preparation entrypoint for GPTDataset-based training so dataset caches can be built ahead of time on a CPU-only node instead of forcing rank 0 to build them during training startup.

What this introduces

  • Adds tools/prepare_cache.py
    • Reuses Megatron’s normal argument parsing/validation path so the tool can be launched with the same dataset-related training args
    • Computes train/valid/test sample targets via get_train_valid_test_num_samples() so cache keys match training behavior
    • Builds caches through the same GPTDataset + BlendedMegatronDatasetBuilder path used by training (sketched after this list)
    • Supports both:
      • --data-path + --split
      • per-split dataset definitions (--train-data-path, --valid-data-path, --test-data-path, --per-split-data-args-path)
    • Adds optional --prepare-cache-world-size to let a single-node prep run match the future training topology when sample counts depend on world size / DP size
  • Entry point parity changes
    • Updates pretrain_mamba.py so its GPTDatasetConfig construction matches GPT-relevant fields already present in pretrain_gpt.py, specifically:
      • multiple_validation_sets
      • full_validation
    • Includes a small formatting cleanup in pretrain_gpt.py
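
The cache-build flow the tool reuses is essentially the one pretrain_gpt.py already goes through. Below is a minimal sketch of that flow, not the verbatim contents of tools/prepare_cache.py; it assumes the usual Megatron-LM helpers (get_args, update_train_iters, get_train_valid_test_num_samples, core_gpt_dataset_config_from_args, BlendedMegatronDatasetBuilder), that Megatron's normal argument parsing has already run, and that the script is executed from the repo root. Exact helper locations and signatures can vary slightly between versions.

    # Illustrative sketch of the offline cache-build flow; not the actual
    # tools/prepare_cache.py. Helper names are the usual Megatron-LM ones and
    # args are assumed to have been parsed via Megatron's normal argument path.
    from megatron.training import get_args
    from megatron.training.training import (
        get_train_valid_test_num_samples,
        update_train_iters,
    )
    from megatron.core.datasets.gpt_dataset import GPTDataset
    from megatron.core.datasets.blended_megatron_dataset_builder import (
        BlendedMegatronDatasetBuilder,
    )
    from pretrain_gpt import core_gpt_dataset_config_from_args


    def build_dataset_caches():
        args = get_args()

        # Derive --train-iters from --train-samples when only the latter is
        # given, so get_train_valid_test_num_samples() does not assert.
        update_train_iters(args)

        # Same train/valid/test sample targets as training, so the cache keys
        # written here match what training will look up later.
        sizes = get_train_valid_test_num_samples()

        # Same dataset config as training (data paths, split, cache dir, ...),
        # built with the same helper pretrain_gpt.py uses.
        config = core_gpt_dataset_config_from_args(args)

        # Building the datasets writes the index caches to disk as a side
        # effect; the returned dataset objects are not needed afterwards.
        BlendedMegatronDatasetBuilder(
            GPTDataset,
            sizes,
            lambda: True,  # single offline process, so always build locally
            config,
        ).build()

On a CPU-only node this can be launched with the same dataset-related flags as the eventual training job (--data-path/--split or the per-split arguments), plus --prepare-cache-world-size when sample counts depend on the future world size.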

Tests

Adds tests/unit_tests/data/test_prepare_cache.py covering:

  • cache preparation for blended datasets
  • cache preparation for per-split datasets
  • --prepare-cache-world-size normalization
  • unsupported-mode rejection
  • cache-hit rebuilds using:
    • dataloader_fast_cache_load=True
    • dataloader_defer_npy_index_mmap=True
  • verification that deferred/lazy-loaded cached datasets still return the same samples as the normal cache-hit path (sketched below)
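
The last check is roughly the shape sketched below. This is illustrative rather than the actual test in tests/unit_tests/data/test_prepare_cache.py: build_datasets is a hypothetical stand-in for whatever helper the test uses to construct the train/valid/test datasets against an already-populated cache directory, while the two option names are the ones listed above.

    # Illustrative sketch of the sample-equality check; `build_datasets` is a
    # hypothetical stand-in for the test's own dataset-construction helper.
    def check_deferred_cache_hit_matches_normal(config_kwargs):
        # Normal cache-hit path: indexes already exist on disk, loaded eagerly.
        normal_splits = build_datasets(**config_kwargs)

        # Deferred/lazy cache-hit path: same caches, loaded with the fast-load
        # and deferred index-mmap options enabled.
        deferred_splits = build_datasets(
            **config_kwargs,
            dataloader_fast_cache_load=True,
            dataloader_defer_npy_index_mmap=True,
        )

        for normal, deferred in zip(normal_splits, deferred_splits):
            assert len(normal) == len(deferred)
            # Spot-check a handful of samples from each split.
            for idx in range(min(len(normal), 16)):
                sample_a, sample_b = normal[idx], deferred[idx]
                assert sample_a.keys() == sample_b.keys()
                for key in sample_a:
                    assert (sample_a[key] == sample_b[key]).all()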

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message or mention @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge-conflicts are resolved and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch: the proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

@copy-pr-bot

copy-pr-bot Bot commented Mar 31, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@svcnvidia-nemo-ci svcnvidia-nemo-ci marked this pull request as draft March 31, 2026 15:30
@github-actions
Contributor

This PR has been automatically converted to draft because all PRs must start as drafts.

When you are ready for review, click Ready for Review to begin the review process. This will:

  1. Add the oncall reviewer (optional reviewer)
  2. Add required review teams based on your changes

See the contribution guide for more details.

@asolergi-nv asolergi-nv marked this pull request as ready for review March 31, 2026 17:49
@asolergi-nv
Contributor Author

/ok to test d72afb6

@asolergi-nv
Contributor Author

/ok to test c87feda

@asolergi-nv
Contributor Author

/ok to test 2cd79f9

asolergi-nv and others added 5 commits April 15, 2026 16:47
BlendedDataset.__init__ called torch.distributed.get_rank() without
first checking torch.distributed.is_initialized(), which crashes when
running without distributed (e.g. tools/prepare_cache.py on a cold
cache with blended datasets). Add the same is_initialized() guard
already used in GPTDataset.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
cache_hit = False

# old:
if not path_to_cache or (not cache_hit and torch.distributed.get_rank() == 0):
# new (adds the is_initialized() guard described in the commit message above):
if not path_to_cache or (
    not cache_hit and (not torch.distributed.is_initialized() or torch.distributed.get_rank() == 0)
):
Contributor


Why do we need the `and torch.distributed.get_rank() == 0` part when we're already using log_single_rank?

Contributor Author


I think you're overlooking the following lines of code. Could you expand them and take a look? I'm just adding the torch.distributed.is_initialized() check for protection.

Comment thread: tools/prepare_cache.py
Contributor

@dimapihtar dimapihtar left a comment


LGTM. Thank you!

@svcnvidia-nemo-ci svcnvidia-nemo-ci added the Final Review label (PR is in the "final review" stage) Apr 22, 2026
@svcnvidia-nemo-ci svcnvidia-nemo-ci removed the Final Review label (PR is in the "final review" stage) Apr 23, 2026
…ache

    - Call update_train_iters() in build_dataset_caches so jobs using
      --train-samples (without --train-iters) no longer trip the
      eval-samples assertion in get_train_valid_test_num_samples().
    - Reject --step-batch-size-schedule in _validate_prepare_cache_args;
      the tool does not initialize the microbatch calculator and cannot
      support its dynamic batch-size path (a sketch follows this commit message).
    - Document the full unsupported-flags set (--mock-data, --sft,
      --fim-data, --step-batch-size-schedule) in the module docstring,
      docs/user-guide/data-loading.md, and megatron/core/datasets/readme.md.
    - Tests: add coverage for the --train-samples-only case and extend the
      unsupported-modes parametrization with --step-batch-size-schedule.
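
For reference, the unsupported-mode rejection described above might look roughly like the sketch below. The function name and flag names come from this commit message; the body and error text are illustrative, not the actual code in tools/prepare_cache.py.

    # Illustrative sketch only; the real _validate_prepare_cache_args may differ.
    def _validate_prepare_cache_args(args):
        # Modes the tool does not support (also listed in the module docstring
        # and in docs/user-guide/data-loading.md).
        unsupported = ["mock_data", "sft", "fim_data", "step_batch_size_schedule"]
        for attr in unsupported:
            if getattr(args, attr, None):
                flag = "--" + attr.replace("_", "-")
                raise ValueError(f"tools/prepare_cache.py does not support {flag}")
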
@asolergi-nv
Contributor Author

/ok to test c1641a7

@asolergi-nv asolergi-nv enabled auto-merge April 23, 2026 14:38
@svcnvidia-nemo-ci svcnvidia-nemo-ci added the Final Review label (PR is in the "final review" stage) Apr 23, 2026
@asolergi-nv
Contributor Author

/ok to test ec3cb71

@svcnvidia-nemo-ci svcnvidia-nemo-ci added the Approved label (All necessary approvals have been made) and removed the Final Review label (PR is in the "final review" stage) Apr 29, 2026
@asolergi-nv asolergi-nv added this pull request to the merge queue Apr 29, 2026
@svcnvidia-nemo-ci

🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/25124057142

Merged via the queue into NVIDIA:main with commit c5201a0 Apr 29, 2026
70 of 72 checks passed
@asolergi-nv asolergi-nv deleted the prepare_cache branch April 29, 2026 18:07

Labels

Approved (All necessary approvals have been made), complexity: medium

6 participants