
Refactor CUDA graph API: decompose cuda_graph_scope into full_iteration impl, inference scope, and per-layer capture modules #4292

Open

buptzyb wants to merge 2 commits into NVIDIA:main from buptzyb:main-strict-refactor

Conversation

@buptzyb
Contributor

@buptzyb buptzyb commented Apr 14, 2026

What does this PR do?

Dev PR #4293

This PR decomposes the overloaded cuda_graph_scope field into three dedicated, semantically distinct concepts and cleans up naming throughout.

Problem

The old API overloaded --cuda-graph-scope with three unrelated concerns:

  • --cuda-graph-scope full_iteration → full-iteration training graphs (a capture strategy, not a module)
  • --cuda-graph-scope full_iteration_inference → block-owned inference graphs (inference ownership, not a module)
  • --cuda-graph-scope attn mlp → per-layer capture regions (the actual intended use)

CudaGraphScope mixed iteration-level control flow with per-layer module selection. The three concepts have nothing in common and cannot be meaningfully combined in one field.
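
In other words, the old enum looked roughly like this (a sketch reconstructed from the bullets above; exact member spellings in the codebase may differ):

    from enum import Enum

    class CudaGraphScope(Enum):
        full_iteration = "full_iteration"                      # capture strategy, not a module
        full_iteration_inference = "full_iteration_inference"  # inference ownership, not a module
        attn = "attn"                                          # per-layer capture region
        mlp = "mlp"                                            # per-layer capture region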

Solution

Four concrete changes:

  1. full_iteration becomes its own cuda_graph_impl value.
    --cuda-graph-impl full_iteration replaces --cuda-graph-impl local --cuda-graph-scope full_iteration.

  2. --inference-cuda-graph-scope is a new dedicated field.
    InferenceCudaGraphScope (none / layer / block) replaces full_iteration_inference in cuda_graph_scope. The default for --cuda-graph-impl local is layer (preserving prior behaviour).

  3. CudaGraphScope is renamed to CudaGraphModule; cuda_graph_scope to cuda_graph_modules.
    With the two non-module values removed, the enum and field names now accurately reflect their purpose: selecting which per-layer modules to capture.

  4. Normalization is centralized in cuda_graph_config.py.
    normalize_cuda_graph_modules, normalize_inference_cuda_graph_scope, and validate_deprecated_cuda_graph_modules_migration_inputs are shared between TransformerConfig.__post_init__ and validate_args, removing duplication and making the migration logic testable in isolation.
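
Concretely, a minimal before/after sketch in Python (enum and field names are taken from this PR's description; the module paths and the minimal TransformerConfig arguments are assumptions, not verified against the final diff):

    # Old (deprecated, still accepted and migrated at startup):
    #   TransformerConfig(cuda_graph_scope=[...])  with CudaGraphScope values
    from megatron.core.transformer.enums import CudaGraphModule  # CudaGraphScope remains as a deprecated alias
    from megatron.core.transformer.transformer_config import TransformerConfig

    config = TransformerConfig(
        num_layers=2,
        hidden_size=64,
        num_attention_heads=4,
        # Per-layer capture regions only; iteration-level and inference concerns
        # now live in cuda_graph_impl / inference_cuda_graph_scope instead.
        cuda_graph_modules=[CudaGraphModule.attn, CudaGraphModule.mlp],
    )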

Backward compatibility

All deprecated inputs are still accepted and silently migrated at startup:

Old input → New equivalent

  • --enable-cuda-graph → --cuda-graph-impl local
  • --external-cuda-graph → --cuda-graph-impl transformer_engine
  • --cuda-graph-scope full_iteration → --cuda-graph-impl full_iteration
  • --cuda-graph-scope full_iteration_inference → --cuda-graph-impl local --inference-cuda-graph-scope block
  • --cuda-graph-scope attn mlp → --cuda-graph-modules attn mlp
  • from megatron.core.transformer.enums import CudaGraphScope → CudaGraphScope alias kept; use CudaGraphModule
  • TransformerConfig(cuda_graph_scope=...) → cuda_graph_scope field kept (deprecated); use cuda_graph_modules

Conflicting combinations (e.g. passing both a deprecated value and the new flag with an incompatible value) are rejected with a clear assertion.
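
For illustration, a minimal sketch of such a rejection (the real check is validate_deprecated_cuda_graph_modules_migration_inputs in cuda_graph_config.py; the helper name below, its signature, and the message are hypothetical):

    def check_migration_conflict(deprecated_scope, cuda_graph_impl):
        # Hypothetical stand-in: the deprecated full_iteration scope implies the
        # full_iteration impl, so an explicit, different impl is ambiguous and is
        # rejected rather than silently overridden.
        if deprecated_scope == "full_iteration" and cuda_graph_impl is not None:
            assert cuda_graph_impl == "full_iteration", (
                "--cuda-graph-scope full_iteration conflicts with "
                f"--cuda-graph-impl {cuda_graph_impl}"
            )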

⚠️ For major changes (either in lines of code or in its impact), please make sure to first share a design doc with the team. If you're unsure what's the best way to do so, contact the @mcore-oncall.

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message or comment @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge-conflicts are resolved and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch: the proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

@copy-pr-bot

copy-pr-bot Bot commented Apr 14, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@buptzyb buptzyb force-pushed the main-strict-refactor branch 2 times, most recently from f36afad to 0f07b30 on April 23, 2026 14:55
@buptzyb buptzyb self-assigned this Apr 23, 2026
@buptzyb buptzyb changed the title Refactor CUDA graph configuration: separate inference granularity from training scope Refactor CUDA graph configuration: rename cuda_graph_scope to cuda_graph_modules and add backward compat Apr 23, 2026
@buptzyb buptzyb changed the title Refactor CUDA graph configuration: rename cuda_graph_scope to cuda_graph_modules and add backward compat Refactor CUDA graph API: decompose cuda_graph_scope into full_iteration impl, inference scope, and per-layer capture modules Apr 23, 2026
@buptzyb buptzyb marked this pull request as ready for review April 23, 2026 15:11
@buptzyb buptzyb requested review from a team as code owners April 23, 2026 15:11
@svcnvidia-nemo-ci svcnvidia-nemo-ci requested a review from a team April 23, 2026 15:11
@buptzyb buptzyb force-pushed the main-strict-refactor branch from 0f07b30 to 113c46a on April 23, 2026 15:18
@buptzyb buptzyb force-pushed the main-strict-refactor branch 5 times, most recently from 614115f to 9461d5a on April 26, 2026 11:02
@buptzyb
Contributor Author

buptzyb commented Apr 29, 2026

/ok to test 542a36d

@buptzyb buptzyb added the Expert Review [deprecated] label on Apr 29, 2026
@mathemakitten
Contributor

Can we please make sure that the existing transition_cudagraph_scope function which is used to transition between full-layer and partial cudagraphs reflects these changes?

@buptzyb
Contributor Author

buptzyb commented Apr 30, 2026

Can we please make sure that the existing transition_cudagraph_scope function which is used to transition between full-layer and partial cudagraphs reflects these changes?

@mathemakitten I looked at transition_cudagraph_scope, and if I understand it correctly, nothing needs to change inside the function. The existing transition_cudagraph_scope helper does not use the old cuda_graph_scope API to select graph modules; it only switches the MoE layer runtime state between partial and full modes. The actual API transition to cuda_graph_modules is already handled in create_mcore_cudagraph_manager. Right?

@mathemakitten
Contributor

Ah you're right! Thanks for checking.

Contributor

@mathemakitten mathemakitten left a comment


Thank you for the thorough refactor! I've tried my best to catch the outstanding inconsistencies in the docs, mostly related to inference.

Resolved comment threads (all outdated):
  • docs/user-guide/features/cuda_graph.md (5 threads)
  • megatron/core/transformer/enums.py
  • megatron/core/transformer/transformer_config.py (2 threads)

Comment thread: megatron/inference/utils.py (outdated)
    num_cuda_graphs=(
        args.inference_dynamic_batching_num_cuda_graphs
-       if args.cuda_graph_impl == "local"
+       if args.cuda_graph_impl in ("local", "full_iteration")
Contributor


I don't think we need to keep the full_iteration impl + dynamic inference path here? We only want to set up inference graphs if inference_cuda_graph_scope is used, right?

Contributor Author


Okay, though this way the old and new behavior won't align on the full_iteration path. I'm fine with it if you think this is safe.

I modified it to `if args.inference_cuda_graph_scope != InferenceCudaGraphScope.none` here, and also in DynamicInferenceEngine.create_cuda_graphs().
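
For reference, a sketch of the resulting selection (names are taken from this thread; the else branch and the import path are assumptions):

    from megatron.core.transformer.enums import InferenceCudaGraphScope  # path assumed

    num_cuda_graphs = (
        args.inference_dynamic_batching_num_cuda_graphs
        if args.inference_cuda_graph_scope != InferenceCudaGraphScope.none
        else None
    )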

"\n\n*** WARNING: 'full_iteration' CUDA graph scope used during inference! "
"This will not create inference CUDA graphs. Use '--cuda-graph-scope=full_iteration_inference' instead. ***\n"
"\n\n*** WARNING: '--cuda-graph-impl=full_iteration' used during inference! "
"For compatibility, this preserves the legacy '--cuda-graph-modules=full_iteration' "
Contributor


Suggested change:
-    "For compatibility, this preserves the legacy '--cuda-graph-modules=full_iteration' "
+    "For compatibility, this preserves the legacy '--cuda-graph-scope=full_iteration' "

Contributor Author


Following the idea above, I now check `inference_cuda_graph_scope != InferenceCudaGraphScope.none and cuda_graph_impl == "local"` directly here, so this warning message is no longer needed.

@buptzyb
Contributor Author

buptzyb commented May 12, 2026

/ok to test 86f86b0

}


class NormalizedCudaGraphModules(NamedTuple):
Contributor


Why do we need a class here? This is only ever returned by normalize_cuda_graph_modules, right? Do we actually take advantage of the NamedTuple functionality?
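
(For context, a sketch of the shape under discussion, inferred from the call site `return NormalizedCudaGraphModules(normalized_scopes, deprecated_scopes, False)` quoted in the next thread; the field names below are assumptions.)

    from typing import List, NamedTuple

    class NormalizedCudaGraphModules(NamedTuple):
        modules: List[str]            # normalized per-layer capture modules
        deprecated_inputs: List[str]  # deprecated scope values found in the input
        used_full_iteration: bool     # whether a legacy full_iteration value was seen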

)


def get_deprecated_cuda_graph_modules_migration(
Contributor


Since this function is only called once and is so short, does it really need a whole function?

return NormalizedCudaGraphModules(normalized_scopes, deprecated_scopes, False)


def normalize_inference_cuda_graph_scope(
Contributor


Same comment here: why does this need to be put into a function? Does carving this out into a function really improve readability?


Labels

complexity: high · Expert Review [deprecated] · Run functional tests

Projects

None yet

Development

Successfully merging this pull request may close these issues.

4 participants