Track errors through the inference return path#3776

Merged
tdene merged 10 commits into NVIDIA:main from tdene:tde/track_errors
Mar 18, 2026
Conversation


@tdene tdene commented Mar 10, 2026

What does this PR do ?

⚠️ For major changes (either in lines of code or in its impact), please make sure to first share a design doc with the team. If you're unsure what's the best way to do so, contact the @mcore-oncall.

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code Typing guidelines
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message @mcore-oncall or tag them in a comment to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge-conflicts are resolved and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch: the proposed review process is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

@copy-pr-bot

copy-pr-bot bot commented Mar 10, 2026

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

Contributors can view more details about this message here.

@tdene tdene marked this pull request as ready for review March 10, 2026 16:33
@tdene tdene requested review from a team as code owners March 10, 2026 16:33
@svcnvidia-nemo-ci svcnvidia-nemo-ci requested a review from a team March 10, 2026 16:33
@svcnvidia-nemo-ci svcnvidia-nemo-ci added this to the Core 0.16 milestone Mar 10, 2026
@tdene tdene added the Expert Review [deprecated] Apply this label to indicate that your PR is ready for expert review. label Mar 10, 2026
@Phlip79 Phlip79 removed the Expert Review [deprecated] Apply this label to indicate that your PR is ready for expert review. label Mar 10, 2026
@tdene tdene requested a review from a team as a code owner March 10, 2026 19:43
@tdene tdene force-pushed the tde/track_errors branch from dfb2674 to 3b0e2ac on March 10, 2026 20:38
@tdene tdene added the Expert Review [deprecated] Apply this label to indicate that your PR is ready for expert review. label Mar 10, 2026
@tdene tdene removed the request for review from a team March 10, 2026 20:38
@tdene tdene added Expert Review [deprecated] Apply this label to indicate that your PR is ready for expert review. and removed Expert Review [deprecated] Apply this label to indicate that your PR is ready for expert review. labels Mar 11, 2026
Comment on lines +786 to +787
entry = self.requests[request_id]
request = entry.record[-1]
Contributor

Should this be

request = self.requests[request_id]
entry = request.record[-1]

Contributor Author

I understand what you mean.

But unfortunately, self.requests is a misnomer. self.requests is a Dict[int, RequestEntry], where each RequestEntry contains a DynamicInferenceRequestRecord, and each DynamicInferenceRequestRecord contains a list[DynamicInferenceRequest].

So if anything, we should be changing the name of self.requests to self.request_entries.
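
The nesting described above can be sketched as follows. This is an illustrative reconstruction, not the PR's code: `RequestEntry` and `DynamicInferenceRequest` are named in the thread, but their fields here are assumptions.

```python
# Sketch (assumed fields) of the container layout described in the thread:
# self.requests maps request_id -> RequestEntry; each entry's .record is a
# list of DynamicInferenceRequest snapshots, newest last.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DynamicInferenceRequest:
    request_id: int
    text: str = ""

@dataclass
class RequestEntry:
    record: List[DynamicInferenceRequest] = field(default_factory=list)

requests: Dict[int, RequestEntry] = {
    7: RequestEntry(record=[DynamicInferenceRequest(7, "draft"),
                            DynamicInferenceRequest(7, "latest")])
}

request_entry = requests[7]          # a RequestEntry, not a request itself
request = request_entry.record[-1]   # the most recent DynamicInferenceRequest
print(request.text)                  # -> latest
```

Under this layout, the original `entry = self.requests[request_id]` lookup is correct; only the variable naming was misleading.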

Contributor

I see, can you make it this then?

request_entry = self.requests[request_id]
request = request_entry.record[-1]

Contributor Author

Addressed!

@svcnvidia-nemo-ci svcnvidia-nemo-ci added the Final Review PR is in the "final review" stage label Mar 11, 2026
@tdene tdene removed the Expert Review [deprecated] Apply this label to indicate that your PR is ready for expert review. label Mar 11, 2026

tdene commented Mar 11, 2026

/claude test


tdene commented Mar 12, 2026

/claude review

if failed_errors:
    error_detail = "; ".join(failed_errors)
    logger.error(f"Inference request(s) failed: {error_detail}")
    return Response(f"Inference request(s) failed: {error_detail}", status=400)
Contributor


Bug: HTTP 400 indicates a client error ("bad request"), but inference failures can also be server-side (e.g. ERROR_TRANSIENT). Returning 400 for transient/server errors is misleading to clients — they may not retry when they should. Consider using 500 (or 503 for transient errors) instead, or at minimum differentiating based on the event type since you already distinguish ERROR_TRANSIENT vs ERROR_NONTRANSIENT.
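
One possible mapping along these lines, purely as a sketch: `ERROR_TRANSIENT` and `ERROR_NONTRANSIENT` are the event types mentioned in the review comment, but the helper function below is illustrative, not the PR's actual code.

```python
# Illustrative only: map the error classes mentioned in the review to HTTP
# statuses. The constants echo event-type names from the thread; the
# function and its return shape are assumptions, not the PR's implementation.
TRANSIENT = "ERROR_TRANSIENT"
NONTRANSIENT = "ERROR_NONTRANSIENT"

def status_for(event_type: str) -> int:
    if event_type == TRANSIENT:
        return 503   # server-side and retryable: signal clients to retry
    if event_type == NONTRANSIENT:
        return 500   # server-side, not retryable
    return 400       # genuine client error ("bad request")

print(status_for(TRANSIENT), status_for(NONTRANSIENT), status_for("BAD_INPUT"))
# -> 503 500 400
```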

Contributor Author


Hi Claude, thank you for your feedback. I have addressed this concern!

if failed_errors:
    error_detail = "; ".join(failed_errors)
    logger.error(f"Inference request(s) failed: {error_detail}")
    return f"Inference request(s) failed: {error_detail}", 400
Contributor


Same HTTP 400 concern as in chat_completions.py — transient/server-side errors should not be reported as 400.

Also minor inconsistency: chat_completions.py returns Response(msg, status=400) while this file returns a tuple (msg, 400). Both work in Flask/Quart, but it would be cleaner to use the same style (the existing exception handler on line 120 already uses the tuple form, so the tuple is fine here — just noting the cross-file inconsistency).
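
The equivalence of the two return styles can be sketched without pulling in Flask itself. The `Response` class and `normalize` function below are stand-ins that mimic how such frameworks coerce view return values; they are not Flask's actual implementation.

```python
# Sketch of how a framework can accept both return styles noted above:
# a Response-like object, or a (body, status) tuple. Stand-in classes only.
class Response:
    def __init__(self, body, status=200):
        self.body, self.status = body, status

def normalize(rv):
    if isinstance(rv, Response):
        return rv
    if isinstance(rv, tuple):          # the (body, status) form
        body, status = rv
        return Response(body, status)
    return Response(rv)                # bare body defaults to 200

r1 = normalize(Response("Inference request(s) failed: x", status=400))
r2 = normalize(("Inference request(s) failed: x", 400))
print(r1.status, r2.status)            # -> 400 400
```

Since both forms normalize to the same result, the choice between them is purely a matter of cross-file consistency, as the comment notes.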

Contributor Author


See other thread.


tdene commented Mar 16, 2026

/claude review

@svcnvidia-nemo-ci svcnvidia-nemo-ci added Approved All necessary approvals have been made and removed Final Review PR is in the "final review" stage labels Mar 18, 2026
@tdene tdene added this pull request to the merge queue Mar 18, 2026
@svcnvidia-nemo-ci

🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/23253541987

Merged via the queue into NVIDIA:main with commit ee00a70 Mar 18, 2026
56 of 64 checks passed
@tdene tdene deleted the tde/track_errors branch March 18, 2026 17:46
ko3n1g added a commit to ko3n1g/Megatron-LM that referenced this pull request Mar 19, 2026

7 participants