Add option to skip first token during logits comparison #2953

Merged
copybara-service[bot] merged 1 commit into main from agagik-logits on Feb 2, 2026
Conversation

@gagika gagika (Collaborator) commented Jan 15, 2026

Description

Adds a --skip_first_token flag to tests/forward_pass_logit_checker.py. When set, the first token is ignored during logit comparison. This prevents comparison failures caused by high entropy at the first token, which is sensitive to numerical accuracy, especially for MoE models with many experts.
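For intuition, here is a minimal sketch of what skipping the first position during comparison could look like. The function name, array shapes, and tolerance are illustrative assumptions, not the script's actual code:

```python
# Illustrative sketch only; the real logic lives in
# tests/forward_pass_logit_checker.py and may differ.
import numpy as np

def logits_match(golden, actual, skip_first_token=False, atol=0.1):
    """Compare per-position logits of shape [seq_len, vocab_size]."""
    if skip_first_token:
        # Drop position 0, where entropy (and hence sensitivity to
        # numerics, e.g. MoE expert routing) is highest.
        golden, actual = golden[1:], actual[1:]
    return bool(np.max(np.abs(golden - actual)) <= atol)
```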

Tests

Verified locally on CPU. With the new flag, the script ignored the initial token mismatch and passed the test criteria on the subsequent tokens.
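For reference, a hypothetical invocation; only --skip_first_token is confirmed by this PR, and the remaining arguments and exact flag syntax are assumptions about the script's existing interface:

```shell
# Hypothetical; everything except --skip_first_token is an assumption.
python3 tests/forward_pass_logit_checker.py MaxText/configs/base.yml \
  run_name=logit_check --skip_first_token
```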

Checklist

Before submitting this PR, please make sure (put X in square brackets):

  • I have performed a self-review of my code. For an optional AI review, add the gemini-review label.
  • I have necessary comments in my code, particularly in hard-to-understand areas.
  • I have run end-to-end tests and provided workload links above if applicable.
  • I have made or will make corresponding changes to the doc if needed, including adding new documentation pages to the relevant Table of Contents (toctree directive) as explained in our documentation.

@codecov codecov bot commented Jan 15, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.


@RissyRan RissyRan (Collaborator) commented

Wondering: if the 1st token doesn't match, will this affect the following tokens?

@gagika gagika (Collaborator, Author) commented Jan 15, 2026

> Wondering: if the 1st token doesn't match, will this affect the following tokens?

At the first token, entropy is quite high (many plausible options), so large (and MoE) models can be sensitive to numerical accuracy issues (e.g. routing to a different expert).

The 2nd, 3rd, and subsequent tokens are all conditioned on the previous tokens (including the first) and have lower entropy, so they are less sensitive to numerical issues.
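To make this concrete, here is an illustrative way (my own, not from this PR) to compute the per-position entropy of a model's predicted distribution; numpy and the array shape are assumptions:

```python
# Illustrative only: entropy of the next-token distribution at each position.
# High entropy at position 0 means many near-tied candidates, so tiny numeric
# differences can reorder the top logits there.
import numpy as np

def per_position_entropy(logits):
    """logits: [seq_len, vocab_size]; returns entropy in nats per position."""
    shifted = logits - logits.max(axis=-1, keepdims=True)  # stable softmax
    probs = np.exp(shifted)
    probs /= probs.sum(axis=-1, keepdims=True)
    return -(probs * np.log(probs + 1e-12)).sum(axis=-1)
```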

@shuningjin shuningjin (Collaborator) left a comment


Thanks for identifying the noise in the initial token prediction and adding the option to skip! LGTM.

@RissyRan RissyRan (Collaborator) commented

> Wondering: if the 1st token doesn't match, will this affect the following tokens?

> At the first token, entropy is quite high (many plausible options), so large (and MoE) models can be sensitive to numerical accuracy issues (e.g. routing to a different expert).
>
> The 2nd, 3rd, and subsequent tokens are all conditioned on the previous tokens (including the first) and have lower entropy, so they are less sensitive to numerical issues.

Thanks for the explanation! Perhaps you have some data points for the Kimi large model from logits verification. My mental model is: the first token is the most volatile due to high entropy. However, because each token depends on the previous ones, that initial "tweak" usually causes the 2nd and 3rd tokens to diverge further from the original path, not settle back down. Right?

@RissyRan RissyRan (Collaborator) commented

Discussed offline with @gagika! It makes more sense now. This could be an issue for large MoE models (like Kimi-K2), potentially related to numerics, even when the token IDs and weights are fixed.
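A toy illustration (my own, not from the PR) of why fixed weights and inputs can still yield different logits in an MoE model: when router scores are nearly tied, float-level jitter can flip which expert is selected:

```python
import numpy as np

# Two experts with nearly tied router scores: a ~1e-6 numeric difference
# (e.g. from a different reduction order) flips the top-1 expert choice.
router_scores = np.array([2.000001, 2.000000, 1.5])
jittered = router_scores + np.array([0.0, 2e-6, 0.0])

print(int(np.argmax(router_scores)))  # expert 0
print(int(np.argmax(jittered)))       # expert 1
```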

@copybara-service copybara-service[bot] merged commit 8cc3ba7 into main on Feb 2, 2026. 49 checks passed.
@copybara-service copybara-service[bot] deleted the agagik-logits branch on February 2, 2026 at 18:07.