
Conversation

@alugowski (Contributor) commented Mar 24, 2025

The Prometheus spec decode counters (draft/accepted/emitted token counts) are incremented by the values in spec_decode_metrics. However, those values are aggregates since startup. Therefore, the Prometheus counters are effectively a sum-of-sums instead of just a sum.

If a high-traffic vLLM instance is left running for a few hours, those counters start to suggest absurdly high values, like a TPS in the tens of millions.

The values in spec_decode_metrics are used by other stat reporters, such as the command-line logger, so they cannot be converted to store only deltas. Instead, this PR modifies PrometheusStatLogger to compute the deltas itself and increment the Prometheus Counter metrics correctly.
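For illustration, a minimal sketch of that delta approach. SpecDecodeTotals and DeltaTracker are placeholder names for this sketch, not the actual vLLM classes; only the idea of keeping the previous cumulative totals and incrementing counters by the difference comes from the PR description above.

# Sketch only: spec decode totals arrive as cumulative values each logging
# interval; Prometheus Counters should be incremented by the per-interval delta.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpecDecodeTotals:  # stand-in for the cumulative spec decode metrics
    draft_tokens: int
    accepted_tokens: int
    emitted_tokens: int

class DeltaTracker:  # stand-in for the state kept by the Prometheus logger
    def __init__(self) -> None:
        self._last: Optional[SpecDecodeTotals] = None

    def deltas(self, current: SpecDecodeTotals) -> SpecDecodeTotals:
        prev = self._last or SpecDecodeTotals(0, 0, 0)
        self._last = current
        # These per-interval deltas are what get passed to Counter.inc(),
        # instead of the cumulative totals themselves.
        return SpecDecodeTotals(
            current.draft_tokens - prev.draft_tokens,
            current.accepted_tokens - prev.accepted_tokens,
            current.emitted_tokens - prev.emitted_tokens,
        )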

@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀


Signed-off-by: Adam Lugowski <adam.lugowski@parasail.io>
@markmc (Member) left a comment

Good catch. I'm personally focused on V1, so I'm not super familiar with V0, but it does appear from the code that these values are cumulative.

I've made some pretty minor suggestions for tweaking your change.

Thanks!

self.last_local_log = time.time()
self.local_interval = local_interval
self.spec_decode_metrics: Optional[SpecDecodeWorkerMetrics] = None
self.last_spec_decode_metrics: Optional[SpecDecodeWorkerMetrics] = None

This state could be on PrometheusStatLogger since it's not used by LoggingStatLogger.
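For example, a sketch of that placement. The base class name StatLoggerBase and the simplified signatures are assumptions for illustration, not the actual vLLM code; only the attribute names come from the diff above.

import time

class StatLoggerBase:  # assumed name of the shared base class in this sketch
    def __init__(self, local_interval: float) -> None:
        self.last_local_log = time.time()
        self.local_interval = local_interval
        self.spec_decode_metrics = None  # cumulative totals, set each interval

class PrometheusStatLogger(StatLoggerBase):
    def __init__(self, local_interval: float) -> None:
        super().__init__(local_interval)
        # The previous cumulative totals live here rather than on the base
        # class, since only this logger needs them to compute Counter deltas.
        self.last_spec_decode_metrics = None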

if self.spec_decode_metrics is not None:
    # The counters in self.spec_decode_metrics are aggregates.
    # The Prometheus Counters must be incremented with deltas.
    # Keep track of the previous value so we can compute deltas.

I'd be inclined to do something like this:

def _log_counter_from_cumulative(self, metric, cumulative, previous):
    self._log_counter(metric, cumulative - previous)

The method name helps the code to be self-documenting, requiring fewer comments.

and then:

    self._log_counter_from_cumulative(
        self.metrics.counter_spec_decode_num_accepted_tokens,
        self.spec_decode_metrics.accepted_tokens,
        self.spec_decode_prev_num_accepted)
    self.spec_decode_prev_num_accepted = self.spec_decode_metrics.accepted_tokens

since we don't need all of SpecDecodeWorkerMetrics.

@github-actions github-actions bot commented Aug 8, 2025

This pull request has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this pull request should remain open. Thank you!

@github-actions github-actions bot added the stale (Over 90 days of inactivity) label on Aug 8, 2025
@mergify mergify bot commented Aug 8, 2025

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @alugowski.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label on Aug 8, 2025
@github-actions github-actions bot added the unstale (Received activity after being labelled stale) label and removed the stale (Over 90 days of inactivity) label on Aug 10, 2025
@github-actions github-actions bot commented Nov 9, 2025

This pull request has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this pull request should remain open. Thank you!

@github-actions github-actions bot added the stale (Over 90 days of inactivity) label and removed the unstale (Received activity after being labelled stale) label on Nov 9, 2025
@markmc (Member) commented Nov 17, 2025

The affected v0 code has since been removed.

@markmc markmc closed this Nov 17, 2025
