Fix: Token usage display for Open WebUI and other OpenAI-compatible proxies #9516
roomote-v0[bot] wants to merge 1 commit into main
Conversation
- Add support for alternative field names used by various OpenAI-compatible proxies
- Check for `usage_metadata` field in addition to `usage` field
- Try multiple token field variations (`prompt_tokens`, `input_tokens`, `promptTokens`, etc.)
- Add debug logging when tokens are present but not extracted correctly
- Fixes issue where Open WebUI proxy responses show 0 tokens despite having valid usage data

Fixes #9514
Review complete. Found 2 issues that should be addressed:
} else if ((chunk as any).usage_metadata) {
	// Some proxies use usage_metadata instead of usage
	lastUsage = (chunk as any).usage_metadata
}
The usage_metadata fallback assumes this field has the same structure as the standard usage field, but there's no evidence from the issue description that Open WebUI uses this field name. The issue shows Open WebUI returns standard usage.prompt_tokens and usage.completion_tokens fields. If a proxy does use usage_metadata with a different structure, processUsageMetrics will still fail to extract tokens since it only checks for standard field names. This fallback should either be removed or documented with evidence of which proxies use it and what structure they return.
@@ -218,15 +223,34 @@ export abstract class BaseOpenAiCompatibleProvider<ModelName extends string>
	}

	protected processUsageMetrics(usage: any, modelInfo?: any): ApiStreamUsageChunk {
The PR adds support for alternative field names but doesn't include tests to verify this functionality works. Tests should cover: (1) extracting tokens from input_tokens/output_tokens field names, (2) extracting tokens from camelCase field names like promptTokens/completionTokens, (3) extracting cache tokens from alternative field locations, and (4) the debug logging when tokens aren't extracted despite usage data being present. Without tests, it's difficult to verify the fix actually resolves the Open WebUI issue and won't regress in the future.
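A minimal, framework-agnostic sketch of the first two test cases the review asks for, written against a stand-in extractor (the actual `BaseOpenAiCompatibleProvider.processUsageMetrics` and the repo's test setup are not shown in this excerpt, so names here are illustrative):

```typescript
// Stand-in extractor mirroring the fallback order described in this PR;
// the real logic lives in BaseOpenAiCompatibleProvider and may differ in detail.
function pickToken(usage: Record<string, unknown>, keys: string[]): number {
	for (const key of keys) {
		const value = usage[key]
		if (typeof value === "number") return value
	}
	return 0
}

const INPUT_KEYS = ["prompt_tokens", "input_tokens", "promptTokens"]
const OUTPUT_KEYS = ["completion_tokens", "output_tokens", "completionTokens"]

// Case 1: snake_case input_tokens/output_tokens variants.
console.assert(pickToken({ input_tokens: 10, output_tokens: 4 }, INPUT_KEYS) === 10)
// Case 2: camelCase variants sometimes emitted by JS-based proxies.
console.assert(pickToken({ promptTokens: 8, completionTokens: 2 }, OUTPUT_KEYS) === 2)
// No recognized field at all should fall back to 0, not throw.
console.assert(pickToken({}, INPUT_KEYS) === 0)
```

Cases (3) and (4) from the review — cache-token fields and the debug-log path — would need the provider's real method and a spy on its logger, so they are left out of this sketch.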
Summary
This PR fixes issue #9514 where the context length indicator always displays 0 tokens when using Open WebUI as an OpenAI-compatible proxy.
Problem
When connecting the Roo Code extension to an OpenAI-compatible API through Open WebUI, the token usage always shows 0 despite the API responses correctly including `completion_tokens`, `prompt_tokens`, and `total_tokens` fields.

Solution
Enhanced the `BaseOpenAiCompatibleProvider` class to:

- Check for the `usage_metadata` field in addition to `usage`
- Try multiple input-token field names: `prompt_tokens` / `input_tokens` / `promptTokens`
- Try multiple output-token field names: `completion_tokens` / `output_tokens` / `completionTokens`

Testing
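The fallback chain described above can be sketched as follows. This is a hypothetical illustration, not the PR's exact code: field names match the description, but `normalizeUsage` and its return shape are assumptions.

```typescript
interface NormalizedUsage {
	inputTokens: number
	outputTokens: number
}

// Prefer the standard `usage` field, fall back to `usage_metadata`,
// then try each known field-name variant in order.
function normalizeUsage(chunk: any): NormalizedUsage {
	const usage = chunk?.usage ?? chunk?.usage_metadata ?? {}
	const pick = (keys: string[]): number => {
		for (const key of keys) {
			if (typeof usage[key] === "number") return usage[key]
		}
		return 0
	}
	return {
		inputTokens: pick(["prompt_tokens", "input_tokens", "promptTokens"]),
		outputTokens: pick(["completion_tokens", "output_tokens", "completionTokens"]),
	}
}

// An Open WebUI-style chunk with standard snake_case fields:
const result = normalizeUsage({ usage: { prompt_tokens: 120, completion_tokens: 45, total_tokens: 165 } })
```

Because each candidate key is checked in order, proxies that emit only the standard OpenAI fields take the first branch and behave exactly as before.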
Related Issue
Fixes #9514
Important
Fixes token usage display in `BaseOpenAiCompatibleProvider` for Open WebUI by supporting alternative field names and adding debug logging.

- Fixes token usage display in `BaseOpenAiCompatibleProvider` for Open WebUI and other proxies.
- Checks the `usage_metadata` field as an alternative to `usage` for token data.
- Tries multiple token field names: `prompt_tokens`, `input_tokens`, `promptTokens`, `completion_tokens`, `output_tokens`, `completionTokens`.

This description was created for 42b730a and will automatically update as commits are pushed.