Agents/Usage: estimate Ollama usage for the dashboard #39112
TylonHH wants to merge 4 commits into openclaw:main
Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: ee425487bd
Greptile Summary: This PR adds a characters-based token-count estimate for native Ollama responses that omit `prompt_eval_count`/`eval_count`.
Confidence Score: 4/5
Last reviewed commit: ee42548
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 74142cb653
This pull request has been automatically marked as stale due to inactivity.
Codex automated review: keeping this open.

Keep PR #39112 open. Current main still zero-fills omitted native Ollama usage counters, the dashboard/session usage path consumes persisted transcript usage rather than estimating later, and the PR's underlying fix remains useful even though its patch targets the old, since-removed code path. Best possible solution: keep this PR open, or replace it with an equivalent fix against current main.

What I checked:
Remaining risk / open question:
Codex Review notes: model gpt-5.5, reasoning high; reviewed against 406ae72fd278.
Thanks for this. I ported the fix to the current Ollama plugin runtime on main. The original patch targeted code that has since been removed.

Verification:

Closing this PR as carried forward into current main.
Summary
- Backfill a characters-based token estimate when a native Ollama `/api/chat` chunk omits `prompt_eval_count`/`eval_count`.

Why
The existing usage dashboard already reads assistant `message.usage` from session transcripts. Native Ollama sometimes omits its eval counters, which leaves local runs at zero usage and makes them disappear from the current dashboard. This change keeps the existing dashboard path intact and backfills a chars-based estimate only when Ollama does not return token counts.
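The backfill described above can be sketched roughly as below. This is an illustrative sketch, not the PR's actual code: the names `OllamaChatChunk`, `Usage`, `estimateTokensFromChars`, and `usageFromChunk` are hypothetical, and the ~4-characters-per-token ratio is a common rule of thumb rather than anything the PR specifies.

```typescript
// Shape of a native Ollama /api/chat chunk, reduced to the fields used here.
interface OllamaChatChunk {
  message?: { content?: string };
  prompt_eval_count?: number; // native prompt-token count (sometimes omitted)
  eval_count?: number;        // native completion-token count (sometimes omitted)
}

interface Usage {
  inputTokens: number;
  outputTokens: number;
  estimated: boolean; // lets the dashboard distinguish estimates from real counts
}

// Rough heuristic: roughly 4 characters per token for English-like text.
function estimateTokensFromChars(text: string): number {
  return Math.ceil(text.length / 4);
}

function usageFromChunk(chunk: OllamaChatChunk, promptText: string): Usage {
  // Prefer Ollama's native counters whenever they are present,
  // so well-formed responses keep their exact token counts.
  if (chunk.prompt_eval_count !== undefined && chunk.eval_count !== undefined) {
    return {
      inputTokens: chunk.prompt_eval_count,
      outputTokens: chunk.eval_count,
      estimated: false,
    };
  }
  // Backfill a chars-based estimate only when the counters are missing,
  // leaving the existing dashboard path untouched otherwise.
  return {
    inputTokens: estimateTokensFromChars(promptText),
    outputTokens: estimateTokensFromChars(chunk.message?.content ?? ""),
    estimated: true,
  };
}
```

The key design point is that the estimate never overrides native counters; it only fills the zero-usage gap that currently hides local runs from the dashboard.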
Scope

Testing
`pnpm test -- src/agents/ollama-stream.test.ts`

Refs #38726