Token Count Support for the Completions and Experiment #24

jacekzimonski wants to merge 6 commits into main from
Conversation
@anyacherniss
Claude finished @jacekzimonski's task — View job

📋 Code Review: Token Count Support for Completions and Experiment

✅ Overall Assessment: Excellent Implementation

This PR successfully implements reasoning token count tracking with high-quality code, comprehensive testing, and proper database design. The implementation is clean, well-documented, and follows established patterns throughout the codebase.

🔧 Backend Implementation - Excellent ⭐⭐⭐⭐⭐

Domain Model Changes (
@claude fixed all the Minor Suggestions, please take a look and confirm
Claude finished @jacekzimonski's task — View job

✅ Minor Suggestions Review Complete

All previously identified minor suggestions have been successfully addressed.

✅ CONFIRMED: All Minor Suggestions Addressed

1. Database Index Added ✅
2. Unit Tests Added ✅
3. Documentation Added ✅
4. UI Implementation Verified ✅
Quality Assessment

The fixes demonstrate excellent attention to detail:
No remaining issues found. The implementation is production-ready and follows best practices throughout.
anyacherniss left a comment

This passes QA for me, but will not merge until Guillaume reviews.
@anyacherniss
The backend has changed a lot since this change was proposed, so a different implementation will be needed if we move forward with this functionality. Because of that, I will be closing the PR.
@anyacherniss
Request from Slack:

> do we show the reasoning tokens COUNT in another AI? For models where the reasoning text isn't available, it would help to show the reasoning tokens. ([@jacek.zimonski](https://workflowaihq.slack.com/team/U05SZFUQU75))

To get those values it was necessary to:
It would work without the SQL (Clerk), but we would lose the display for the Experiment page and have it only in the CompletionModal. The reason is that, when returning the experiment, explicitly calculating the traces information (where we store the token costs) was blocked.
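As a rough sketch of the lookup described above (all type and field names here are hypothetical, not taken from this PR), the reasoning token count can be read from a completion's usage data with a fallback of 0 when the model exposes no reasoning:

```typescript
// Hypothetical shapes -- the PR's actual domain model is not shown here.
type Usage = {
  promptTokens: number;
  completionTokens: number;
  reasoningTokens?: number; // absent for models without reasoning support
};

type Completion = { id: string; usage?: Usage };

// Return the reasoning token count for display, or 0 when unavailable,
// so the UI can simply hide the field for non-reasoning models.
function reasoningTokenCount(completion: Completion): number {
  return completion.usage?.reasoningTokens ?? 0;
}

const withReasoning: Completion = {
  id: "c1",
  usage: { promptTokens: 120, completionTokens: 40, reasoningTokens: 512 },
};
const withoutReasoning: Completion = { id: "c2" };

console.log(reasoningTokenCount(withReasoning)); // 512
console.log(reasoningTokenCount(withoutReasoning)); // 0
```

Defaulting to 0 rather than `undefined` keeps the frontend check simple: one numeric comparison decides whether the row is rendered.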
In the frontend we show the Token Count for the Reasoning Models if it's more than 0 in:
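A minimal sketch of that display rule (the helper name and label format are assumptions, not taken from the PR): the row renders only when the count is greater than 0.

```typescript
// Hypothetical formatter for the CompletionModal / Experiment page row:
// returns null when the count is 0 or negative, so the row is hidden
// for models that report no reasoning tokens.
function reasoningTokensLabel(count: number): string | null {
  if (count <= 0) return null;
  return `Reasoning tokens: ${count.toLocaleString("en-US")}`;
}

console.log(reasoningTokensLabel(1536)); // "Reasoning tokens: 1,536"
console.log(reasoningTokensLabel(0)); // null
```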