[Docs] Fix AI model response handling in server example #8459
Conversation
Walkthrough: An AI function call response structure has been modified to return an object containing both the answer and the token usage.

Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
Codecov Report

✅ All modified and coverable lines are covered by tests.

@@ Coverage Diff @@
## main #8459 +/- ##
=======================================
Coverage 54.82% 54.82%
=======================================
Files 919 919
Lines 60875 60875
Branches 4141 4141
=======================================
Hits 33374 33374
Misses 27399 27399
Partials 102 102
PR-Codex overview
This PR updates the `callExpensiveAIModel` function to return an `answer` along with `tokensUsed`, and modifies the response returned in the `settleResult` to include this `answer` instead of the previous result.

Detailed summary

- Changed `const { tokensUsed }` to `const { answer, tokensUsed }` in the `callExpensiveAIModel` call.
- Returns `answer` instead of `result` in the response.
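The fix described above can be sketched as follows. This is a hypothetical reconstruction, not the repository's actual server example: the names `callExpensiveAIModel` and the destructured `answer`/`tokensUsed` fields come from the PR summary, while `handleRequest`, the `AIModelResult` shape, and the stub implementation are assumptions made for illustration.

```typescript
// Hypothetical sketch of the documented fix; only the names
// callExpensiveAIModel, answer, and tokensUsed come from the PR summary.

interface AIModelResult {
  answer: string;     // the model's answer, now returned alongside usage
  tokensUsed: number; // token count reported by the call
}

// Stand-in for the expensive AI call in the docs' server example.
async function callExpensiveAIModel(prompt: string): Promise<AIModelResult> {
  // A real implementation would call a model API here.
  return { answer: `Echo: ${prompt}`, tokensUsed: prompt.length };
}

async function handleRequest(prompt: string) {
  // Before the fix: const { tokensUsed } = await callExpensiveAIModel(prompt);
  // After the fix, the answer is destructured as well:
  const { answer, tokensUsed } = await callExpensiveAIModel(prompt);

  // The settled response now includes `answer` instead of the old `result`.
  return { answer, tokensUsed };
}
```

The point of the change is that without destructuring `answer`, the handler had no value from the model to put in its response; adding it to the destructuring pattern lets the response carry the model output alongside the usage figure.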