
feat(blocks, ai): add prompt and completion tokens to the save dropdown#2070

Merged
baptisteArno merged 1 commit into main from issue-1873
Mar 15, 2025

Conversation

@alexis-falaise
Contributor

@alexis-falaise alexis-falaise commented Mar 14, 2025

This adds Prompt tokens and Completion tokens options to the save dropdown for AI blocks, based on the available token counts of the CompletionUsage response.

[Screenshot: Capture d’écran 2025-03-14 à 16 37 55]

@vercel

vercel bot commented Mar 14, 2025

The latest updates on your projects. Learn more about Vercel for Git ↗︎

Name            Status       Updated (UTC)
builder-v2      🛑 Canceled  Mar 14, 2025 4:44pm
landing-page-v2 ❌ Failed    Mar 14, 2025 4:44pm
viewer-v2       🔄 Building  Mar 14, 2025 4:44pm

@coderabbitai

coderabbitai bot commented Mar 14, 2025

Walkthrough

This pull request extends token usage tracking across several modules. Updates include adding "Prompt tokens" and "Completion tokens" to predefined response values, mapping arrays, and conditional logic in chat completion functions. Additionally, the legacy OpenAI integrations have been modified to construct and pass a structured tokens object (with total, prompt, and completion tokens) instead of a single numeric value. Together these changes let token metrics be captured and processed at a more granular level.

Changes

File(s) — Change summary
  • packages/ai/src/constants.ts, packages/blocks/integrations/src/openai/constants.ts — Updated the chatCompletionResponseValues array to include "Prompt tokens" and "Completion tokens".
  • packages/ai/src/parseChatCompletionOptions.ts — Expanded the responseMapping array to include "Prompt tokens" and "Completion tokens" as additional response types.
  • packages/ai/src/runChatCompletion.ts — Added conditional checks within the response-mapping loop to handle and assign values for "Prompt tokens" and "Completion tokens" from the usage metrics.
  • packages/ai/src/runChatCompletionStream.ts — Introduced additional logic in the onFinish callback to map "Prompt tokens" and "Completion tokens" from the streaming response’s usage to the corresponding variables.
  • packages/bot-engine/src/blocks/integrations/legacy/openai/createChatCompletionOpenAI.ts, packages/bot-engine/src/blocks/integrations/legacy/openai/resumeChatCompletion.ts — Modified token handling by constructing a tokens object (with totalTokens, promptTokens, and completionTokens), adding a new ResumeChatCompletionTokens interface, and updating method signatures.
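The response-mapping logic described above can be sketched roughly as follows. This is a minimal illustration, not code from the PR: the type names (ResponseMapping), the setVariable callback, and the usage field names are all assumed for the example.

```typescript
// Hypothetical sketch of the conditional mapping added to the response-mapping
// loop. Identifiers are illustrative, not the exact ones in the Typebot codebase.
type Usage = {
  totalTokens: number;
  promptTokens: number;
  completionTokens: number;
};

type ResponseMapping = {
  item: "Total tokens" | "Prompt tokens" | "Completion tokens";
  variableId: string;
};

const applyUsageMappings = (
  mappings: ResponseMapping[],
  usage: Usage,
  setVariable: (id: string, value: number) => void,
): void => {
  for (const mapping of mappings) {
    // Each dropdown option maps one usage metric onto the chosen variable.
    if (mapping.item === "Total tokens")
      setVariable(mapping.variableId, usage.totalTokens);
    if (mapping.item === "Prompt tokens")
      setVariable(mapping.variableId, usage.promptTokens);
    if (mapping.item === "Completion tokens")
      setVariable(mapping.variableId, usage.completionTokens);
  }
};
```

The point of the change is simply that the two new dropdown entries get their own branches in this loop, rather than only "Total tokens" being handled.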

Sequence Diagram(s)

sequenceDiagram
    participant RC as runChatCompletion
    participant Usage as Usage Object
    participant Var as Variable Store

    RC->>Usage: Retrieve usage metrics (total, prompt, completion)
    loop Process each mapping
        Usage-->>RC: Provide specific token metric
        alt Mapping is "Prompt tokens"
            RC->>Var: Assign usage.promptTokens
        else Mapping is "Completion tokens"
            RC->>Var: Assign usage.completionTokens
        else
            RC->>Var: Assign other token values (e.g., total tokens)
        end
    end
sequenceDiagram
    participant Client as createChatCompletionOpenAI
    participant ChatAPI as ChatCompletion Service
    participant Resume as resumeChatCompletion

    Client->>ChatAPI: Request chat completion (retrieve usage data)
    ChatAPI-->>Client: Return usage metrics (total, prompt, completion)
    Client->>Resume: Pass tokens object (with all token values)
    Resume->>Resume: Process tokens using ResumeChatCompletionTokens interface
    Resume-->>Client: Return processed chat completion result
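For the legacy OpenAI integration, the summary says a single numeric token count was replaced by a structured tokens object built from the API's usage payload. A hypothetical shape, assuming the snake_case field names of OpenAI's CompletionUsage response (the exact fields of the ResumeChatCompletionTokens interface are not shown in this summary):

```typescript
// Hypothetical shape of the tokens object passed to resumeChatCompletion.
// The interface name comes from the change summary; its fields are assumed.
interface ResumeChatCompletionTokens {
  totalTokens?: number;
  promptTokens?: number;
  completionTokens?: number;
}

// Build the structured object from an OpenAI CompletionUsage-style payload,
// tolerating a missing usage block.
const toTokens = (usage?: {
  total_tokens: number;
  prompt_tokens: number;
  completion_tokens: number;
}): ResumeChatCompletionTokens => ({
  totalTokens: usage?.total_tokens,
  promptTokens: usage?.prompt_tokens,
  completionTokens: usage?.completion_tokens,
});
```

Passing all three counts through one object is what lets the downstream save dropdown offer "Prompt tokens" and "Completion tokens" alongside the existing total.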

Tip

⚡🧪 Multi-step agentic review comment chat (experimental)
  • We're introducing multi-step agentic chat in review comments. This experimental feature enhances review discussions with the CodeRabbit agentic chat by enabling advanced interactions, including the ability to create pull requests directly from comments.
    - To enable this feature, set early_access to true in the settings.

📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 214410c35989ce25954dcbcde5127094a905bde3 and edc2499.

📒 Files selected for processing (7)
  • packages/ai/src/constants.ts (1 hunks)
  • packages/ai/src/parseChatCompletionOptions.ts (1 hunks)
  • packages/ai/src/runChatCompletion.ts (1 hunks)
  • packages/ai/src/runChatCompletionStream.ts (1 hunks)
  • packages/blocks/integrations/src/openai/constants.ts (1 hunks)
  • packages/bot-engine/src/blocks/integrations/legacy/openai/createChatCompletionOpenAI.ts (2 hunks)
  • packages/bot-engine/src/blocks/integrations/legacy/openai/resumeChatCompletion.ts (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (7)
  • packages/ai/src/constants.ts
  • packages/blocks/integrations/src/openai/constants.ts
  • packages/ai/src/runChatCompletion.ts
  • packages/bot-engine/src/blocks/integrations/legacy/openai/createChatCompletionOpenAI.ts
  • packages/ai/src/runChatCompletionStream.ts
  • packages/ai/src/parseChatCompletionOptions.ts
  • packages/bot-engine/src/blocks/integrations/legacy/openai/resumeChatCompletion.ts

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.


@coderabbitai coderabbitai bot left a comment


Caution

Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments. If you are seeing this consistently it is likely a permissions issue. Please check "Moderation" -> "Code review limits" under your organization settings.

Actionable comments posted: 1

🛑 Comments failed to post (1)
.husky/pre-commit (1)

4-4: 🛠️ Refactor suggestion

Remove hardcoded user path to improve portability

The pre-commit hook contains a hardcoded path to the bun executable that is specific to your local machine (/Users/Alexis/.bun/bin/bun). This will break for other developers who clone the repository since they will have different user paths.

Replace the absolute path with just the command name to rely on the system's PATH:

- /Users/Alexis/.bun/bin/bun pre-commit
+ bun pre-commit
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

bun pre-commit

@baptisteArno baptisteArno merged commit 9a7624b into main Mar 15, 2025
6 of 7 checks passed
@baptisteArno baptisteArno deleted the issue-1873 branch March 15, 2025 07:23
@baptisteArno
Owner

baptisteArno commented Mar 15, 2025

Congrats and thank you, first PR merged 🔥


Development

Successfully merging this pull request may close these issues.

Be able to save both prompt and output tokens in AI blocks

2 participants