added patching logic for prompt metadata #462
Conversation
Walkthrough

Adds metadata processing to the generated JavaScript (imports interpolateTemplate, processPromptMetadata, and node:crypto) and patches generateText to attach prompt metadata via telemetry.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor Dev
    participant CodeGen as CodeGenerator (JS gen)
    participant Runtime as Generated JS
    participant Meta as processPromptMetadata
    participant Crypto as node:crypto
    Dev->>CodeGen: Request prompt/system code
    CodeGen-->>Runtime: Emit code (imports interpolateTemplate, processPromptMetadata, crypto)
    Runtime->>Runtime: interpolateTemplate(template, vars) => compiled
    alt variables exist
        Runtime->>Meta: processPromptMetadata({slug, compiled, template, variables})
    else no variables
        Runtime->>Meta: processPromptMetadata({slug, compiled, template, {}})
    end
    Runtime-->>Dev: return compiled
```

```mermaid
sequenceDiagram
    autonumber
    participant App
    participant Hook as generateText (patched)
    participant PP as PatchPortal (dynamic import)
    participant Crypto as node:crypto
    participant SDK as AI SDK
    App->>Hook: generateText(_args)
    Hook->>Hook: stringify/coerce prompt/system to strings
    Hook->>Crypto: sha256(String(_args[0].system || ''))
    Crypto-->>Hook: systemHash
    Hook->>Crypto: sha256(String(_args[0].prompt || ''))
    Crypto-->>Hook: promptHash
    Hook->>PP: PatchPortal.getData(key from hashes)
    PP-->>Hook: patchPortalData
    Hook->>Hook: assemble agentuityPromptMetadata (promptId, patchPortalData, compiledHash, patchPortalKey)
    Hook->>SDK: enable experimental_telemetry with metadata { agentuity: { prompts: [...] } }
    Hook->>SDK: call original generateText with updated args
    SDK-->>App: result
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 3
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- internal/bundler/prompts/code_generator.go (3 hunks)
- internal/bundler/vercel_ai.go (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
internal/bundler/*.go
📄 CodeRabbit inference engine (.cursor/rules/code-generation.mdc)
internal/bundler/*.go: Generate into node_modules/@agentuity/sdk/dist/generated (fallback to node_modules/@agentuity/sdk/src/generated), not into source SDK paths
Scan src/prompts/ and prompts/ for .yaml/.yml files and merge into a single generated output
Always emit both system and prompt fields; emit empty strings for missing values; never require optional chaining in generated outputs
Include JSDoc with actual prompt content on generated system and prompt functions; wrap long lines at 80 chars
Use slugs directly in generated code; quote property names with hyphens; access via prompts['slug-name']; use lowerCamel for JS variable names
Prefer simple, maintainable types in generated definitions; avoid overly complex generics that cause compilation issues
Before writing, verify @agentuity/sdk exists in node_modules and mkdir -p the generated dir; error clearly if missing
Implement FindSDKGeneratedDir logic that prefers dist then src under @agentuity/sdk and returns the first writable location
Never write hardcoded prompt content in the generator or source; derive from YAML inputs only
Process both .yaml and .yml files for prompts and preserve original template content exactly
Quote TypeScript interface properties that include hyphens when emitting interfaces
Files:
internal/bundler/vercel_ai.go
🧠 Learnings (3)
📚 Learning: 2025-10-03T22:00:33.752Z
Learnt from: CR
PR: agentuity/cli#0
File: .cursor/rules/code-generation.mdc:0-0
Timestamp: 2025-10-03T22:00:33.752Z
Learning: Applies to internal/bundler/*.go : Include JSDoc with actual prompt content on generated system and prompt functions; wrap long lines at 80 chars
Applied to files:
internal/bundler/prompts/code_generator.go
📚 Learning: 2025-10-03T22:00:33.752Z
Learnt from: CR
PR: agentuity/cli#0
File: .cursor/rules/code-generation.mdc:0-0
Timestamp: 2025-10-03T22:00:33.752Z
Learning: Applies to internal/bundler/*.go : Always emit both system and prompt fields; emit empty strings for missing values; never require optional chaining in generated outputs
Applied to files:
internal/bundler/prompts/code_generator.go
📚 Learning: 2025-10-03T22:00:33.752Z
Learnt from: CR
PR: agentuity/cli#0
File: .cursor/rules/code-generation.mdc:0-0
Timestamp: 2025-10-03T22:00:33.752Z
Learning: Applies to internal/bundler/*.go : Never write hardcoded prompt content in the generator or source; derive from YAML inputs only
Applied to files:
internal/bundler/prompts/code_generator.go
🧬 Code graph analysis (1)
internal/bundler/prompts/code_generator.go (1)
internal/bundler/prompts/types.go (1)
Prompt(4-15)
🪛 GitHub Actions: Go Build and Test
internal/bundler/prompts/code_generator.go
[error] 221-221: fmt.Sprintf format %q reads arg #5, but call has 4 args
🪛 GitHub Check: Build and Test (blacksmith-4vcpu-ubuntu-2204-arm)
internal/bundler/prompts/code_generator.go
[failure] 221-221:
fmt.Sprintf format %q reads arg #5, but call has 4 args
🪛 GitHub Check: Build and Test (blacksmith-4vcpu-ubuntu-2204)
internal/bundler/prompts/code_generator.go
[failure] 221-221:
fmt.Sprintf format %q reads arg #5, but call has 4 args
```go
	}
	paramStr := strings.Join(paramNames, ", ")

	// Generate variables object for metadata
```
I would just JSON-stringify this and pass that into variables as-is below, and you don't have to worry about string-escape issues here.
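A minimal sketch of that suggestion (the helper name is hypothetical): serializing the variables map with `JSON.stringify` handles quotes, newlines, and backslashes in one step, so the generator needs no bespoke escaping when embedding the object into emitted code:

```javascript
// Hypothetical emit helper: embed a variables object into generated JS.
// JSON.stringify yields a valid JS object literal for plain data, so
// quotes, newlines, and backslashes in values need no manual escaping.
function emitVariablesLiteral(variables) {
  return `variables: ${JSON.stringify(variables)}`;
}

console.log(emitVariablesLiteral({ name: 'Ada "the" Countess', bio: 'line1\nline2' }));
```

Since JSON is (as of modern ECMAScript) a subset of JS expression syntax, the emitted literal parses back to the same object the generator started from.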
internal/bundler/vercel_ai.go (outdated)

```js
    compiledHash: compiledHash,
    patchPortalKey: key
  };
  opts.experimental_telemetry = { isEnabled: true, metadata: metadata };
```
need to check ai sdk v5 and see if still experimental? we might have to do a version switch here
https://ai-sdk.dev/docs/ai-sdk-core/telemetry
looks like it is still experimental
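Since the option keeps the `experimental_telemetry` name in the current docs, a version switch may be unnecessary for now. A hedged sketch (helper name hypothetical) of how the patch could merge its metadata into that option without clobbering caller-supplied telemetry settings:

```javascript
// Sketch: merge Agentuity prompt metadata into the AI SDK's
// experimental_telemetry option, preserving any caller-provided settings.
function withPromptTelemetry(opts, prompts) {
  return {
    ...opts,
    experimental_telemetry: {
      isEnabled: true,
      ...opts.experimental_telemetry, // caller settings win over the default
      metadata: {
        ...(opts.experimental_telemetry?.metadata ?? {}),
        agentuity: { prompts },
      },
    },
  };
}
```

Spreading the caller's telemetry object after `isEnabled: true` lets an explicit caller `isEnabled: false` win, while metadata from both sources is merged; whether the patch should force telemetry on instead is a design choice.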
```go
func init() {
	var vercelTelemetryPatch = generateJSArgsPatch(0, `experimental_telemetry: { isEnabled: true }`)
	// Generate PatchPortal integration patch with hashing and telemetry
	var patchPortalPatch = `
```
I assume we'll pull all these console.logs out once you have it stable?
I need to swap it out for this internal logger I created specifically for debugging these patches we do. If you look at the sdk you should see it.
By default it is silent, but I was thinking we could add log attributes to these and always have them log out in cloud, but filter them out when we query for the logs in the UI.
internal/bundler/vercel_ai.go (outdated)

```js
    compiledHash: compiledHash,
    patchPortalKey: key
  };
  opts.experimental_telemetry = { isEnabled: true, metadata: metadata };
```
```diff
- opts.experimental_telemetry = { isEnabled: true, metadata: metadata };
+ opts.experimental_telemetry = { isEnabled: true, metadata };
```
Actionable comments posted: 2
🧹 Nitpick comments (1)
internal/bundler/prompts/code_generator.go (1)
31-32: Drop unused `crypto` import

We import `* as crypto` in the generated JavaScript but never reference it anywhere, so every bundle will now carry an unnecessary dependency. Please remove the import (and any related dead code, if reintroduced later) to keep the output lean.
```js
  // Get patch data using the same key format as processPromptMetadata
  const key = 'prompt:' + compiledPromptHash;
  internal.debug('🔍 Looking for key:', key);
  const patchData = await patchPortal.get(key);
  internal.debug('🔍 Retrieved patch data:', patchData);
  metadata.push(patchData);
}
if (_args[0]?.prompt) {
  const prompt = _args[0]?.prompt || _args[0]?.messages || '';
  const promptString = typeof prompt === 'string' ? prompt : JSON.stringify(prompt);
  internal.debug('📝 Extracted prompt:', promptString.substring(0, 100) + '...');
  // Generate hash for the compiled prompt (same as processPromptMetadata uses)
  compiledPromptHash = crypto.createHash('sha256').update(promptString).digest('hex');
  internal.debug('🔑 PROMPT Generated compiled hash:', compiledPromptHash);
  // Get patch data using the same key format as processPromptMetadata
  const key = 'prompt:' + compiledPromptHash;
  internal.debug('🔍 Looking for key:', key);
  const patchData = await patchPortal.get(key);
  internal.debug('🔍 Retrieved patch data:', patchData);
  metadata.push(patchData);
}
```
Fix system metadata lookup hash

Line 56 builds the PatchPortal key with `compiledPromptHash`, but when the system block executes, `compiledPromptHash` is still an empty string. That means we always query `'prompt:'` and never hit the system metadata we just hashed into `compiledSystemHash`, so telemetry for system prompts is effectively broken. Switch the lookup to use the system hash (and matching key prefix, if applicable) so the retrieved data actually corresponds to the system template.

```diff
- const key = 'prompt:' + compiledPromptHash;
+ const key = 'system:' + compiledSystemHash;
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
  // Get patch data using the same key format as processPromptMetadata
  const key = 'system:' + compiledSystemHash;
  internal.debug('🔍 Looking for key:', key);
  const patchData = await patchPortal.get(key);
  internal.debug('🔍 Retrieved patch data:', patchData);
  metadata.push(patchData);
}
if (_args[0]?.prompt) {
  const prompt = _args[0]?.prompt || _args[0]?.messages || '';
  const promptString = typeof prompt === 'string' ? prompt : JSON.stringify(prompt);
  internal.debug('📝 Extracted prompt:', promptString.substring(0, 100) + '...');
  // Generate hash for the compiled prompt (same as processPromptMetadata uses)
  compiledPromptHash = crypto.createHash('sha256').update(promptString).digest('hex');
  internal.debug('🔑 PROMPT Generated compiled hash:', compiledPromptHash);
  // Get patch data using the same key format as processPromptMetadata
  const key = 'prompt:' + compiledPromptHash;
  internal.debug('🔍 Looking for key:', key);
  const patchData = await patchPortal.get(key);
  internal.debug('🔍 Retrieved patch data:', patchData);
  metadata.push(patchData);
}
```
🤖 Prompt for AI Agents
In internal/bundler/vercel_ai.go around lines 55 to 78, the lookup for system
metadata incorrectly uses compiledPromptHash and the 'prompt:' key prefix;
change the lookup to use the system hash and matching key prefix by building the
key from compiledSystemHash (e.g., 'system:'+compiledSystemHash) before calling
patchPortal.get so the retrieved patch data corresponds to the system template
you just hashed.
Actionable comments posted: 1
♻️ Duplicate comments (1)
internal/bundler/vercel_ai.go (1)
55-64: Fix system metadata lookup hash
`compiledPromptHash` is still empty when this branch runs, so every lookup hits `'prompt:'` with no suffix and we never fetch the system metadata we just hashed. Use the system hash (and matching key prefix, if required) so the retrieved data corresponds to the system template.

```diff
- const key = 'prompt:' + compiledPromptHash;
+ const key = 'prompt:' + compiledSystemHash;
```
```js
if (_args[0]?.prompt) {
  const prompt = _args[0]?.prompt || _args[0]?.messages || '';
  const promptString = typeof prompt === 'string' ? prompt : JSON.stringify(prompt);
  internal.debug('📝 Extracted prompt:', promptString.substring(0, 100) + '...');
  // Generate hash for the compiled prompt (same as processPromptMetadata uses)
  compiledPromptHash = crypto.createHash('sha256').update(promptString).digest('hex');
  internal.debug('🔑 PROMPT Generated compiled hash:', compiledPromptHash);
  // Get patch data using the same key format as processPromptMetadata
  const key = 'prompt:' + compiledPromptHash;
  internal.debug('🔍 Looking for key:', key);
  const patchData = await patchPortal.get(key);
  if (patchData) {
    internal.debug('🔍 Retrieved patch data:', patchData);
    agentuityPromptMetadata.push(...patchData);
  } else {
    internal.debug('ℹ️ No patch data found for compiled hash:', compiledPromptHash);
  }
}
```
Handle message-only invocations
When callers pass messages without prompt, this branch is skipped entirely, so no hash is generated and no metadata is fetched. Check for either prompt or messages before building the string so chat-style requests still participate in PatchPortal telemetry.
```diff
- if (_args[0]?.prompt) {
-   const prompt = _args[0]?.prompt || _args[0]?.messages || '';
+ if (_args[0]?.prompt || _args[0]?.messages) {
+   const prompt = _args[0]?.prompt ?? _args[0]?.messages ?? '';
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
if (_args[0]?.prompt || _args[0]?.messages) {
  const prompt = _args[0]?.prompt ?? _args[0]?.messages ?? '';
  const promptString = typeof prompt === 'string' ? prompt : JSON.stringify(prompt);
  internal.debug('📝 Extracted prompt:', promptString.substring(0, 100) + '...');
  // Generate hash for the compiled prompt (same as processPromptMetadata uses)
  compiledPromptHash = crypto.createHash('sha256').update(promptString).digest('hex');
  internal.debug('🔑 PROMPT Generated compiled hash:', compiledPromptHash);
  // Get patch data using the same key format as processPromptMetadata
  const key = 'prompt:' + compiledPromptHash;
  internal.debug('🔍 Looking for key:', key);
  const patchData = await patchPortal.get(key);
  if (patchData) {
    internal.debug('🔍 Retrieved patch data:', patchData);
    agentuityPromptMetadata.push(...patchData);
  } else {
    internal.debug('ℹ️ No patch data found for compiled hash:', compiledPromptHash);
  }
}
```
🤖 Prompt for AI Agents
In internal/bundler/vercel_ai.go around lines 68 to 86, the current branch only
runs when _args[0].prompt exists so message-only calls are ignored; change the
guard to check for either prompt or messages (if (_args[0]?.prompt ||
_args[0]?.messages)), set promptString from prompt if present otherwise from
messages (stringify messages when not a string), then compute the
compiledPromptHash and proceed exactly as before so chat-style requests generate
the same hash and fetch PatchPortal metadata.
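The suggested guard can be isolated into a tiny normalizer (the function name is hypothetical) so both string `prompt`s and `messages` arrays yield a stable string to hash:

```javascript
// Normalize either `prompt` or `messages` into a hashable string;
// returns null when neither is present so callers can skip the lookup.
function extractPromptString(arg) {
  const source = arg?.prompt ?? arg?.messages;
  if (source === undefined || source === null) return null;
  return typeof source === 'string' ? source : JSON.stringify(source);
}

console.log(extractPromptString({ prompt: 'Hi' }));  // 'Hi'
console.log(extractPromptString({ messages: [{ role: 'user', content: 'Hi' }] }));
console.log(extractPromptString({}));                // null
```

Because `JSON.stringify` is deterministic for a given object shape, message-only calls produce the same hash on the producer and consumer sides, which is the property the PatchPortal lookup depends on.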
Summary by CodeRabbit
New Features
Chores
Tests