
Conversation

@potofpie (Member) commented Oct 4, 2025

Summary by CodeRabbit

  • New Features

    • Generated JavaScript now performs prompt metadata processing (variable mapping and compiled-content handling) while keeping the public API unchanged.
    • Vercel AI generateText telemetry now fetches and attaches prompt IDs/hashes and aggregated prompt metadata for improved observability.
  • Chores

    • Reduced SDK generation log verbosity from Info to Debug.
  • Tests

    • Updated tests to expect metadata processing in generated outputs.

@coderabbitai bot (Contributor) commented Oct 4, 2025

Walkthrough

Adds metadata processing to generated JavaScript (imports processPromptMetadata and node:crypto, builds variable maps, interpolates templates, invokes processPromptMetadata, returns compiled strings). Replaces Vercel telemetry patch with a dynamic PatchPortal fetch that aggregates prompt metadata and injects it into experimental_telemetry. A log was downgraded from Info to Debug.

Changes

• Bundler: Prompt metadata generation (internal/bundler/prompts/code_generator.go)
  JS generation now imports processPromptMetadata and node:crypto; constructs variables objects for system/prompt templates, interpolates templates, calls processPromptMetadata({ slug, compiled, template, variables }) (with an empty variables object when none exist), and returns the compiled output. The TypeScript public API is unchanged.
• Bundler: Logging change (internal/bundler/prompts/prompts.go)
  Changed the SDK generation log level from Info to Debug in ProcessPrompts.
• Vercel AI: PatchPortal telemetry integration (internal/bundler/vercel_ai.go)
  Replaces the prior telemetry patch with a new patchPortalPatch that dynamically imports PatchPortal and internal utils, computes SHA-256 hashes of system/prompt inputs to derive lookup keys, fetches PatchPortal data, aggregates the returned data into prompt metadata (agentuity.prompts), enables experimental_telemetry with that metadata, coerces prompt/system to strings, and injects the patch into the generateText flow (the previous vercelTelemetryPatch becomes a baseline/empty patch).
• Tests: generated JS import expectation (internal/bundler/prompts/code_generator_test.go)
  Test updated to expect the generated JS to import both interpolateTemplate and processPromptMetadata from ../../../index.js.
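
For concreteness, here is a minimal sketch of what the generated accessor could look like after this change. The slug, template text, and variable names are hypothetical; the real output is derived entirely from the project's YAML prompt files, and the emitted shape may differ in detail:

import { interpolateTemplate, processPromptMetadata } from '../../../index.js';

export const prompts = {
	// 'welcome-email' and its template are illustrative placeholders only.
	'welcome-email': {
		// The generator always emits both fields; system is empty here because
		// the hypothetical YAML defines none.
		system: () => '',
		prompt: (variables) => {
			const template = 'Hi {{name}}, welcome to {{product}}!';
			const compiled = interpolateTemplate(template, variables);
			// Register prompt metadata; an empty object is passed when the
			// template declares no variables.
			processPromptMetadata({
				slug: 'welcome-email',
				compiled,
				template,
				variables: variables ?? {},
			});
			return compiled;
		},
	},
};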

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor Dev
  participant CodeGen as CodeGenerator (JS gen)
  participant Runtime as Generated JS
  participant Meta as processPromptMetadata
  participant Crypto as node:crypto

  Dev->>CodeGen: Request prompt/system code
  CodeGen-->>Runtime: Emit code (imports interpolateTemplate, processPromptMetadata, crypto)
  Runtime->>Runtime: interpolateTemplate(template, vars) => compiled
  alt variables exist
    Runtime->>Meta: processPromptMetadata({slug, compiled, template, variables})
  else no variables
    Runtime->>Meta: processPromptMetadata({slug, compiled, template, variables: {}})
  end
  Runtime-->>Dev: return compiled
sequenceDiagram
  autonumber
  participant App
  participant Hook as generateText (patched)
  participant PP as PatchPortal (dynamic import)
  participant Crypto as node:crypto
  participant SDK as AI SDK

  App->>Hook: generateText(_args)
  Hook->>Hook: stringify/coerce prompt/system to strings
  Hook->>Crypto: sha256(String(_args[0].system || ''))
  Crypto-->>Hook: systemHash
  Hook->>Crypto: sha256(String(_args[0].prompt || ''))
  Crypto-->>Hook: promptHash
  Hook->>PP: PatchPortal.getData(key from hashes)
  PP-->>Hook: patchPortalData
  Hook->>Hook: assemble agentuityPromptMetadata (promptId, patchPortalData, compiledHash, patchPortalKey)
  Hook->>SDK: enable experimental_telemetry with metadata { agentuity: { prompts: [...] } }
  Hook->>SDK: call original generateText with updated args
  SDK-->>App: result
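
Read together, the second diagram boils down to a wrapper roughly like the one below. This is a sketch only: PatchPortal's get method, the 'prompt:' key prefix, and the metadata shape are taken from the walkthrough and the review snippets further down, while the wrapper scaffolding is illustrative.

import crypto from 'node:crypto';

function sha256(text) {
	return crypto.createHash('sha256').update(text).digest('hex');
}

// Wraps a generateText-style function so each call hashes its system/prompt
// inputs, looks up aggregated prompt metadata in PatchPortal, and injects it
// into experimental_telemetry before delegating to the original.
function patchGenerateText(original, patchPortal) {
	return async function patched(...args) {
		const opts = args[0] ?? {};
		const prompts = [];
		for (const field of ['system', 'prompt']) {
			if (!opts[field]) continue;
			const text = typeof opts[field] === 'string' ? opts[field] : JSON.stringify(opts[field]);
			const data = await patchPortal.get('prompt:' + sha256(text));
			if (data) prompts.push(...data);
		}
		opts.experimental_telemetry = { isEnabled: true, metadata: { agentuity: { prompts } } };
		return original(...args);
	};
}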

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

I nibble on templates, a whisker of code,
Hashes crunch carrots down an unseen road.
PatchPortal hums secrets soft and deep,
Variables wake from their cozy sleep.
I hop—metadata snug, tucked deep. 🥕🐇

Pre-merge checks

✅ Passed checks (3 passed)
• Description Check: ✅ Passed. Check skipped: CodeRabbit’s high-level summary is enabled.
• Title Check: ✅ Passed. The title clearly and succinctly summarizes the main change by stating that patching logic for prompt metadata has been added. It directly reflects the updates to the bundler code generator and Vercel AI integration without unnecessary detail or ambiguity. This concise phrasing allows reviewers to immediately grasp the pull request’s focus.
• Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1602388 and 012d4e1.

📒 Files selected for processing (1)
  • internal/bundler/prompts/prompts.go (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-10-03T22:00:33.772Z
Learnt from: CR
PR: agentuity/cli#0
File: .cursor/rules/code-generation.mdc:0-0
Timestamp: 2025-10-03T22:00:33.772Z
Learning: Applies to internal/bundler/*.go : Never write hardcoded prompt content in the generator or source; derive from YAML inputs only

Applied to files:

  • internal/bundler/prompts/prompts.go
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Build and Test (macos-latest)
  • GitHub Check: Analyze (go)
🔇 Additional comments (1)
internal/bundler/prompts/prompts.go (1)

136-136: LGTM! Logging consistency improved.

The log level change to Debug aligns with all other log statements in this function (lines 86, 90, 106, 109, 117), making the logging behavior consistent. Debug level is appropriate for code generation details.


Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 3

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ebd9d74 and 73f1409.

📒 Files selected for processing (2)
  • internal/bundler/prompts/code_generator.go (3 hunks)
  • internal/bundler/vercel_ai.go (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
internal/bundler/*.go

📄 CodeRabbit inference engine (.cursor/rules/code-generation.mdc)

internal/bundler/*.go: Generate into node_modules/@agentuity/sdk/dist/generated (fallback to node_modules/@agentuity/sdk/src/generated), not into source SDK paths
Scan src/prompts/ and prompts/ for .yaml/.yml files and merge into a single generated output
Always emit both system and prompt fields; emit empty strings for missing values; never require optional chaining in generated outputs
Include JSDoc with actual prompt content on generated system and prompt functions; wrap long lines at 80 chars
Use slugs directly in generated code; quote property names with hyphens; access via prompts['slug-name']; use lowerCamel for JS variable names
Prefer simple, maintainable types in generated definitions; avoid overly complex generics that cause compilation issues
Before writing, verify @agentuity/sdk exists in node_modules and mkdir -p the generated dir; error clearly if missing
Implement FindSDKGeneratedDir logic that prefers dist then src under @agentuity/sdk and returns the first writable location
Never write hardcoded prompt content in the generator or source; derive from YAML inputs only
Process both .yaml and .yml files for prompts and preserve original template content exactly
Quote TypeScript interface properties that include hyphens when emitting interfaces

Files:

  • internal/bundler/vercel_ai.go
🧠 Learnings (3)
📚 Learning: 2025-10-03T22:00:33.752Z
Learnt from: CR
PR: agentuity/cli#0
File: .cursor/rules/code-generation.mdc:0-0
Timestamp: 2025-10-03T22:00:33.752Z
Learning: Applies to internal/bundler/*.go : Include JSDoc with actual prompt content on generated system and prompt functions; wrap long lines at 80 chars

Applied to files:

  • internal/bundler/prompts/code_generator.go
📚 Learning: 2025-10-03T22:00:33.752Z
Learnt from: CR
PR: agentuity/cli#0
File: .cursor/rules/code-generation.mdc:0-0
Timestamp: 2025-10-03T22:00:33.752Z
Learning: Applies to internal/bundler/*.go : Always emit both system and prompt fields; emit empty strings for missing values; never require optional chaining in generated outputs

Applied to files:

  • internal/bundler/prompts/code_generator.go
📚 Learning: 2025-10-03T22:00:33.752Z
Learnt from: CR
PR: agentuity/cli#0
File: .cursor/rules/code-generation.mdc:0-0
Timestamp: 2025-10-03T22:00:33.752Z
Learning: Applies to internal/bundler/*.go : Never write hardcoded prompt content in the generator or source; derive from YAML inputs only

Applied to files:

  • internal/bundler/prompts/code_generator.go
🧬 Code graph analysis (1)
internal/bundler/prompts/code_generator.go (1)
internal/bundler/prompts/types.go (1)
  • Prompt (4-15)
🪛 GitHub Actions: Go Build and Test
internal/bundler/prompts/code_generator.go

[error] 221-221: fmt.Sprintf format %q reads arg #5, but call has 4 args

🪛 GitHub Check: Build and Test (blacksmith-4vcpu-ubuntu-2204-arm)
internal/bundler/prompts/code_generator.go

[failure] 221-221:
fmt.Sprintf format %q reads arg #5, but call has 4 args

🪛 GitHub Check: Build and Test (blacksmith-4vcpu-ubuntu-2204)
internal/bundler/prompts/code_generator.go

[failure] 221-221:
fmt.Sprintf format %q reads arg #5, but call has 4 args

@jhaynie (Member) previously requested changes Oct 6, 2025

}
paramStr := strings.Join(paramNames, ", ")

// Generate variables object for metadata

Member:
I would just JSON-stringify this and pass it into variables as-is below, so you don't have to worry about string-escape issues here.
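
For illustration, the idea is that a single JSON.stringify call over the whole variables map yields a valid, fully escaped object literal that can be spliced into the emitted call. The values below are made up:

// Hypothetical values containing characters that would otherwise need
// manual escaping in generated source.
const variables = { name: 'Ada "the Countess" Lovelace', note: 'line one\nline two' };

// One JSON.stringify call handles all quoting and escaping, so the emitted
// snippet is valid JavaScript regardless of the values.
const emitted = `processPromptMetadata({ slug, compiled, template, variables: ${JSON.stringify(variables)} })`;
console.log(emitted);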

compiledHash: compiledHash,
patchPortalKey: key
};
opts.experimental_telemetry = { isEnabled: true, metadata: metadata };

Member:

Need to check AI SDK v5 and see if this is still experimental; we might have to do a version switch here.

Member Author:

https://ai-sdk.dev/docs/ai-sdk-core/telemetry

Looks like it is still experimental.
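
For reference, the documented shape (and what this patch injects) looks like the following. This assumes @ai-sdk/openai is installed; the model choice and metadata contents are placeholders mirroring the walkthrough:

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
	model: openai('gpt-4o-mini'),
	prompt: 'Say hello',
	experimental_telemetry: {
		isEnabled: true,
		// The patch populates this with the aggregated prompt metadata.
		metadata: { agentuity: { prompts: [] } },
	},
});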

func init() {
var vercelTelemetryPatch = generateJSArgsPatch(0, `experimental_telemetry: { isEnabled: true }`)
// Generate PatchPortal integration patch with hashing and telemetry
var patchPortalPatch = `

Member:

I assume we'll pull all these console.logs out once you have it stable?

Member Author:

I need to swap it out for the internal logger I created specifically for debugging these patches we do. If you look at the SDK you should see it.

Member Author:

By default it is silent, but I was thinking we could add log attributes to these and always have them log out in cloud, but filter them out when we query for the logs in the UI.

compiledHash: compiledHash,
patchPortalKey: key
};
opts.experimental_telemetry = { isEnabled: true, metadata: metadata };

Member:

Suggested change
opts.experimental_telemetry = { isEnabled: true, metadata: metadata };
opts.experimental_telemetry = { isEnabled: true, metadata };

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

🧹 Nitpick comments (1)
internal/bundler/prompts/code_generator.go (1)

31-32: Drop unused crypto import

We import * as crypto in the generated JavaScript but never reference it anywhere, so every bundle will now carry an unnecessary dependency. Please remove the import (and any related dead code, if reintroduced later) to keep the output lean.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9db85ad and bc9d4ec.

📒 Files selected for processing (2)
  • internal/bundler/prompts/code_generator.go (3 hunks)
  • internal/bundler/vercel_ai.go (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
internal/bundler/*.go

📄 CodeRabbit inference engine (.cursor/rules/code-generation.mdc)

(Same path-based instructions as listed in the earlier review.)

Files:

  • internal/bundler/vercel_ai.go
🧠 Learnings (3)
📚 Learning: 2025-10-03T22:00:33.772Z
Learnt from: CR
PR: agentuity/cli#0
File: .cursor/rules/code-generation.mdc:0-0
Timestamp: 2025-10-03T22:00:33.772Z
Learning: Applies to internal/bundler/*.go : Include JSDoc with actual prompt content on generated system and prompt functions; wrap long lines at 80 chars

Applied to files:

  • internal/bundler/prompts/code_generator.go
📚 Learning: 2025-10-03T22:00:33.772Z
Learnt from: CR
PR: agentuity/cli#0
File: .cursor/rules/code-generation.mdc:0-0
Timestamp: 2025-10-03T22:00:33.772Z
Learning: Applies to internal/bundler/*.go : Never write hardcoded prompt content in the generator or source; derive from YAML inputs only

Applied to files:

  • internal/bundler/prompts/code_generator.go
📚 Learning: 2025-10-03T22:00:33.772Z
Learnt from: CR
PR: agentuity/cli#0
File: .cursor/rules/code-generation.mdc:0-0
Timestamp: 2025-10-03T22:00:33.772Z
Learning: Applies to src/apis/**/*.ts : Do not rely on optional chaining for generated fields; treat system and prompt as always-present strings

Applied to files:

  • internal/bundler/vercel_ai.go
🧬 Code graph analysis (1)
internal/bundler/prompts/code_generator.go (1)
internal/bundler/prompts/types.go (1)
  • Prompt (4-15)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Build and Test (macos-latest)
  • GitHub Check: Analyze (go)

Comment on lines 55 to 78
			// Get patch data using the same key format as processPromptMetadata
			const key = 'prompt:' + compiledPromptHash;
			internal.debug('🔍 Looking for key:', key);
			const patchData = await patchPortal.get(key);
			internal.debug('🔍 Retrieved patch data:', patchData);
			metadata.push(patchData);
		}
		if (_args[0]?.prompt) {
			const prompt = _args[0]?.prompt || _args[0]?.messages || '';
			const promptString = typeof prompt === 'string' ? prompt : JSON.stringify(prompt);
			internal.debug('📝 Extracted prompt:', promptString.substring(0, 100) + '...');
			// Generate hash for the compiled prompt (same as processPromptMetadata uses)
			compiledPromptHash = crypto.createHash('sha256').update(promptString).digest('hex');
			internal.debug('🔑 PROMPT Generated compiled hash:', compiledPromptHash);
			// Get patch data using the same key format as processPromptMetadata
			const key = 'prompt:' + compiledPromptHash;
			internal.debug('🔍 Looking for key:', key);
			const patchData = await patchPortal.get(key);
			internal.debug('🔍 Retrieved patch data:', patchData);
			metadata.push(patchData);
		}

coderabbitai bot (Contributor):

⚠️ Potential issue | 🟠 Major

Fix system metadata lookup hash

Line 56 builds the PatchPortal key with compiledPromptHash, but when the system block executes, compiledPromptHash is still an empty string. That means we always query 'prompt:' and never hit the system metadata we just hashed into compiledSystemHash, so telemetry for system prompts is effectively broken. Switch the lookup to use the system hash (and matching key prefix, if applicable) so the retrieved data actually corresponds to the system template.

-			const key = 'prompt:' + compiledPromptHash;
+			const key = 'system:' + compiledSystemHash;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
Before:

			// Get patch data using the same key format as processPromptMetadata
			const key = 'prompt:' + compiledPromptHash;
			internal.debug('🔍 Looking for key:', key);
			const patchData = await patchPortal.get(key);
			internal.debug('🔍 Retrieved patch data:', patchData);
			metadata.push(patchData);
		}
		if (_args[0]?.prompt) {
			const prompt = _args[0]?.prompt || _args[0]?.messages || '';
			const promptString = typeof prompt === 'string' ? prompt : JSON.stringify(prompt);
			internal.debug('📝 Extracted prompt:', promptString.substring(0, 100) + '...');
			// Generate hash for the compiled prompt (same as processPromptMetadata uses)
			compiledPromptHash = crypto.createHash('sha256').update(promptString).digest('hex');
			internal.debug('🔑 PROMPT Generated compiled hash:', compiledPromptHash);
			// Get patch data using the same key format as processPromptMetadata
			const key = 'prompt:' + compiledPromptHash;
			internal.debug('🔍 Looking for key:', key);
			const patchData = await patchPortal.get(key);
			internal.debug('🔍 Retrieved patch data:', patchData);
			metadata.push(patchData);
		}

After:

			// Get patch data using the same key format as processPromptMetadata
			const key = 'system:' + compiledSystemHash;
			internal.debug('🔍 Looking for key:', key);
			const patchData = await patchPortal.get(key);
			internal.debug('🔍 Retrieved patch data:', patchData);
			metadata.push(patchData);
		}
		if (_args[0]?.prompt) {
			const prompt = _args[0]?.prompt || _args[0]?.messages || '';
			const promptString = typeof prompt === 'string' ? prompt : JSON.stringify(prompt);
			internal.debug('📝 Extracted prompt:', promptString.substring(0, 100) + '...');
			// Generate hash for the compiled prompt (same as processPromptMetadata uses)
			compiledPromptHash = crypto.createHash('sha256').update(promptString).digest('hex');
			internal.debug('🔑 PROMPT Generated compiled hash:', compiledPromptHash);
			// Get patch data using the same key format as processPromptMetadata
			const key = 'prompt:' + compiledPromptHash;
			internal.debug('🔍 Looking for key:', key);
			const patchData = await patchPortal.get(key);
			internal.debug('🔍 Retrieved patch data:', patchData);
			metadata.push(patchData);
		}
🤖 Prompt for AI Agents
In internal/bundler/vercel_ai.go around lines 55 to 78, the lookup for system
metadata incorrectly uses compiledPromptHash and the 'prompt:' key prefix;
change the lookup to use the system hash and matching key prefix by building the
key from compiledSystemHash (e.g., 'system:'+compiledSystemHash) before calling
patchPortal.get so the retrieved patch data corresponds to the system template
you just hashed.

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
internal/bundler/vercel_ai.go (1)

55-64: Fix system metadata lookup hash

compiledPromptHash is still empty when this branch runs, so every lookup hits 'prompt:' with no suffix and we never fetch the system metadata we just hashed. Use the system hash (and matching key prefix, if required) so the retrieved data corresponds to the system template.

-			const key = 'prompt:' + compiledPromptHash;
+			const key = 'prompt:' + compiledSystemHash;
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bc9d4ec and 1602388.

📒 Files selected for processing (1)
  • internal/bundler/vercel_ai.go (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
internal/bundler/*.go

📄 CodeRabbit inference engine (.cursor/rules/code-generation.mdc)

(Same path-based instructions as listed in the earlier review.)

Files:

  • internal/bundler/vercel_ai.go
🧠 Learnings (1)
📚 Learning: 2025-10-03T22:00:33.772Z
Learnt from: CR
PR: agentuity/cli#0
File: .cursor/rules/code-generation.mdc:0-0
Timestamp: 2025-10-03T22:00:33.772Z
Learning: Applies to src/apis/**/*.ts : Do not rely on optional chaining for generated fields; treat system and prompt as always-present strings

Applied to files:

  • internal/bundler/vercel_ai.go
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Build and Test (macos-latest)
  • GitHub Check: Analyze (go)

Comment on lines +68 to +86
		if (_args[0]?.prompt) {
			const prompt = _args[0]?.prompt || _args[0]?.messages || '';
			const promptString = typeof prompt === 'string' ? prompt : JSON.stringify(prompt);
			internal.debug('📝 Extracted prompt:', promptString.substring(0, 100) + '...');
			// Generate hash for the compiled prompt (same as processPromptMetadata uses)
			compiledPromptHash = crypto.createHash('sha256').update(promptString).digest('hex');
			internal.debug('🔑 PROMPT Generated compiled hash:', compiledPromptHash);
			// Get patch data using the same key format as processPromptMetadata
			const key = 'prompt:' + compiledPromptHash;
			internal.debug('🔍 Looking for key:', key);
			const patchData = await patchPortal.get(key);
			if (patchData) {
				internal.debug('🔍 Retrieved patch data:', patchData);
				agentuityPromptMetadata.push(...patchData);
			} else {
				internal.debug('ℹ️ No patch data found for compiled hash:', compiledPromptHash);
			}
		}

coderabbitai bot (Contributor):

⚠️ Potential issue | 🟠 Major

Handle message-only invocations

When callers pass messages without prompt, this branch is skipped entirely, so no hash is generated and no metadata is fetched. Check for either prompt or messages before building the string so chat-style requests still participate in PatchPortal telemetry.

-		if (_args[0]?.prompt) {
-			const prompt = _args[0]?.prompt || _args[0]?.messages || '';
+		if (_args[0]?.prompt || _args[0]?.messages) {
+			const prompt = _args[0]?.prompt ?? _args[0]?.messages ?? '';
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
Before:

		if (_args[0]?.prompt) {
			const prompt = _args[0]?.prompt || _args[0]?.messages || '';
			const promptString = typeof prompt === 'string' ? prompt : JSON.stringify(prompt);
			internal.debug('📝 Extracted prompt:', promptString.substring(0, 100) + '...');
			// Generate hash for the compiled prompt (same as processPromptMetadata uses)
			compiledPromptHash = crypto.createHash('sha256').update(promptString).digest('hex');
			internal.debug('🔑 PROMPT Generated compiled hash:', compiledPromptHash);
			// Get patch data using the same key format as processPromptMetadata
			const key = 'prompt:' + compiledPromptHash;
			internal.debug('🔍 Looking for key:', key);
			const patchData = await patchPortal.get(key);
			if (patchData) {
				internal.debug('🔍 Retrieved patch data:', patchData);
				agentuityPromptMetadata.push(...patchData);
			} else {
				internal.debug('ℹ️ No patch data found for compiled hash:', compiledPromptHash);
			}
		}

After:

		if (_args[0]?.prompt || _args[0]?.messages) {
			const prompt = _args[0]?.prompt ?? _args[0]?.messages ?? '';
			const promptString = typeof prompt === 'string' ? prompt : JSON.stringify(prompt);
			internal.debug('📝 Extracted prompt:', promptString.substring(0, 100) + '...');
			// Generate hash for the compiled prompt (same as processPromptMetadata uses)
			compiledPromptHash = crypto.createHash('sha256').update(promptString).digest('hex');
			internal.debug('🔑 PROMPT Generated compiled hash:', compiledPromptHash);
			// Get patch data using the same key format as processPromptMetadata
			const key = 'prompt:' + compiledPromptHash;
			internal.debug('🔍 Looking for key:', key);
			const patchData = await patchPortal.get(key);
			if (patchData) {
				internal.debug('🔍 Retrieved patch data:', patchData);
				agentuityPromptMetadata.push(...patchData);
			} else {
				internal.debug('ℹ️ No patch data found for compiled hash:', compiledPromptHash);
			}
		}
🤖 Prompt for AI Agents
In internal/bundler/vercel_ai.go around lines 68 to 86, the current branch only
runs when _args[0].prompt exists so message-only calls are ignored; change the
guard to check for either prompt or messages (if (_args[0]?.prompt ||
_args[0]?.messages)), set promptString from prompt if present otherwise from
messages (stringify messages when not a string), then compute the
compiledPromptHash and proceed exactly as before so chat-style requests generate
the same hash and fetch PatchPortal metadata.

@potofpie dismissed jhaynie’s stale review October 8, 2025 18:35

I addressed the feedback.

@potofpie merged commit 510019a into main Oct 8, 2025
14 checks passed
@potofpie deleted the add-prompt-meta-data branch October 8, 2025 18:35
@devin-ai-integration bot mentioned this pull request Oct 10, 2025