ref(ai-trace): Consolidate AI input/output format fallbacks#114028
obostjancic merged 7 commits into master from
Conversation
Unify per-attribute transforms behind a single normalizer so any supported
shape (parts, content, {messages} wrapper, {system, prompt}, plain string)
works on every gen_ai.* attribute. Outputs are filtered to role=assistant
when roles exist; otherwise the last message wins.
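The fallback chain described above can be sketched as follows. This is illustrative TypeScript only; `normalizeToMessages` and `pickOutput` are stand-in names, not the PR's actual exports, and the real normalizer handles more shapes (parts arrays, OpenRouter payloads) than shown here.

```typescript
// Hypothetical sketch of the unified normalizer; names are illustrative.
type AiMessage = {content: unknown; role: string};

function normalizeToMessages(raw: unknown): AiMessage[] {
  if (typeof raw === 'string') {
    // Plain string: treat as a single user message.
    return [{role: 'user', content: raw}];
  }
  if (Array.isArray(raw)) {
    // Already a message array.
    return raw as AiMessage[];
  }
  if (raw && typeof raw === 'object') {
    const obj = raw as Record<string, unknown>;
    if (Array.isArray(obj.messages)) {
      // {messages: [...]} wrapper.
      return obj.messages as AiMessage[];
    }
    if ('system' in obj || 'prompt' in obj) {
      // {system, prompt} shape.
      const out: AiMessage[] = [];
      if (typeof obj.system === 'string') {
        out.push({role: 'system', content: obj.system});
      }
      if (typeof obj.prompt === 'string') {
        out.push({role: 'user', content: obj.prompt});
      }
      return out;
    }
  }
  return [];
}

// Output selection: prefer role=assistant messages; otherwise last-wins.
function pickOutput(messages: AiMessage[]): AiMessage | null {
  const assistant = messages.filter(m => m.role === 'assistant');
  if (assistant.length > 0) {
    return assistant[assistant.length - 1];
  }
  return messages.length > 0 ? messages[messages.length - 1] : null;
}
```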
Cursor Bugbot has reviewed your changes and found 1 potential issue.
There are 3 total unresolved issues (including 2 from previous reviews).
Reviewed by Cursor Bugbot for commit 35ce90b.
Move aiMessageNormalizer to insights/pages/agents/utils so it's a shared
agent utility, then migrate parseUserContent and parseAssistantContent in
the conversations surface to use it. Plain strings on gen_ai.output.messages
are now rendered as the assistant response instead of being silently
dropped, and OpenRouter / parts / {system,prompt} formats now work on every
field everywhere.
extractTextFromContentParts previously required a recognized `type` field,
which dropped older Anthropic-style [{text: '...'}] arrays. Treat untyped
items with `text` or `content` as text parts.
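The relaxed extraction can be sketched like this. This is a minimal illustration of the behavior described in the commit message, not the actual helper in `aiMessageNormalizer.ts`:

```typescript
// Sketch: accept both typed parts and older untyped {text: '...'} parts.
type ContentPart = {content?: unknown; text?: unknown; type?: string};

function extractTextFromContentParts(parts: ContentPart[]): string {
  return parts
    .map(part => {
      if (part.type === 'text' && typeof part.text === 'string') {
        return part.text;
      }
      // Untyped items: older Anthropic-style [{text: '...'}] arrays carry
      // the text without a `type` discriminator, so accept them too.
      if (part.type === undefined) {
        if (typeof part.text === 'string') {
          return part.text;
        }
        if (typeof part.content === 'string') {
          return part.content;
        }
      }
      return '';
    })
    .filter(Boolean)
    .join('\n');
}
```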
const userMessage = messages.findLast(m => m.role === 'user');
if (!userMessage || typeof userMessage.content !== 'string') {
  return null;
}
return userMessage.content;
Bug: The parseUserContent function drops user messages if normalizeToMessages returns content as an object, due to a strict typeof userMessage.content !== 'string' check.
Severity: MEDIUM
Suggested Fix
Update parseUserContent to handle non-string content. Instead of returning null if userMessage.content is not a string, attempt to extract a string representation from the object, similar to how other parts of the normalizer handle complex content. This would align the function's behavior with the normalizer's capability to process varied message shapes.
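One possible shape of that fix is sketched below. `stringifyContent` is a hypothetical helper, not code from the PR, and a JSON fallback is just one way to extract a string representation:

```typescript
// Illustrative fix sketch: keep object content instead of dropping it.
type AiMessage = {content: unknown; role: string};

function stringifyContent(content: unknown): string | null {
  if (typeof content === 'string') {
    return content;
  }
  if (content && typeof content === 'object') {
    // Fall back to a JSON rendering rather than discarding the message.
    try {
      return JSON.stringify(content);
    } catch {
      return null;
    }
  }
  return null;
}

function parseUserContent(messages: AiMessage[]): string | null {
  // Reverse loop instead of findLast, to stay portable to older targets.
  for (let i = messages.length - 1; i >= 0; i--) {
    if (messages[i].role === 'user') {
      return stringifyContent(messages[i].content);
    }
  }
  return null;
}
```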
Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI
agent. Verify if this is a real issue. If it is, propose a fix; if not, explain why it's
not valid.
Location: static/app/views/explore/conversations/utils/conversationMessages.ts#L250-L254
Potential issue: The `parseUserContent` function in `conversationMessages.ts`
incorrectly drops user messages if their content is a JSON object rather than a string.
The new `normalizeToMessages` function can produce a message with object-based content,
but `parseUserContent` contains a strict check `typeof userMessage.content !== 'string'`
which then evaluates to true, causing the function to return `null` and silently discard
the message. This occurs despite the normalizer's goal to handle various data shapes,
creating a mismatch between what the normalizer produces and what the consumer function
accepts.
function looksLikeJson(raw: string): boolean {
  const trimmed = raw.trim();
  if (!trimmed) {
    return false;
  }
  const first = trimmed[0];
  return first === '[' || first === '{' || first === '"';
}
Bug: Plain text attributes starting with a quote (") are incorrectly treated as JSON. If parsing fails, the content is silently dropped instead of being treated as a string.
Severity: HIGH
Suggested Fix
Modify parseAndDetect to handle JSON parsing failures more gracefully. When parseJsonWithFix returns null (indicating a failed parse), the function should fall back to treating the original raw input as a plain string, wrapping it in a message object. This prevents data loss for valid strings that happen to start with a quote.
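The suggested fallback could look roughly like this. `parseJsonWithFix` here is a trivial stand-in for the normalizer's real JSON repair step, and the message shape is simplified:

```typescript
// Illustrative fix sketch: fall back to plain text when JSON parsing fails.
type Message = {content: string; role: string};

function parseJsonWithFix(raw: string): unknown {
  try {
    return JSON.parse(raw);
  } catch {
    return null; // the real helper also attempts repairs before giving up
  }
}

function parseAndDetect(raw: string): Message[] {
  const trimmed = raw.trim();
  const isJsonLike = ['[', '{', '"'].includes(trimmed[0] ?? '');
  if (isJsonLike) {
    const parsed = parseJsonWithFix(trimmed);
    if (parsed !== null) {
      // Structured path: a bare JSON string becomes plain content; other
      // shapes would go through the full normalizer (elided here).
      if (typeof parsed === 'string') {
        return [{role: 'user', content: parsed}];
      }
      return [{role: 'user', content: JSON.stringify(parsed)}];
    }
    // Parse failed: fall through to the plain-text path instead of
    // returning [], so inputs like `"It's a bug` are kept verbatim.
  }
  return [{role: 'user', content: raw}];
}
```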
Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI
agent. Verify if this is a real issue. If it is, propose a fix; if not, explain why it's
not valid.
Location: static/app/views/insights/pages/agents/utils/aiMessageNormalizer.ts#L327-L334
Potential issue: In `aiMessageNormalizer.ts`, the `parseAndDetect` function incorrectly
handles plain text attributes that start with a double-quote character (`"`). The
`looksLikeJson` check greedily identifies such strings as JSON. If the string is not
valid JSON (e.g., contains an unescaped quote like `"It's a bug"`), the subsequent
parsing and fixing logic fails. Instead of falling back to treating the input as a plain
string, the function returns an empty message array, causing the content to be silently
dropped. This affects realistic inputs like quoted sentences or markdown headers.