Context
Follow-up to #2389 (A2A + AI SDK v6 + streamdown v2 + AI Elements re-sync). The architectural analysis done on that PR surfaced several improvements that are out of scope for the bump itself but should be addressed in a dedicated pass. Grouped by effort; each item has file:line references.
Critical-path hygiene (target first)
1. Extract pure stream-mapper from `doStream`
Files: `frontend/src/components/pages/agents/details/a2a/a2a-chat-language-model.tsx`
`doStream` (≈120 lines around L260–378) mixes transport setup, capability detection, non-streaming fallback, first-chunk metadata, and event-to-v2-stream-part mapping in one `TransformStream`. Extract a pure `a2aEventToV2StreamParts(event, state): { parts: LanguageModelV2StreamPart[]; state }` into `a2a-stream-mapper.ts`. Zero behavior change; unlocks unit tests on every `TaskStatusUpdateEvent.status.state` branch.
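A minimal sketch of the proposed pure reducer. The event, part, and state shapes below are placeholders standing in for the real `@a2a-js/sdk` and AI SDK types; only the pattern is the point.

```typescript
// Placeholder shapes -- the real types come from @a2a-js/sdk and
// @ai-sdk/provider; only the pure-reducer pattern is the point.
type A2AEvent =
  | { kind: 'status-update'; status: { state: 'working' | 'completed' | 'failed' } }
  | { kind: 'text-delta'; text: string };

type StreamPart =
  | { type: 'text-delta'; delta: string }
  | { type: 'finish'; reason: 'stop' | 'error' };

interface MapperState {
  finished: boolean;
}

// Pure function: no I/O, no TransformStream, so every status branch
// can be unit-tested with plain object literals.
function a2aEventToV2StreamParts(
  event: A2AEvent,
  state: MapperState,
): { parts: StreamPart[]; state: MapperState } {
  switch (event.kind) {
    case 'text-delta':
      return { parts: [{ type: 'text-delta', delta: event.text }], state };
    case 'status-update':
      switch (event.status.state) {
        case 'completed':
          return { parts: [{ type: 'finish', reason: 'stop' }], state: { finished: true } };
        case 'failed':
          return { parts: [{ type: 'finish', reason: 'error' }], state: { finished: true } };
        case 'working':
          return { parts: [], state };
      }
  }
}
```

The `doStream` transform then shrinks to "feed each event through the reducer and enqueue the returned parts".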
2. Finish or remove `simulatedStream` non-streaming branch
Files: `a2a-chat-language-model.tsx:291-306`
When `clientCard.capabilities.streaming === false` the adapter builds a `simulatedStream`, but the error branch is a bare `// FIXME: error` and the subsequent `sendMessageStream(streamParams)` re-sends the request unconditionally. Decide: either complete the non-streaming path (surface the error and skip the real stream) or delete the `simulatedStream` block.
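If the non-streaming path is completed rather than deleted, it could look like the following sketch. The `send` callback and `Part` union are stand-ins, not the adapter's real signatures:

```typescript
// Hypothetical Part union and send callback; the real adapter would use
// the AI SDK stream-part types and the A2A client's single-shot send.
type Part = { type: 'text-delta'; delta: string } | { type: 'error'; error: unknown };

async function* simulatedStream(
  send: () => Promise<{ text: string }>,
): AsyncGenerator<Part> {
  try {
    const result = await send();
    yield { type: 'text-delta', delta: result.text };
  } catch (error) {
    // The bare `// FIXME: error` branch becomes an explicit error part...
    yield { type: 'error', error };
  }
  // ...and the generator ends here, so the real streaming request is
  // never sent on the non-streaming path.
}
```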
3. Split `StreamingState` and narrow `activeTextBlock` type
Files: `frontend/src/components/pages/agents/details/a2a/chat/hooks/streaming-types.ts:75-122`, `event-handlers.ts:375-394`
`StreamingState` has six active fields plus three `@deprecated` legacy fields; `activeTextBlock` is typed `ContentBlock | null` but actually stores artifact blocks too, requiring a type guard in every consumer. Delete the three deprecated fields and narrow `activeTextBlock` to `Extract<ContentBlock, { type: 'artifact' }> | null` (and rename it to `activeStreamingBlock`).
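A sketch of the narrowed state, with a hypothetical `ContentBlock` union standing in for the real one in `streaming-types.ts`:

```typescript
// Hypothetical ContentBlock union; the narrowing is what the item proposes.
type ContentBlock =
  | { type: 'text'; text: string }
  | { type: 'artifact'; artifactId: string; chunks: string[] };

interface StreamingState {
  // Only artifact blocks are ever stored here, so encoding that in the
  // type removes the per-consumer type guards.
  activeStreamingBlock: Extract<ContentBlock, { type: 'artifact' }> | null;
}

const state: StreamingState = {
  activeStreamingBlock: { type: 'artifact', artifactId: 'a1', chunks: [] },
};

// Consumers can now touch artifact fields without a runtime guard.
const id = state.activeStreamingBlock?.artifactId ?? null;
```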
Testability / isolation
4. Extract `parseA2AError` into its own module
Files: `frontend/src/components/pages/agents/details/a2a/chat/hooks/use-message-streaming.ts:49-97`
`parseA2AError` reverse-engineers JSON-RPC errors out of `Error.message` strings using 5 regexes because `@a2a-js/sdk` drops structured `code`/`message`/`data` into text. Move it to `chat/utils/parse-a2a-error.ts` with table-driven tests. Also file an upstream `@a2a-js/sdk` issue asking for structured errors.
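The table-driven shape could look like this sketch. The single pattern shown is illustrative, not one of the five real regexes; each table entry pairs naturally with a test case:

```typescript
// Sketch of a possible chat/utils/parse-a2a-error.ts shape.
interface ParsedA2AError {
  code: number;
  message: string;
}

const ERROR_PATTERNS: Array<{
  pattern: RegExp;
  toError: (match: RegExpMatchArray) => ParsedA2AError;
}> = [
  {
    // e.g. "JSON-RPC error -32601: Method not found" (illustrative format)
    pattern: /JSON-RPC error (-?\d+): (.+)/,
    toError: (match) => ({ code: Number(match[1]), message: match[2] }),
  },
  // ...the remaining regexes become further table entries.
];

function parseA2AError(raw: string): ParsedA2AError | null {
  for (const { pattern, toError } of ERROR_PATTERNS) {
    const match = raw.match(pattern);
    if (match) {
      return toError(match);
    }
  }
  return null;
}
```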
5. Hoist `createAuthenticatedFetch`
Files: `a2a-chat-language-model.tsx:202-212` and `:265-273`, `chat/utils/a2a-client.ts:21-30`
Three copies of the same fetch-with-auth-header closure; the streaming path additionally sends `X-Redpanda-Stream-Tokens: true`. Hoist to `createAuthenticatedFetch(jwt, extraHeaders?)` in `a2a-client.ts`.
Type-safety and runtime robustness
6. Replace `Buffer` with `Uint8Array` in browser code
Files: `a2a-chat-language-model.tsx:414, 483`
`Buffer.from(...)` relies on a Node polyfill shim. `LanguageModelV2File.data` accepts `Uint8Array` directly, so no polyfill is needed. A bundler regression would currently produce a runtime `Buffer is not defined` error.
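A browser-safe replacement, assuming the call sites are decoding base64 payloads (the `Buffer.from(data, 'base64')` case); `atob` is available in browsers and in Node 16+:

```typescript
// Drop-in replacement for Buffer.from(base64, 'base64') that needs no
// polyfill: decode via atob, then copy char codes into a Uint8Array.
function base64ToUint8Array(base64: string): Uint8Array {
  const binary = atob(base64);
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }
  return bytes;
}
```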
7. Add `assertNever` on stream-chunk switch
Files: `use-message-streaming.ts`, `a2a-chat-language-model.tsx` (`TransformStream.transform`)
AI SDK v6 added `source`, `reasoning-delta`, and `tool-input-start/delta/end` stream parts; our outer hook's `switch` doesn't handle them and silently falls through. Add `default: assertNever(chunk)` so new v6 chunk kinds fail loudly.
8. Throw `UnsupportedFunctionalityError` on `data` parts
Files: `a2a-chat-language-model.tsx:426-428` (`convertProviderPartToContent`)
Currently silently drops `part.kind === 'data'` in `doGenerate`. Either throw `UnsupportedFunctionalityError` to make the gap explicit or delete the whole `doGenerate` method (unused; `streamText` drives everything).
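Whether the fix is throwing `UnsupportedFunctionalityError` or an exhaustive `switch`, a small `assertNever` helper turns silently dropped union members into compile-time and runtime failures. A generic sketch with a hypothetical `Part` union:

```typescript
// Exhaustiveness helper: the `never` parameter makes the compiler reject
// the call whenever a union member is left unhandled, and the throw
// surfaces unexpected runtime values instead of silently dropping them.
function assertNever(value: never): never {
  throw new Error(`Unhandled case: ${JSON.stringify(value)}`);
}

// Hypothetical part union for illustration.
type Part = { kind: 'text'; text: string } | { kind: 'file'; uri: string };

function describePart(part: Part): string {
  switch (part.kind) {
    case 'text':
      return part.text;
    case 'file':
      return part.uri;
    default:
      // Adding a new kind to Part turns this line into a compile error.
      return assertNever(part);
  }
}
```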
Provider-shape refactor (biggest lever, separate PR)
9. Move A2A adapter to transport + pure mapper + thin glue
Per the interface-design analysis on #2389: separate `A2aTransport` (SSE/JSON-RPC plumbing, fake-able in tests) from `a2aEventToV2StreamParts` (pure reducer, trivially testable) and wire them through a ~8-line `LanguageModelV2` object. Cheapest path to V3 forward-compat; no call-site churn beyond `a2a-provider.ts:40`. Depends on #1 landing first.
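A wiring sketch of the proposed split. The interface names mirror the proposal, but every shape here is an illustrative assumption, with a trivial inline stand-in for the pure mapper:

```typescript
// SSE/JSON-RPC plumbing lives behind this interface, so tests can fake it.
interface A2aTransport {
  stream(prompt: string): AsyncIterable<{ text: string }>;
}

type StreamPart = { type: 'text-delta'; delta: string };

// Trivial stand-in for the pure mapper extracted in item 1.
function mapEvent(event: { text: string }): StreamPart[] {
  return [{ type: 'text-delta', delta: event.text }];
}

// The thin glue: transport in, mapped stream parts out. The real version
// would sit inside a small LanguageModelV2 object.
async function* doStream(
  transport: A2aTransport,
  prompt: string,
): AsyncGenerator<StreamPart> {
  for await (const event of transport.stream(prompt)) {
    yield* mapEvent(event);
  }
}

// A fake transport exercises the glue without any SSE.
const fakeTransport: A2aTransport = {
  async *stream() {
    yield { text: 'hello' };
  },
};
```

Because the glue holds no protocol or mapping logic, swapping the model object for a future V3 shape touches only this thin layer.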
Out of scope / won't do here
@modelcontextprotocol/sdk bump — its own PR, not mixed with this work.
prompt-input.tsx re-sync — heavy divergence, needs its own focused pass.
Adopting upstream ai-elements Attachments, ChainOfThought, Reasoning, PromptInputActionAddScreenshot — do when the AI Agents feature actually wires them.