
A2A adapter follow-ups post #2389 (arch hygiene + testability) #2391

@malinskibeniamin

Description


Context

Follow-up to #2389 (A2A + AI SDK v6 + streamdown v2 + AI Elements re-sync). The architectural analysis done on that PR surfaced several improvements that are out of scope for the bump itself but should be addressed in a dedicated pass. Grouped by effort; each item has file:line references.

Critical-path hygiene (target first)

1. Extract pure stream-mapper from doStream

Files: frontend/src/components/pages/agents/details/a2a/a2a-chat-language-model.tsx

doStream (≈120 lines around L260–378) mixes transport setup, capability detection, non-streaming fallback, first-chunk metadata, and event-to-v2-stream-part mapping in one TransformStream. Extract a pure a2aEventToV2StreamParts(event, state): { parts: LanguageModelV2StreamPart[]; state } into a2a-stream-mapper.ts. Zero behavior change; unlocks unit tests on every TaskStatusUpdateEvent.status.state branch.
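A minimal sketch of the proposed pure mapper, assuming simplified stand-in types (the real TaskStatusUpdateEvent and LanguageModelV2StreamPart unions are larger):

```typescript
// Stand-in event/part shapes — NOT the real @a2a-js/sdk or AI SDK v6 types.
type A2AEvent =
  | { kind: 'status-update'; state: 'working' | 'completed' | 'failed' }
  | { kind: 'text-delta'; text: string };

type StreamPart =
  | { type: 'text-delta'; delta: string }
  | { type: 'finish'; reason: 'stop' | 'error' };

interface MapperState {
  finished: boolean;
}

// Pure reducer: no transport, no TransformStream — each status.state branch
// becomes a plain unit test.
function a2aEventToV2StreamParts(
  event: A2AEvent,
  state: MapperState,
): { parts: StreamPart[]; state: MapperState } {
  if (state.finished) return { parts: [], state };
  switch (event.kind) {
    case 'text-delta':
      return { parts: [{ type: 'text-delta', delta: event.text }], state };
    case 'status-update':
      if (event.state === 'completed')
        return { parts: [{ type: 'finish', reason: 'stop' }], state: { finished: true } };
      if (event.state === 'failed')
        return { parts: [{ type: 'finish', reason: 'error' }], state: { finished: true } };
      return { parts: [], state };
  }
}
```

The TransformStream in doStream then shrinks to a loop that feeds events through this function and enqueues the returned parts.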

2. Finish or remove simulatedStream non-streaming branch

Files: a2a-chat-language-model.tsx:291-306

When clientCard.capabilities.streaming === false the adapter builds a simulatedStream but the error branch is a bare // FIXME: error and the subsequent sendMessageStream(streamParams) re-sends the request unconditionally. Decide: either complete the non-streaming path (error + skip real stream) or delete the simulatedStream block.
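If we complete the path rather than delete it, the fallback would look roughly like this — a hypothetical sketch (sendMessage and streamParams are illustrative names, not the adapter's exact API):

```typescript
// Completed non-streaming branch: surface errors as a stream part and return
// early so the real sendMessageStream is never also called on this path.
async function doStreamFallback(
  sendMessage: (p: unknown) => Promise<{ text: string }>,
  streamParams: unknown,
) {
  const stream = new ReadableStream<{ type: string; delta?: string; error?: unknown }>({
    async start(controller) {
      try {
        const result = await sendMessage(streamParams);
        controller.enqueue({ type: 'text-delta', delta: result.text });
        controller.enqueue({ type: 'finish' });
      } catch (error) {
        // Previously a bare `// FIXME: error` — now surfaced to the consumer.
        controller.enqueue({ type: 'error', error });
      } finally {
        controller.close();
      }
    },
  });
  return { stream }; // early return: skips the unconditional re-send
}
```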

3. Split StreamingState and narrow activeTextBlock type

Files: frontend/src/components/pages/agents/details/a2a/chat/hooks/streaming-types.ts:75-122, event-handlers.ts:375-394

StreamingState has six active fields + three @deprecated legacy fields; activeTextBlock is typed ContentBlock | null but actually stores artifact blocks too, requiring a type guard in every consumer. Delete the three deprecated fields and narrow activeTextBlock to Extract<ContentBlock, { type: 'artifact' }> | null (+ rename to activeStreamingBlock).
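The narrowing itself is mechanical; a sketch with a simplified ContentBlock union standing in for the real one in streaming-types.ts:

```typescript
// Simplified stand-in for the real ContentBlock union.
type ContentBlock =
  | { type: 'text'; text: string }
  | { type: 'artifact'; artifactId: string; parts: string[] };

// After the rename: only artifact blocks can be "active", so every consumer
// drops its `block.type === 'artifact'` guard.
type ActiveStreamingBlock = Extract<ContentBlock, { type: 'artifact' }> | null;

const active: ActiveStreamingBlock = { type: 'artifact', artifactId: 'a1', parts: [] };
// artifactId is accessible with only a null check, no type guard:
const id = active ? active.artifactId : null;
```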

Testability / isolation

4. Extract parseA2AError into its own module

Files: frontend/src/components/pages/agents/details/a2a/chat/hooks/use-message-streaming.ts:49-97

parseA2AError reverse-engineers JSON-RPC errors out of Error.message strings using 5 regexes because @a2a-js/sdk drops structured code/message/data into text. Move to chat/utils/parse-a2a-error.ts with table-driven tests. Also file an upstream @a2a-js/sdk issue asking for structured errors.
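An illustrative table-driven shape for the extracted module — the single regex below is an example only; the real function's five patterns live at the referenced lines:

```typescript
interface ParsedA2AError {
  code: number;
  message: string;
}

// Each entry pairs a pattern with a parser; tests iterate a fixture table of
// (raw message, expected result) rows against parseA2AError.
const A2A_ERROR_PATTERNS: Array<{
  pattern: RegExp;
  parse: (m: RegExpMatchArray) => ParsedA2AError;
}> = [
  {
    // e.g. "JSON-RPC error -32601: Method not found" (illustrative format)
    pattern: /JSON-RPC error (-?\d+): (.+)/,
    parse: (m) => ({ code: Number(m[1]), message: m[2] }),
  },
];

function parseA2AError(raw: string): ParsedA2AError | null {
  for (const { pattern, parse } of A2A_ERROR_PATTERNS) {
    const match = raw.match(pattern);
    if (match) return parse(match);
  }
  return null;
}
```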

5. Hoist createAuthenticatedFetch

Files: a2a-chat-language-model.tsx:202-212 and :265-273, chat/utils/a2a-client.ts:21-30

Three copies of the same fetch-with-auth-header closure; the streaming path adds X-Redpanda-Stream-Tokens: true. Hoist to createAuthenticatedFetch(jwt, extraHeaders?) in a2a-client.ts.
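The exact signature is an assumption, but the hoisted helper would be roughly:

```typescript
// Single authenticated-fetch factory replacing the three inline closures.
function createAuthenticatedFetch(
  jwt: string,
  extraHeaders?: Record<string, string>,
): typeof fetch {
  return (input, init) => {
    const headers = new Headers(init?.headers);
    headers.set('Authorization', `Bearer ${jwt}`);
    for (const [k, v] of Object.entries(extraHeaders ?? {})) headers.set(k, v);
    return fetch(input, { ...init, headers });
  };
}

// Streaming call site becomes:
// createAuthenticatedFetch(jwt, { 'X-Redpanda-Stream-Tokens': 'true' })
```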

Type-safety and runtime robustness

6. Replace Buffer with Uint8Array in browser code

Files: a2a-chat-language-model.tsx:414, 483

Buffer.from(...) relies on a Node Buffer polyfill in the browser bundle. LanguageModelV2File.data accepts Uint8Array directly, so no polyfill is needed. A bundler regression would currently surface at runtime as Buffer is not defined.
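Assuming the Buffer.from calls are decoding base64 file payloads, the polyfill-free replacement is a few lines of browser-native code:

```typescript
// Decode base64 to Uint8Array without Node's Buffer — atob is browser-native
// (and also available in modern Node), so no bundler shim is involved.
function base64ToUint8Array(base64: string): Uint8Array {
  const binary = atob(base64);
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) bytes[i] = binary.charCodeAt(i);
  return bytes;
}
```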

7. Add assertNever on stream-chunk switch

Files: use-message-streaming.ts, a2a-chat-language-model.tsx TransformStream.transform

v6 added source, reasoning-delta, tool-input-start/delta/end stream parts; our outer hook's switch doesn't handle them and silently falls through. Add default: assertNever(chunk) so new v6 chunk kinds fail loudly.
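The helper plus an exhaustive switch, sketched over a simplified chunk union (the real v6 union is much larger):

```typescript
type Chunk =
  | { type: 'text-delta'; delta: string }
  | { type: 'reasoning-delta'; delta: string }
  | { type: 'finish' };

// Compile-time exhaustiveness check + loud runtime failure.
function assertNever(value: never): never {
  throw new Error(`Unhandled stream chunk: ${JSON.stringify(value)}`);
}

function handleChunk(chunk: Chunk): string {
  switch (chunk.type) {
    case 'text-delta':
      return chunk.delta;
    case 'reasoning-delta':
      return ''; // reasoning omitted from plain output in this sketch
    case 'finish':
      return '';
    default:
      // Fails to compile as soon as a new chunk kind joins the union,
      // and throws loudly if an unknown chunk arrives at runtime.
      return assertNever(chunk);
  }
}
```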

8. Throw UnsupportedFunctionalityError on data parts

Files: a2a-chat-language-model.tsx:426-428 (convertProviderPartToContent)

Currently silently drops part.kind === 'data' in doGenerate. Either throw UnsupportedFunctionalityError to make the gap explicit or delete the whole doGenerate method (unused; streamText drives everything).
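If we keep doGenerate, the explicit-failure variant would look like this; a local error class stands in for the AI SDK's UnsupportedFunctionalityError to keep the sketch self-contained, and the part shapes are simplified:

```typescript
// Stand-in for the AI SDK's UnsupportedFunctionalityError.
class UnsupportedFunctionalityError extends Error {
  constructor(readonly functionality: string) {
    super(`Unsupported functionality: ${functionality}`);
  }
}

type A2APart =
  | { kind: 'text'; text: string }
  | { kind: 'data'; data: unknown };

function convertProviderPartToContent(part: A2APart): string {
  switch (part.kind) {
    case 'text':
      return part.text;
    case 'data':
      // Previously: silently dropped. Now the gap is explicit.
      throw new UnsupportedFunctionalityError("A2A 'data' parts in doGenerate");
  }
}
```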

Provider-shape refactor (biggest lever, separate PR)

9. Move A2A adapter to transport + pure mapper + thin glue

Per interface-design analysis on #2389: separate A2aTransport (SSE/JSON-RPC plumbing, fake-able in tests) from a2aEventToV2StreamParts (pure reducer, trivially testable) and wire them through a ~8-line LanguageModelV2 object. Cheapest path to V3 forward-compat; no call-site churn beyond a2a-provider.ts:40. Depends on #1 landing first.
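A rough sketch of the split, with illustrative stand-in types (the real glue would re-emit a ReadableStream of LanguageModelV2StreamPart; this sketch collects into an array for brevity):

```typescript
// Transport: SSE/JSON-RPC plumbing only — trivially replaced with a fake.
interface A2aTransport {
  stream(params: unknown): AsyncIterable<unknown>;
}

// Thin glue wiring transport into the pure mapper — roughly the "~8-line
// LanguageModelV2 object" described above.
function createA2aModel(
  transport: A2aTransport,
  mapEvent: (event: unknown, state: unknown) => { parts: unknown[]; state: unknown },
) {
  return {
    async doStream(params: unknown) {
      let state: unknown = {};
      const parts: unknown[] = [];
      for await (const event of transport.stream(params)) {
        const result = mapEvent(event, state);
        parts.push(...result.parts);
        state = result.state;
      }
      return parts;
    },
  };
}
```

Tests then exercise the mapper directly and the glue with a fake transport, never a live SSE connection.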

Out of scope / won't do here

  • @modelcontextprotocol/sdk bump — its own PR, not mixed with this work.
  • prompt-input.tsx re-sync — heavy divergence, needs its own focused pass.
  • Adopting upstream ai-elements Attachments, ChainOfThought, Reasoning, PromptInputActionAddScreenshot — do when the AI Agents feature actually wires them.

Suggested order

  1. Merge #2389 (chore(frontend): bump A2A, AI SDK v6, streamdown v2; re-sync AI Elements) first.
  2. Tackle #1 and #3 together (both in the adapter / streaming-state area) with a new test file covering the extracted mapper.
  3. Then #2, #7, #8 as a single "adapter correctness" PR.
  4. Then #4, #5, #6 as "adapter hygiene".
  5. Finally #9 once the safety net from #1 is in place.
