Streaming responses bypass RequestLoggingMiddleware (no entries in tx_aim_request_log for streamed chats) #7

@dkd-dobberkau

Description

Summary

When using Ai::conversationStream(), no entries are written to tx_aim_request_log. The RequestLoggingMiddleware (priority -700) is never invoked for the streaming path, so chat-style integrations have no audit trail, no cost tracking, and the dashboard widgets show "No requests logged" even though Anthropic/OpenAI calls are happening.

Why it happens

Classes/Ai.php:193 documents the intended behavior:

Note: streaming bypasses the middleware pipeline for the response path.
The request is still resolved and validated, but logging and cost
tracking happen after the stream completes via the StreamChunkIterator's
onComplete callback.

But the onComplete callback is never wired. In Classes/Provider/SymfonyAi/SymfonyAiPlatformAdapter.php:162-165:

$streamIterator = new StreamChunkIterator(
    $result->asStream(),
    $request->configuration,
);   // third constructor argument $onComplete is omitted

StreamChunkIterator declares the parameter (Classes/Response/StreamChunkIterator.php:38):

public function __construct(
    private readonly \Generator $generator,
    private readonly ProviderConfiguration $configuration,
    private readonly ?\Closure $onComplete = null,
) {}

…and triggers it on completion (lines 64-65), but since the adapter never passes a callback, nothing fires.
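The completion mechanics can be shown in a stripped-down, runnable sketch (class and variable names below are invented stand-ins, not the extension's real API): the callback only fires after the wrapped generator is exhausted, so omitting the third constructor argument means completion passes silently.

```php
<?php
declare(strict_types=1);

// Simplified stand-in for StreamChunkIterator: the real class wraps the
// provider stream, but the completion mechanics are the same — the
// callback fires only after the generator is exhausted, and a null
// callback means completion is a no-op.
final class StreamChunkIteratorSketch implements \IteratorAggregate
{
    public function __construct(
        private readonly \Generator $generator,
        private readonly ?\Closure $onComplete = null,
    ) {}

    public function getIterator(): \Generator
    {
        $chunks = [];
        foreach ($this->generator as $chunk) {
            $chunks[] = $chunk;
            yield $chunk;
        }
        // Runs once the stream is exhausted — or not at all when null.
        ($this->onComplete ?? static fn () => null)($chunks);
    }
}

$makeStream = static function (): \Generator {
    yield 'Hel';
    yield 'lo';
};

// Adapter as shipped: no third argument, so no completion hook runs.
$silent = new StreamChunkIteratorSketch($makeStream());
$out = '';
foreach ($silent as $chunk) { $out .= $chunk; }

// With a callback, the hook observes the full stream after exhaustion.
$logged = null;
$wired = new StreamChunkIteratorSketch(
    $makeStream(),
    static function (array $chunks) use (&$logged): void {
        $logged = implode('', $chunks);
    },
);
foreach ($wired as $chunk) {}
```

Streaming content reaches the client either way ($out is complete), which is why the bug is invisible in the UI and only shows up in the log table.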

Reproduction

  1. Configure a provider (e.g. Anthropic Sonnet) and mark it as default.
  2. Call Ai::conversationStream(...) from any extension (e.g. via a chat UI/SSE).
  3. Confirm the request reaches the provider and content streams back to the client.
  4. Inspect tx_aim_request_log — empty.
  5. The "AI Request Log" backend module shows "No requests logged".

The non-streaming path (Ai::chat(), Ai::conversation()) logs correctly because the middleware pipeline runs synchronously and RequestLoggingMiddleware captures the response in its finally block.
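As a rough illustration of why the synchronous path cannot miss a request (the names here are hypothetical, not the extension's actual middleware signature): the finally block runs only after the handler has fully produced its response, a guarantee a lazily-consumed stream never gives, since the iterator escapes the middleware's scope before the first chunk is produced.

```php
<?php
declare(strict_types=1);

// Hypothetical simplification of the synchronous logging pattern
// (invented names): because $next() returns a finished response before
// handle() exits, the finally block always sees it — whether the
// handler succeeded or threw.
final class LoggingMiddlewareSketch
{
    /** @var array<int, array{prompt: string, response: ?string}> */
    public array $log = [];

    public function handle(string $prompt, \Closure $next): string
    {
        $response = null;
        try {
            $response = $next($prompt);
            return $response;
        } finally {
            // For a streamed response, nothing has been produced yet
            // at this point — which is exactly the reported gap.
            $this->log[] = ['prompt' => $prompt, 'response' => $response];
        }
    }
}

$middleware = new LoggingMiddlewareSketch();
$answer = $middleware->handle('ping', static fn (string $p): string => $p . ' pong');
```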

Impact

  • No usage/cost analytics for streaming workloads (which are likely the majority of chat use cases).
  • Rate limiting via AccessControlMiddleware->countRecentRequestsByUser() is bypassed for streamed requests.
  • Privacy-level enforcement and budget caps don't apply to streaming.

Suggested direction

Wire an onComplete callback in SymfonyAiPlatformAdapter::processConversationRequest() (and in any other streaming-capable adapter) that builds the same payload RequestLoggingMiddleware::logRequest() builds, and calls RequestLogRepository::log() once the stream is exhausted. The cost and budget middlewares would need similar deferred hooks.
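A runnable sketch of that wiring, with stand-ins for the real classes (the repository stub, the payload shape, and the callback signature are all assumptions — the real payload should mirror whatever RequestLoggingMiddleware::logRequest() persists):

```php
<?php
declare(strict_types=1);

// Stand-in for RequestLogRepository (hypothetical; real API may differ).
final class RequestLogRepositoryStub
{
    /** @var array<int, array<string, mixed>> */
    public array $rows = [];

    public function log(array $payload): void
    {
        $this->rows[] = $payload;
    }
}

// Stand-in for the adapter's stream wrapping: collect chunks while
// yielding them, then invoke the deferred logging callback.
function makeStreamIterator(\Generator $stream, \Closure $onComplete): \Generator
{
    $chunks = [];
    foreach ($stream as $chunk) {
        $chunks[] = $chunk;
        yield $chunk;
    }
    $onComplete($chunks);  // deferred logging, after the last chunk
}

$repository = new RequestLogRepositoryStub();

$stream = (static function (): \Generator {
    yield 'chunk-1';
    yield 'chunk-2';
})();

$iterator = makeStreamIterator(
    $stream,
    static function (array $chunks) use ($repository): void {
        // Build the same payload the synchronous middleware would build
        // (fields here are illustrative placeholders).
        $repository->log([
            'response'    => implode('', $chunks),
            'chunk_count' => count($chunks),
        ]);
    },
);

foreach ($iterator as $chunk) {}  // client consumes the stream
```

One caveat worth deciding up front: the callback only fires when the stream is fully consumed, so a client that aborts mid-stream would still skip logging unless the adapter adds a destructor or try/finally guard around the generator.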

Happy to open a PR if a sketch is welcome.

Environment

  • b13/aim 0.1.0 (8be5fc8b)
  • TYPO3 v13.4.28
  • PHP 8.3
  • Provider: symfony/ai-anthropic-platform 0.7.x via Anthropic Sonnet 4.5
