
Python: [Bug]: AzureAIClient.configure_azure_monitor does not preserve a single Foundry trace #4855

@MatteoSava

Description


In agent-framework-core==1.0.0rc5 and agent-framework-azure-ai==1.0.0rc5, the default Azure AI observability setup does not produce end-to-end trace continuity in Azure AI Foundry.

This is not a regression new in rc5: we first observed and reproduced it during an internal evaluation at my company in December 2025, and the same behavior is still present in python-1.0.0rc5, released on March 20, 2026.

The user-visible problem is that a single logical agent run that triggers multiple Responses API calls is split into multiple unrelated trace IDs in Azure AI Foundry instead of appearing as one coherent trace tree.

That makes Foundry observability effectively broken for this path:

  • one run does not appear as one trace
  • each Responses call looks like an independent server-side trace
  • debugging a single run across multiple model calls becomes much harder

This issue is specifically about the default non-streaming Azure AI path. Streaming appears to have a separate root cause.

Code Sample

Error Messages / Stack Traces

Package Versions

agent-framework-core: 1.0.0rc5, agent-framework-azure-ai: 1.0.0rc5

Python Version

Python 3.12

Additional Context

Likely root cause

AzureAIClient.configure_azure_monitor() instruments Azure Monitor and the Agent Framework's own spans, but it does not instrument the httpx transport that the OpenAI Python SDK uses internally. The relevant rc5 code paths are:

  • python/packages/azure-ai/agent_framework_azure_ai/_client.py#L242-L328
  • python/packages/azure-ai/agent_framework_azure_ai/_client.py#L683-L685

The effective flow today is:

  1. configure_azure_monitor() calls azure.monitor.opentelemetry.configure_azure_monitor(…)
  2. It then calls agent_framework.observability.enable_instrumentation(…)
  3. RawAzureAIClient._initialize_client() later creates the OpenAI client via self.project_client.get_openai_client()
  4. That client uses httpx.AsyncClient under the hood
  5. Because httpx is uninstrumented, outbound Responses requests never propagate the active trace context to Foundry

This is why local/in-process spans may still appear coherent while Foundry server-side traces are fragmented across separate trace IDs.

How we worked around it in our codebase

In our codebase, we restored non-streaming Foundry trace continuity by explicitly instrumenting httpx after the normal MAF / Azure Monitor setup.

Our workaround is effectively:

from opentelemetry.instrumentation.httpx import HTTPXClientInstrumentor

# Run the supported observability setup first
await client.configure_azure_monitor(...)

# Then explicitly instrument the httpx transport used by the OpenAI SDK,
# so outbound Responses requests carry the active trace context
HTTPXClientInstrumentor().instrument()

This workaround solved the trace fragmentation for the non-streaming path in our repo, but it should not be required from every consumer when AzureAIClient.configure_azure_monitor() is presented as the supported observability setup path.
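If the framework wanted to absorb this, one possible shape (our own sketch, not anything present in the rc5 codebase) is an optional-dependency guard that `configure_azure_monitor()` could call internally: instrument httpx when `opentelemetry-instrumentation-httpx` is importable, and do nothing otherwise. The helper name and the module-level idempotency flag are hypothetical:

```python
_httpx_instrumented = False  # hypothetical idempotency guard

def instrument_httpx_if_available() -> bool:
    """Instrument httpx once, if the optional instrumentation package is installed.

    Returns True when httpx instrumentation is active after the call, and
    False when opentelemetry-instrumentation-httpx is not importable.
    """
    global _httpx_instrumented
    if _httpx_instrumented:
        return True
    try:
        from opentelemetry.instrumentation.httpx import HTTPXClientInstrumentor
    except ImportError:
        return False  # optional dependency absent: skip silently
    HTTPXClientInstrumentor().instrument()
    _httpx_instrumented = True
    return True
```

Repeated calls are safe because of the guard flag, which matters if both the framework and a consumer end up calling it.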

Metadata


Assignees

Labels

  • bug (Something isn't working)
  • python
  • triage
  • v1.0 (Features being tracked for the version 1.0 GA)

Type

Projects

Status

In Progress

Milestone

No milestone

Relationships

None yet

Development

No branches or pull requests
