A provider module that integrates Azure OpenAI Service with the Amplifier AI agent platform.
This module enables Amplifier to use Azure OpenAI Service deployments for language model reasoning via the Responses API. Requests are routed through your Azure endpoint with Azure-specific authentication and deployment configuration.
- Azure OpenAI Service Integration: Connect to your Azure-hosted OpenAI deployments
- Responses API Compatibility: Routes requests through Azure's Responses API endpoint
- Deployment Name Mapping: Map model names to Azure deployment names
- Multiple Authentication Methods: Support for API keys, Azure AD tokens, and Managed Identity
- Tool Calling Support: Full support for function calling/tools
- Managed Identity Support: Seamless authentication in Azure environments
The provider recognizes Responses API `function_call` / `tool_call` payloads, decodes any JSON-encoded arguments, and forwards standard `ToolCall` objects to Amplifier. No additional configuration is needed: tools declared in your configuration or profiles run as soon as the model requests them.
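As a rough illustration of that decoding step (the real `ToolCall` type and field names live in Amplifier's core library; the shapes below are stand-ins, not the provider's actual code):

```python
import json
from dataclasses import dataclass


@dataclass
class ToolCall:
    """Stand-in for Amplifier's ToolCall type (hypothetical field names)."""
    id: str
    name: str
    arguments: dict


def decode_function_call(item: dict) -> ToolCall:
    """Decode a Responses API function_call item into a ToolCall."""
    args = item.get("arguments", "{}")
    # The Responses API delivers arguments as a JSON-encoded string;
    # decode it into a dict before handing it to the tool layer.
    if isinstance(args, str):
        args = json.loads(args)
    return ToolCall(id=item["call_id"], name=item["name"], arguments=args)
```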
This provider inherits graceful degradation from the OpenAI base provider:
- Automatic repair of missing tool results in conversation history
- Visible failures via `[SYSTEM ERROR]` messages in synthetic results
- Session continuity even when context bugs occur
- Observability via `provider:tool_sequence_repaired` events
See OpenAI provider documentation for detailed explanation of the graceful degradation pattern.
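For a sense of what a repaired turn looks like, a synthetic tool result might resemble the following (illustrative only; the exact message shape is defined by the OpenAI base provider):

```python
# Hypothetical shape of a synthetic tool result injected during repair.
synthetic_result = {
    "type": "function_call_output",
    "call_id": "call_abc123",  # the orphaned tool call being repaired
    "output": "[SYSTEM ERROR] Tool result missing from conversation history; "
    "a synthetic result was inserted so the session can continue.",
}
```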
- Python 3.11+
- UV - Fast Python package manager
# macOS/Linux/WSL
curl -LsSf https://astral.sh/uv/install.sh | sh
# Windows
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"

Install the module with uv pip:
uv pip install -e amplifier-module-provider-azure-openai

For Managed Identity authentication support, install with the azure extra:
uv pip install -e "amplifier-module-provider-azure-openai[azure]"

Or add it to your Amplifier configuration for automatic installation.
The simplest way to configure the provider is with environment variables.
Use DefaultAzureCredential with Azure CLI:
# Login to Azure
az login
# Set up endpoint
export AZURE_OPENAI_ENDPOINT="https://myresource.openai.azure.com"
export AZURE_USE_DEFAULT_CREDENTIAL="true"
# Optional: Configure API version and defaults
export AZURE_OPENAI_API_VERSION="2024-10-01-preview"
export AZURE_OPENAI_DEFAULT_MODEL="gpt-5.1"
# Run amplifier - no config file needed!
amplifier run

# Set up authentication
export AZURE_OPENAI_ENDPOINT="https://myresource.openai.azure.com"
export AZURE_OPENAI_API_KEY="your-api-key-here"
# Optional: Configure API version and defaults
export AZURE_OPENAI_API_VERSION="2024-10-01-preview"
export AZURE_OPENAI_DEFAULT_MODEL="gpt-5.1"
# Run amplifier
amplifier run

For managed identity in Azure environments (VMs, App Service, etc.):
export AZURE_OPENAI_ENDPOINT="https://myresource.openai.azure.com"
export AZURE_USE_DEFAULT_CREDENTIAL="true"  # Will use managed identity when available

Basic setup with API key authentication:
[[providers]]
name = "azure-openai"
[providers.config]
azure_endpoint = "https://myresource.openai.azure.com"
api_key = "your-api-key-here"
default_model = "gpt-5.1"Full configuration with deployment mapping:
[[providers]]
name = "azure-openai"
[providers.config]
# Required: Your Azure OpenAI resource endpoint
azure_endpoint = "https://myresource.openai.azure.com"
# Authentication (use one of these, in order of priority)
api_key = "your-api-key-here" # Option 1: API Key
# azure_ad_token = "your-azure-ad-token" # Option 2: Azure AD Token
# use_managed_identity = true # Option 3: Managed Identity
# use_default_credential = true # Option 4: DefaultAzureCredential
# For user-assigned managed identity (optional)
# managed_identity_client_id = "client-id-here"
# Optional: API version (defaults to 2024-02-15-preview)
api_version = "2024-10-01-preview"
# Optional: Map model names to Azure deployment names
[providers.config.deployment_mapping]
"gpt-5.1" = "my-gpt5-deployment"
"gpt-5.1" = "my-gpt5-deployment"
"gpt-5-mini" = "my-mini-deployment"
# Optional: Default deployment when no mapping matches
default_deployment = "my-default-deployment"
# Optional: Default model for requests
default_model = "gpt-5.1"
# Optional: Generation parameters
max_tokens = 4096
temperature = 0.7
# Optional: Debug configuration
debug = false # Enable standard debug events
raw_debug = false  # Enable ultra-verbose raw API I/O logging

Standard Debug (debug: true):
- Emits `llm:request:debug` and `llm:response:debug` events
- Contains request/response summaries with message counts, model info, usage stats
- Moderate log volume, suitable for development
Raw Debug (debug: true, raw_debug: true):
- Emits `llm:request:raw` and `llm:response:raw` events
- Contains complete, unmodified request params and response objects
- Extreme log volume, use only for deep provider integration debugging
- Captures the exact data sent to/from Azure OpenAI API before any processing
Example:
providers:
- module: provider-azure-openai
config:
debug: true # Enable debug events
raw_debug: true # Enable raw API I/O capture
azure_endpoint: https://myresource.openai.azure.com
api_key: ${AZURE_OPENAI_API_KEY}

Azure OpenAI uses deployment names instead of model names. This module resolves the deployment for each request in the following order:
- Explicit Mapping: Check `deployment_mapping` for the requested model
- Default Deployment: Use `default_deployment` if configured
- Pass-through: Use the model name as-is (assumes it's a deployment name)
[providers.config.deployment_mapping]
"gpt-5.1" = "production-gpt5"
"gpt-5-mini" = "fast-mini"
default_deployment = "fallback-deployment"

- Request for "gpt-5.1" → Uses "production-gpt5"
- Request for "gpt-5-mini" → Uses "fast-mini"
- Request for "claude-opus-4-1" → Uses "fallback-deployment" (not in mapping)
- Request for "my-custom-deploy" → Uses "my-custom-deploy" (if no default set)
The provider supports multiple authentication methods with the following priority:
- API Key (highest priority)
- Azure AD Token
- Managed Identity / Azure Credentials
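Conceptually, the selection amounts to a first-match check in that order (a sketch, not the provider's actual code):

```python
def pick_auth_method(config: dict) -> str:
    """Return which documented auth method would be used, by priority."""
    if config.get("api_key"):
        return "api_key"  # 1. highest priority
    if config.get("azure_ad_token"):
        return "azure_ad_token"  # 2.
    if config.get("use_managed_identity") or config.get("use_default_credential"):
        return "azure_credential"  # 3. managed identity / credential chain
    raise ValueError("no authentication method configured")
```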
The most common method. Set via configuration or environment variable:
export AZURE_OPENAI_API_KEY="your-api-key"
export AZURE_OPENAI_ENDPOINT="https://myresource.openai.azure.com"

Or in the configuration file:

[providers.config]
azure_endpoint = "https://myresource.openai.azure.com"
api_key = "your-api-key"For enterprise scenarios with Azure Active Directory:
[providers.config]
azure_endpoint = "https://myresource.openai.azure.com"
azure_ad_token = "your-azure-ad-token"

Or via environment:
export AZURE_OPENAI_AD_TOKEN="your-ad-token"

Note: This requires the azure-identity package to be installed.
For Azure resources with system-assigned managed identity:
[providers.config]
azure_endpoint = "https://myresource.openai.azure.com"
use_managed_identity = true

For Azure resources with user-assigned managed identity:
[providers.config]
azure_endpoint = "https://myresource.openai.azure.com"
use_managed_identity = true
managed_identity_client_id = "your-managed-identity-client-id"

DefaultAzureCredential is the recommended authentication method, as it works in both local development and Azure deployments.
Uses Azure's credential chain (includes Azure CLI, environment variables, managed identity):
[providers.config]
azure_endpoint = "https://myresource.openai.azure.com"
use_default_credential = true

Or via environment variable:
export AZURE_USE_DEFAULT_CREDENTIAL=true

The DefaultAzureCredential tries multiple authentication methods in order:
- Environment variables (AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, AZURE_TENANT_ID)
- Managed identity (if running in Azure)
- Azure CLI (if logged in locally with `az login`)
- Azure PowerShell (if logged in)
- Interactive browser authentication
Note for WSL2 Users: If you're developing in WSL2, use DefaultAzureCredential instead of ManagedIdentityCredential directly, as WSL2 doesn't have access to Azure IMDS. After running az login, DefaultAzureCredential will automatically use your CLI credentials.
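For reference, the same credential chain can be exercised directly with the azure-identity and openai SDKs; this standalone snippet mirrors what the provider does internally but is not part of the module:

```python
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

# DefaultAzureCredential walks the chain above: env vars, managed identity,
# Azure CLI, Azure PowerShell, then interactive browser.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(),
    "https://cognitiveservices.azure.com/.default",  # Azure OpenAI token scope
)

client = AzureOpenAI(
    azure_endpoint="https://myresource.openai.azure.com",
    azure_ad_token_provider=token_provider,
    api_version="2024-10-01-preview",
)
```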
The module supports these environment variables as fallbacks:
- `AZURE_OPENAI_ENDPOINT` or `AZURE_OPENAI_BASE_URL` - Azure OpenAI resource endpoint
- `AZURE_OPENAI_API_KEY` or `AZURE_OPENAI_KEY` - API key for authentication
- `AZURE_OPENAI_AD_TOKEN` - Azure AD token for authentication
- `AZURE_OPENAI_API_VERSION` - API version (defaults to `2024-02-15-preview`)
- `AZURE_USE_MANAGED_IDENTITY` - Enable managed identity authentication (`true`, `1`, or `yes`)
- `AZURE_USE_DEFAULT_CREDENTIAL` - Enable DefaultAzureCredential authentication (`true`, `1`, or `yes`)
- `AZURE_MANAGED_IDENTITY_CLIENT_ID` - Client ID for user-assigned managed identity
- `AZURE_OPENAI_DEFAULT_DEPLOYMENT` - Default deployment name to use when no mapping matches
- `AZURE_OPENAI_DEFAULT_MODEL` - Default model to use for requests (defaults to `gpt-5.1`)
- `AZURE_OPENAI_MAX_OUTPUT_TOKENS` - Maximum output tokens (defaults to `4096`)
- `AZURE_OPENAI_TEMPERATURE` - Temperature for generation (defaults to `0.7`)
Note: Configuration file values take precedence over environment variables.
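That precedence reduces to config-first lookups like the following sketch (the helper name is illustrative):

```python
import os


def resolve_endpoint(config: dict) -> str | None:
    """Config file value wins; environment variables are fallbacks."""
    return (
        config.get("azure_endpoint")
        or os.environ.get("AZURE_OPENAI_ENDPOINT")
        or os.environ.get("AZURE_OPENAI_BASE_URL")
    )
```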
Once configured, the Azure OpenAI provider works seamlessly with Amplifier:
# In your Amplifier session
response = await session.send_message(
"Hello, how are you?",
provider="azure-openai",
model="gpt-5.1" # Will be mapped to your Azure deployment
)

The module defaults to API version 2024-02-15-preview. You can override this:
[providers.config]
api_version = "2024-10-01-preview"  # Use a newer version

API versions 2024-08-01-preview and later introduce parameter changes:
- Use `max_output_tokens` (Azure) / `max_completion_tokens` (OpenAI) instead of `max_tokens`
- The provider automatically handles this translation for you
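A sketch of that translation, assuming date-formatted API versions (which compare correctly as plain strings):

```python
def translate_params(params: dict, api_version: str) -> dict:
    """Rename max_tokens for API versions that no longer accept it."""
    out = dict(params)
    # Date-formatted versions like "2024-10-01-preview" compare correctly
    # as strings, so a lexicographic check is sufficient here.
    if api_version >= "2024-08-01-preview" and "max_tokens" in out:
        out["max_output_tokens"] = out.pop("max_tokens")
    return out
```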
Model-Specific Restrictions: Some models (e.g., GPT-5 and later) have specific parameter requirements:
- May only support default temperature values
- Check Azure OpenAI documentation for your specific model's capabilities
Example config for newer models:
[providers.config]
api_version = "2025-03-01-preview"
default_model = "gpt-5.1"
temperature = 1.0  # Use model's default temperature

Check Azure OpenAI documentation for available API versions and model capabilities.
The provider fully supports Responses API tool/function calling:
tools = [
{
"name": "get_weather",
"description": "Get the weather in a location",
"input_schema": {
"type": "object",
"properties": {
"location": {"type": "string"}
}
}
}
]
response = await session.send_message(
"What's the weather in Seattle?",
provider="azure-openai",
tools=tools
)

Common issues:

- Authentication Errors
  - Verify your API key or Azure AD token is correct
  - Check that the endpoint URL includes `https://` and ends with `.openai.azure.com`
- Deployment Not Found
  - Ensure the deployment name exists in your Azure OpenAI resource
  - Check deployment mapping configuration
  - Verify the deployment is in a "Succeeded" state
- API Version Errors
  - Some features may require specific API versions
  - Try using the default version or check Azure documentation
- Rate Limiting
  - Azure OpenAI has deployment-specific rate limits
  - Consider implementing retry logic (a sketch follows this list) or using multiple deployments
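A minimal retry-with-backoff sketch for the rate limiting case, assuming an async send callable; production code should narrow the exception to the SDK's rate-limit error and honor any Retry-After header:

```python
import asyncio
import random


async def send_with_retry(send, *args, max_attempts: int = 5, **kwargs):
    """Retry an async call with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return await send(*args, **kwargs)
        except Exception:  # narrow this to your SDK's RateLimitError
            if attempt == max_attempts - 1:
                raise
            delay = 2**attempt + random.random()  # 1s, 2s, 4s, ... plus jitter
            await asyncio.sleep(delay)
```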
Enable debug logging to see deployment resolution:
import logging
logging.getLogger("amplifier_module_provider_azure_openai").setLevel(logging.DEBUG)

Key differences from the standard OpenAI provider:

- Endpoints: Uses Azure resource endpoints instead of api.openai.com
- Authentication: Supports Azure-specific auth methods
- Deployments: References deployment names instead of model names directly
- Rate Limits: Azure-specific quotas per deployment
- Regional Availability: Limited to Azure regions with OpenAI service
Note
This project is not currently accepting external contributions, but we're actively working toward opening this up. We value community input and look forward to collaborating in the future. For now, feel free to fork and experiment!
Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit Contributor License Agreements.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.