Merged
9 changes: 3 additions & 6 deletions .env.sample
@@ -1,14 +1,11 @@
# API_HOST can be either azure, openai, or github:
# API_HOST can be either azure or openai:
API_HOST=azure
# Configure for Azure:
AZURE_OPENAI_ENDPOINT=https://YOUR-AZURE-OPENAI-SERVICE-NAME.openai.azure.com
AZURE_OPENAI_CHAT_DEPLOYMENT=YOUR-AZURE-DEPLOYMENT-NAME
# Configure for OpenAI.com:
OPENAI_API_KEY=YOUR-OPENAI-KEY
OPENAI_MODEL=gpt-3.5-turbo
# Configure for GitHub models: (GITHUB_TOKEN already exists inside Codespaces)
GITHUB_MODEL=gpt-4.1-mini
GITHUB_TOKEN=YOUR-GITHUB-PERSONAL-ACCESS-TOKEN
OPENAI_MODEL=gpt-5.4
# Configure for Redis (used by agent_history_redis.py, defaults to dev container Redis):
REDIS_URL=redis://localhost:6379
# Configure OTLP exporter (not needed in devcontainer, which sets these via docker-compose):
@@ -21,5 +18,5 @@ APPLICATIONINSIGHTS_CONNECTION_STRING=InstrumentationKey=YOUR-KEY;IngestionEndpo
# Configure for Azure AI Search (used by agent_knowledge_aisearch.py):
AZURE_SEARCH_ENDPOINT=https://YOUR-SEARCH-SERVICE.search.windows.net
AZURE_SEARCH_KNOWLEDGE_BASE_NAME=YOUR-KB-NAME
# Optional: Set to log evaluation results to Azure AI Foundry for rich visualization
# Optional: Set to log evaluation results to Microsoft Foundry for rich visualization
AZURE_AI_PROJECT=https://YOUR-ACCOUNT.services.ai.azure.com/api/projects/YOUR-PROJECT
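
The variables above are read at startup by every example. As a minimal stdlib-only sketch of the host-selection logic (names mirror `.env.sample`; this only resolves configuration and makes no SDK calls):

```python
import os

# API_HOST selects the model host, mirroring the examples (defaults to "azure").
api_host = os.environ.get("API_HOST", "azure")

if api_host == "azure":
    # Azure OpenAI: endpoint + deployment name, joined into an OpenAI-compatible base URL.
    endpoint = os.environ.get("AZURE_OPENAI_ENDPOINT", "https://YOUR-SERVICE.openai.azure.com")
    deployment = os.environ.get("AZURE_OPENAI_CHAT_DEPLOYMENT", "YOUR-DEPLOYMENT")
    base_url = f"{endpoint}/openai/v1/"
else:
    # OpenAI.com: API key + model name.
    api_key = os.environ.get("OPENAI_API_KEY", "YOUR-OPENAI-KEY")
    model = os.environ.get("OPENAI_MODEL", "gpt-5.4")
```

The real examples load these values from `.env` via `python-dotenv` before branching.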
2 changes: 1 addition & 1 deletion .github/prompts/update_translations.prompt.md
@@ -4,4 +4,4 @@ description: Use this prompt to update the Spanish translations in the repo.
model: GPT-5.2 (copilot)
---

Update the Spanish translations in the repo according to the guidelines in AGENTS.md. Ensure there are Spanish equivalents of each English example. Make sure to keep the translations consistent with the original content and maintain the technical accuracy of the code.
Update the Spanish translations in the repo according to the guidelines in AGENTS.md. Ensure there are Spanish equivalents of each English example. Make sure to keep the translations consistent with the original content and maintain the technical accuracy of the code.
93 changes: 92 additions & 1 deletion AGENTS.md
@@ -6,7 +6,7 @@ The agent-framework GitHub repo is here:
https://github.com/microsoft/agent-framework
It contains both Python and .NET agent framework code, but we are only using the Python packages in this repo.

MAF is still changing rapidly, so we sometimes need to check the repo changelog and issues to see if there are any breaking changes that might affect our code.
MAF is still changing rapidly, so we sometimes need to check the repo changelog and issues to see if there are any breaking changes that might affect our code.
The Python changelog is here:
https://github.com/microsoft/agent-framework/blob/main/python/CHANGELOG.md

@@ -92,3 +92,94 @@ def _on_response_with_body(self, request, response):

HttpLoggingPolicy.on_response = _on_response_with_body
```

## Manual test plan

After upgrading dependencies or making changes across examples, use this plan to verify everything works. Run each example with `uv run python examples/<file>.py`.

### No extra setup (Azure OpenAI only)

These work with just `API_HOST=azure` and the standard `.env` from `azd up`:

| Examples | Notes |
|----------|-------|
| `agent_basic.py` | Interactive chat loop |
| `agent_tool.py`, `agent_tools.py` | Tool calling |
| `agent_session.py` | Session persistence |
| `agent_with_subagent.py`, `agent_without_subagent.py` | Sub-agent patterns |
| `agent_supervisor.py` | Supervisor pattern |
| `agent_middleware.py` | Middleware pipeline |
| `agent_summarization.py` | Summarization middleware |
| `agent_tool_approval.py` | Tool approval |
| `workflow_agents.py`, `workflow_agents_sequential.py`, `workflow_agents_concurrent.py`, `workflow_agents_streaming.py` | Basic workflows |
| `workflow_conditional.py`, `workflow_conditional_state.py`, `workflow_conditional_state_isolated.py`, `workflow_conditional_structured.py` | Conditional workflows |
| `workflow_switch_case.py` | Switch/case workflow |
| `workflow_converge.py`, `workflow_fan_out_fan_in_edges.py` | Converge / fan-out patterns |
| `workflow_aggregator_ranked.py`, `workflow_aggregator_structured.py`, `workflow_aggregator_summary.py`, `workflow_aggregator_voting.py` | Aggregator workflows |
| `workflow_multi_selection_edge_group.py` | Multi-selection edges |
| `workflow_handoffbuilder.py`, `workflow_handoffbuilder_rules.py` | Handoff builder |
| `workflow_hitl_handoff.py`, `workflow_hitl_requests.py`, `workflow_hitl_requests_structured.py`, `workflow_hitl_tool_approval.py` | HITL workflows |
| `workflow_hitl_checkpoint.py` | HITL with file-based checkpoints |
| `agent_knowledge_sqlite.py` | SQLite knowledge provider |
| `agent_history_sqlite.py` | SQLite history provider (no tools — see [agent-framework#3295](https://github.com/microsoft/agent-framework/issues/3295)) |
| `agent_memory_mem0.py` | Mem0 memory provider |

### Requires Redis (dev container)

Redis runs automatically in the dev container at `redis://redis:6379`.

| Examples | Notes |
|----------|-------|
| `agent_history_redis.py` | Redis history provider (no tools — see [agent-framework#3295](https://github.com/microsoft/agent-framework/issues/3295)) |
| `agent_memory_redis.py` | Redis memory provider |
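
The two Redis examples resolve `REDIS_URL` before connecting; a small sketch of that resolution against the dev-container default (pure stdlib, no Redis client required):

```python
from urllib.parse import urlparse

# The dev container exposes Redis at this URL; agent_history_redis.py and
# agent_memory_redis.py read the same value from REDIS_URL in .env.
redis_url = "redis://redis:6379"
parsed = urlparse(redis_url)
host, port = parsed.hostname, parsed.port or 6379
```

Outside the dev container, point `REDIS_URL` at your own instance (e.g. `redis://localhost:6379` as in `.env.sample`).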

### Requires PostgreSQL (dev container)

PostgreSQL runs automatically in the dev container at `postgresql://admin:LocalPasswordOnly@db:5432/postgres`.

| Examples | Notes |
|----------|-------|
| `agent_knowledge_pg.py` | PG + pgvector knowledge |
| `agent_knowledge_pg_rewrite.py` | PG knowledge with query rewrite |
| `agent_knowledge_postgres.py` | PG knowledge (alternative) |
| `workflow_hitl_checkpoint_pg.py` | HITL with PG-backed checkpoints |

### Requires Azure AI Search

Needs `AZURE_SEARCH_ENDPOINT` and `AZURE_SEARCH_KNOWLEDGE_BASE_NAME` in `.env`.

| Examples | Notes |
|----------|-------|
| `agent_knowledge_aisearch.py` | Azure AI Search knowledge base (agentic mode) |

### Requires MCP server

Start the MCP server first: `uv run python examples/mcp_server.py`

| Examples | Notes |
|----------|-------|
| `agent_mcp_local.py` | Local MCP server (stdio) |
| `agent_mcp_remote.py` | Remote MCP server (SSE) |

### Requires OTel / Aspire

| Examples | Notes |
|----------|-------|
| `agent_otel_aspire.py` | Aspire dashboard (runs in dev container at `http://aspire-dashboard:18888`) |
| `agent_otel_appinsights.py` | Needs `APPLICATIONINSIGHTS_CONNECTION_STRING` in `.env` |

### Slow-running examples (⏱ 2–10 minutes)

These take significantly longer than other examples:

| Examples | Notes |
|----------|-------|
| `agent_evaluation.py` | Runs agent + evaluators inline. ~2–3 min. |
| `agent_evaluation_generate.py` | Generates eval data JSONL. ~2 min. |
| `agent_evaluation_batch.py` | Batch evaluators on JSONL. ~3–5 min. Needs `eval_data.jsonl` from `agent_evaluation_generate.py`. |
| `agent_redteam.py` | Red team attack simulation. ~5–10 min. |
| `workflow_magenticone.py` | Multi-agent MagenticOne orchestration. ~2–5 min. |
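
Note the ordering dependency above: `agent_evaluation_batch.py` consumes the `eval_data.jsonl` produced by `agent_evaluation_generate.py`. A minimal sketch of reading such a JSONL file — the field names here are illustrative assumptions, not the actual schema (check `agent_evaluation_generate.py` for the real one):

```python
import json

# Each line of eval_data.jsonl is one standalone JSON record; the keys below
# are hypothetical placeholders, not the generator's real output schema.
sample_lines = [
    '{"query": "Plan a 3-day trip to Lisbon", "response": "..."}',
]
records = [json.loads(line) for line in sample_lines]
```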

### Spanish examples

Spanish files under `examples/spanish/` mirror the English examples exactly (same code, translated strings). After changes, spot-check 3–5 Spanish files to confirm they run correctly.
41 changes: 8 additions & 33 deletions README.md
@@ -1,7 +1,7 @@
<!--
---
name: Python Agent Framework Demos
description: Collection of Python examples for Microsoft Agent Framework using GitHub Models or Azure AI Foundry.
description: Collection of Python examples for Microsoft Agent Framework using Microsoft Foundry.
languages:
- python
products:
@@ -17,15 +17,14 @@ urlFragment: python-agentframework-demos
[![Open in GitHub Codespaces](https://img.shields.io/static/v1?style=for-the-badge&label=GitHub+Codespaces&message=Open&color=brightgreen&logo=github)](https://codespaces.new/Azure-Samples/python-agentframework-demos)
[![Open in Dev Containers](https://img.shields.io/static/v1?style=for-the-badge&label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/Azure-Samples/python-agentframework-demos)

This repository provides examples of [Microsoft Agent Framework](https://learn.microsoft.com/agent-framework/) using LLMs from [GitHub Models](https://github.com/marketplace/models), [Azure AI Foundry](https://learn.microsoft.com/azure/ai-foundry/), or other model providers. GitHub Models are free to use for anyone with a GitHub account, up to a [daily rate limit](https://docs.github.com/github-models/prototyping-with-ai-models#rate-limits).
This repository provides examples of [Microsoft Agent Framework](https://learn.microsoft.com/agent-framework/) using LLMs from [Microsoft Foundry](https://learn.microsoft.com/azure/ai-foundry/) or other model providers.

* [Getting started](#getting-started)
* [GitHub Codespaces](#github-codespaces)
* [VS Code Dev Containers](#vs-code-dev-containers)
* [Local environment](#local-environment)
* [Configuring model providers](#configuring-model-providers)
* [Using GitHub Models](#using-github-models)
* [Using Azure AI Foundry models](#using-azure-ai-foundry-models)
* [Using Microsoft Foundry models](#using-microsoft-foundry-models)
* [Using OpenAI.com models](#using-openaicom-models)
* [Running the Python examples](#running-the-python-examples)
* [Resources](#resources)
@@ -95,35 +94,11 @@ The dev container includes a Redis server, which is used by the `agent_history_r

## Configuring model providers

These examples can be run with Azure AI Foundry, OpenAI.com, or GitHub Models, depending on the environment variables you set. All the scripts reference the environment variables from a `.env` file, and an example `.env.sample` file is provided. Host-specific instructions are below.
These examples can be run with Microsoft Foundry or OpenAI.com, depending on the environment variables you set. All the scripts reference the environment variables from a `.env` file, and an example `.env.sample` file is provided. Host-specific instructions are below.

## Using GitHub Models
## Using Microsoft Foundry models

If you open this repository in GitHub Codespaces, you can run the scripts for free using GitHub Models without any additional steps, as your `GITHUB_TOKEN` is already configured in the Codespaces environment.

If you want to run the scripts locally, you need to set up the `GITHUB_TOKEN` environment variable with a GitHub personal access token (PAT). You can create a PAT by following these steps:

1. Go to your GitHub account settings.
2. Click on "Developer settings" in the left sidebar.
3. Click on "Personal access tokens" in the left sidebar.
4. Click on "Tokens (classic)" or "Fine-grained tokens" depending on your preference.
5. Click on "Generate new token".
6. Give your token a name and select the scopes you want to grant. For this project, you don't need any specific scopes.
7. Click on "Generate token".
8. Copy the generated token.
9. Set the `GITHUB_TOKEN` environment variable in your terminal or IDE:

```shell
export GITHUB_TOKEN=your_personal_access_token
```

10. Optionally, you can use a model other than "gpt-4.1-mini" by setting the `GITHUB_MODEL` environment variable. Use a model that supports function calling, such as: `gpt-5`, `gpt-4.1-mini`, `gpt-4o`, `gpt-4o-mini`, `o3-mini`, `AI21-Jamba-1.5-Large`, `AI21-Jamba-1.5-Mini`, `Codestral-2501`, `Cohere-command-r`, `Ministral-3B`, `Mistral-Large-2411`, `Mistral-Nemo`, `Mistral-small`

## Using Azure AI Foundry models

You can run all examples in this repository using GitHub Models. If you want to run the examples using models from Azure AI Foundry instead, you need to provision the Azure AI resources, which will incur costs.

This project includes infrastructure as code (IaC) to provision Azure OpenAI deployments of "gpt-4.1-mini" and "text-embedding-3-large" via Azure AI Foundry. The IaC is defined in the `infra` directory and uses the Azure Developer CLI to provision the resources.
This project includes infrastructure as code (IaC) to provision Azure OpenAI deployments of "gpt-5.4" and "text-embedding-3-large" via Microsoft Foundry. The IaC is defined in the `infra` directory and uses the Azure Developer CLI to provision the resources.

1. Make sure the [Azure Developer CLI (azd)](https://aka.ms/install-azd) is installed.

@@ -233,7 +208,7 @@ You can run the examples in this repository by executing the scripts in the `exa
| [agent_otel_aspire.py](examples/agent_otel_aspire.py) | An agent with OpenTelemetry tracing, metrics, and structured logs exported to the [Aspire Dashboard](https://aspire.dev/dashboard/standalone/). |
| [agent_otel_appinsights.py](examples/agent_otel_appinsights.py) | An agent with OpenTelemetry tracing, metrics, and structured logs exported to [Azure Application Insights](https://learn.microsoft.com/azure/azure-monitor/app/app-insights-overview). Requires Azure provisioning via `azd provision`. |
| [agent_evaluation_generate.py](examples/agent_evaluation_generate.py) | Generate synthetic evaluation data for the travel planner agent. |
| [agent_evaluation.py](examples/agent_evaluation.py) | Evaluate a travel planner agent using [Azure AI Evaluation](https://learn.microsoft.com/azure/ai-foundry/concepts/evaluation-evaluators/agent-evaluators) agent evaluators (IntentResolution, ToolCallAccuracy, TaskAdherence, ResponseCompleteness). Optionally set `AZURE_AI_PROJECT` in `.env` to log results to [Azure AI Foundry](https://learn.microsoft.com/azure/ai-foundry/how-to/develop/agent-evaluate-sdk). |
| [agent_evaluation.py](examples/agent_evaluation.py) | Evaluate a travel planner agent using [Azure AI Evaluation](https://learn.microsoft.com/azure/ai-foundry/concepts/evaluation-evaluators/agent-evaluators) agent evaluators (IntentResolution, ToolCallAccuracy, TaskAdherence, ResponseCompleteness). Optionally set `AZURE_AI_PROJECT` in `.env` to log results to [Microsoft Foundry](https://learn.microsoft.com/azure/ai-foundry/how-to/develop/agent-evaluate-sdk). |
| [agent_evaluation_batch.py](examples/agent_evaluation_batch.py) | Batch evaluation of agent responses using Azure AI Evaluation's `evaluate()` function. |
| [agent_redteam.py](examples/agent_redteam.py) | Red-team a financial advisor agent using [Azure AI Evaluation](https://learn.microsoft.com/azure/ai-foundry/how-to/develop/red-teaming-agent) to test resilience against adversarial attacks across risk categories (Violence, HateUnfairness, Sexual, SelfHarm). Requires `AZURE_AI_PROJECT` in `.env`. |

@@ -304,7 +279,7 @@ This example requires an `APPLICATIONINSIGHTS_CONNECTION_STRING` environment var

**Option A: Automatic via `azd provision`**

If you run `azd provision` (see [Using Azure AI Foundry models](#using-azure-ai-foundry-models)), the Application Insights resource is provisioned automatically and the connection string is written to your `.env` file.
If you run `azd provision` (see [Using Microsoft Foundry models](#using-microsoft-foundry-models)), the Application Insights resource is provisioned automatically and the connection string is written to your `.env` file.

**Option B: Manual from the Azure Portal**

12 changes: 3 additions & 9 deletions examples/agent_basic.py
@@ -9,7 +9,7 @@

# Configure OpenAI client based on environment
load_dotenv(override=True)
API_HOST = os.getenv("API_HOST", "github")
API_HOST = os.getenv("API_HOST", "azure")

async_credential = None
if API_HOST == "azure":
@@ -18,17 +18,11 @@
client = OpenAIChatClient(
base_url=f"{os.environ['AZURE_OPENAI_ENDPOINT']}/openai/v1/",
api_key=token_provider,
model_id=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"],
)
elif API_HOST == "github":
client = OpenAIChatClient(
base_url="https://models.github.ai/inference",
api_key=os.environ["GITHUB_TOKEN"],
model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4.1-mini"),
model=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"],
)
else:
client = OpenAIChatClient(
api_key=os.environ["OPENAI_API_KEY"], model_id=os.environ.get("OPENAI_MODEL", "gpt-4.1-mini")
api_key=os.environ["OPENAI_API_KEY"], model=os.environ.get("OPENAI_MODEL", "gpt-5.4")
)

agent = Agent(client=client, instructions="You're an informational agent. Answer questions cheerfully.")
24 changes: 5 additions & 19 deletions examples/agent_evaluation.py
@@ -28,7 +28,7 @@
logger.setLevel(logging.INFO)

load_dotenv(override=True)
API_HOST = os.getenv("API_HOST", "github")
API_HOST = os.getenv("API_HOST", "azure")

async_credential = None
if API_HOST == "azure":
@@ -37,33 +37,21 @@
client = OpenAIChatClient(
base_url=f"{os.environ['AZURE_OPENAI_ENDPOINT']}/openai/v1/",
api_key=token_provider,
model_id=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"],
model=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"],
)
eval_model_config = AzureOpenAIModelConfiguration(
type="azure_openai",
azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
azure_deployment=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"],
)
elif API_HOST == "github":
client = OpenAIChatClient(
base_url="https://models.github.ai/inference",
api_key=os.environ["GITHUB_TOKEN"],
model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4.1-mini"),
)
eval_model_config = OpenAIModelConfiguration(
type="openai",
base_url="https://models.github.ai/inference",
api_key=os.environ["GITHUB_TOKEN"],
model="openai/gpt-4.1-mini",
)
else:
client = OpenAIChatClient(
api_key=os.environ["OPENAI_API_KEY"], model_id=os.environ.get("OPENAI_MODEL", "gpt-4.1-mini")
api_key=os.environ["OPENAI_API_KEY"], model=os.environ.get("OPENAI_MODEL", "gpt-5.4")
)
eval_model_config = OpenAIModelConfiguration(
type="openai",
api_key=os.environ["OPENAI_API_KEY"],
model=os.environ.get("OPENAI_MODEL", "gpt-4.1-mini"),
model=os.environ.get("OPENAI_MODEL", "gpt-5.4"),
)


@@ -298,9 +286,7 @@ async def main():

intent_result = intent_evaluator(query=eval_query, response=eval_response, tool_definitions=tool_definitions)
completeness_result = completeness_evaluator(response=response.text, ground_truth=ground_truth)
adherence_result = adherence_evaluator(
query=eval_query, response=eval_response, tool_definitions=tool_definitions
)
adherence_result = adherence_evaluator(query=eval_query, response=eval_response, tool_definitions=tool_definitions)
tool_accuracy_result = tool_accuracy_evaluator(
query=eval_query, response=eval_response, tool_definitions=tool_definitions
)