diff --git a/samples/python/README.md b/samples/python/README.md
index 4e0ff030..f4df94e1 100644
--- a/samples/python/README.md
+++ b/samples/python/README.md
@@ -10,6 +10,7 @@
 |Copilot Studio Client|Console app to consume a Copilot Studio Agent|[copilotstudio-client](copilotstudio-client/README.md)|
 |Cards Agent|Agent that uses rich cards to enhance conversation design |[cards](cards/README.md)|
 |Copilot Studio Skill|Call the echo bot from a Copilot Studio skill |[copilotstudio-skill](copilotstudio-skill/README.md)|
+|M365 Agents SDK A2A Patterns|A2A protocol patterns (ping/stream/push) with Semantic Kernel orchestration and Google ADK integration|[m365-agents-sdk-a2a-patterns](m365-agents-sdk-a2a-patterns/README.md)|
 
 ## Important Notice - Import Changes
diff --git a/samples/python/m365-agents-sdk-a2a-patterns/.gitignore b/samples/python/m365-agents-sdk-a2a-patterns/.gitignore
new file mode 100644
index 00000000..a40ef139
--- /dev/null
+++ b/samples/python/m365-agents-sdk-a2a-patterns/.gitignore
@@ -0,0 +1,83 @@
+# Python
+__pycache__/
+*.py[cod]
+*$py.class
+*.so
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# Virtual Environments
+.env
+.venv
+env/
+venv/
+ENV/
+
+# IDEs
+.vscode/
+.idea/
+*.swp
+*.swo
+*~
+.DS_Store
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# Poetry
+poetry.lock
+
+# Debug files
+*.html
+*.log
+screenshot_*.png
+competitor_search_debug.*
+
+# ADK
+.adk/
+agents_log/
+
+# Selenium
+selenium/
+geckodriver.log
+
+# Test coverage
+htmlcov/
+.tox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+.hypothesis/
+.pytest_cache/
+
+# Secrets
+*.pem
+*.key
+credentials.json
+service-account.json
+
+# Cloud deployment artifacts
+.gcloudignore
+
+# Temporary files
+*.tmp
+*.bak
+*.swp
diff --git a/samples/python/m365-agents-sdk-a2a-patterns/LICENSE b/samples/python/m365-agents-sdk-a2a-patterns/LICENSE
new file mode 100644
index 00000000..9e841e7a --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/LICENSE @@ -0,0 +1,21 @@ + MIT License + + Copyright (c) Microsoft Corporation. + + Permission is hereby granted, free of charge, to any person obtaining a copy + of this software and associated documentation files (the "Software"), to deal + in the Software without restriction, including without limitation the rights + to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + copies of the Software, and to permit persons to whom the Software is + furnished to do so, subject to the following conditions: + + The above copyright notice and this permission notice shall be included in all + copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE + AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE + SOFTWARE diff --git a/samples/python/m365-agents-sdk-a2a-patterns/README.md b/samples/python/m365-agents-sdk-a2a-patterns/README.md new file mode 100644 index 00000000..cf43681b --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/README.md @@ -0,0 +1,377 @@ +# M365 Agents SDK — A2A Pattern Reference Implementation + +A **sample reference implementation** demonstrating three complementary capabilities within the M365 Agents SDK ecosystem. Each capability is presented as a distinct learning track — with its own files, concepts, and business rationale — so developers can study them independently or follow them end-to-end. + +> **Scope**: The domain (brand search optimization) is intentionally non-trivial so pattern selection has real consequences. 
The business logic is secondary; the architectural patterns are the subject matter. + +--- + +## Three Learning Tracks + +This repository covers three areas that are often needed together but serve different engineering concerns: + +| # | Track | Core Question | Key Outcome | +|---|-------|---------------|-------------| +| 1 | [Intelligent Agent Orchestration](#track-1-intelligent-agent-orchestration-with-semantic-kernel) | *How do I add LLM reasoning and tool calling to an M365 agent?* | A Semantic Kernel `ChatCompletionAgent` that plans, invokes tools, and synthesizes responses | +| 2 | [Agent-to-Agent Communication](#track-2-agent-to-agent-communication-via-a2a-protocol) | *How does an M365 agent consume a remote agent over A2A?* | A fully wired A2A client (discovery → send → receive) inside an M365 Agents SDK host | +| 3 | [A2A Transmission Patterns](#track-3-a2a-transmission-patterns--choosing-the-right-delivery-model) | *Which A2A delivery model — sync, stream, or push — fits my use case?* | Three working patterns with test CLIs that exercise each one independently | + +--- + +### Track 1: Intelligent Agent Orchestration with Semantic Kernel + +**Business context** — Enterprise agents need more than hard-coded routing. A brand strategist might type *"Compare Nike and Adidas in the running shoe category and recommend an SEO action plan"* — a request that requires intent parsing, orchestrating multiple tools in sequence, and composing a coherent, strategic response. Semantic Kernel provides the planning layer that turns a natural-language query into a structured tool-calling workflow backed by Azure OpenAI. 
+ +| Concept | Implementation | File | +|---------|---------------|------| +| `ChatCompletionAgent` setup with Azure OpenAI | Agent construction, service wiring, instruction prompt | [orchestrator.py](a2a-client-agent/brand_intelligence_advisor/orchestrator.py) | +| `@kernel_function` tool declarations | 4 tools: `analyze_brand`, `check_push_notifications`, `get_analysis_history`, `get_seo_glossary` | [orchestrator.py](a2a-client-agent/brand_intelligence_advisor/orchestrator.py) | +| System prompt engineering | Role definition, tool-usage instructions, response formatting | [prompt.py](a2a-client-agent/brand_intelligence_advisor/prompt.py) | +| M365 SDK message handler integration | Routing incoming messages to the SK orchestrator | [agent.py](a2a-client-agent/brand_intelligence_advisor/agent.py) | +| Graceful degradation (no LLM) | Falls back to regex-based command routing when Azure OpenAI is unavailable | [agent.py](a2a-client-agent/brand_intelligence_advisor/agent.py) | + +**What you will learn**: How to wire `AzureChatCompletion` into an SK agent, expose domain functions as `@kernel_function` tools, and let the LLM autonomously decide which tools to call and in what order — all within the M365 Agents SDK hosting model. + +--- + +### Track 2: Agent-to-Agent Communication via A2A Protocol + +**Business context** — No single agent has all the data. A brand intelligence advisor (M365 SDK) needs SEO analysis from a specialist agent (Google ADK) that has access to BigQuery datasets and SerpAPI. The [A2A protocol](https://google.github.io/a2a/#/) provides a vendor-neutral, JSON-RPC-based contract for this cross-platform communication — agent discovery, message exchange, task lifecycle, and push notifications — without coupling the two systems. 
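
On the wire, this contract is plain JSON-RPC 2.0 over HTTP: discovery is a GET of a well-known URL, and each request is a small JSON envelope. The sketch below builds a `message/send` request. Field names follow the public A2A specification; the constant names and the `build_send_request` helper are illustrative, not this sample's client code.

```python
import json
import uuid

A2A_BASE = "http://localhost:8080"  # the ADK producer in this sample
# Lifecycle step 1: GET this URL to fetch the remote agent's card.
DISCOVERY_URL = f"{A2A_BASE}/.well-known/agent-card.json"


def build_send_request(text: str) -> dict:
    """JSON-RPC 2.0 envelope for the A2A `message/send` method."""
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "messageId": str(uuid.uuid4()),
                "parts": [{"kind": "text", "text": text}],
            }
        },
    }


# Lifecycle step 2: POST this body to the producer's A2A endpoint.
print(json.dumps(build_send_request("Analyze Nike in Active"), indent=2))
```

The streaming variant uses the same envelope shape with `message/stream` as the method; the response then arrives as SSE events instead of a single JSON body.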
+ +| Concept | Implementation | File | +|---------|---------------|------| +| Agent discovery (`/.well-known/agent-card.json`) | `discover()` fetches the remote agent's capabilities | [a2a_client.py](a2a-client-agent/brand_intelligence_advisor/tools/a2a_client.py) | +| Message send (`message/send`) | `send_message()` posts a JSON-RPC request, parses the response | [a2a_client.py](a2a-client-agent/brand_intelligence_advisor/tools/a2a_client.py) | +| Message stream (`message/stream`) | `stream_message()` consumes an SSE event stream | [a2a_client.py](a2a-client-agent/brand_intelligence_advisor/tools/a2a_client.py) | +| Push notification registration | `register_push()` sets `pushNotificationConfig` with a webhook URL | [a2a_client.py](a2a-client-agent/brand_intelligence_advisor/tools/a2a_client.py) | +| Webhook receiver endpoint | `/a2a/webhook` POST handler stores incoming push notifications | [server.py](a2a-client-agent/brand_intelligence_advisor/server.py) | +| A2A producer (other side of the protocol) | Google ADK agent exposing A2A endpoints | [adk-agent/](adk-agent/) | + +**What you will learn**: How to implement a complete A2A client lifecycle — discover a remote agent, send it work, consume results via three different transports, and receive asynchronous push notifications — all hosted inside an M365 Agents SDK application. + +--- + +### Track 3: A2A Transmission Patterns — Choosing the Right Delivery Model + +**Business context** — The *same* query (*"Analyze Nike"*) can be delivered three different ways depending on the operational context. A brand manager checking rankings mid-meeting needs a fast, blocking response (ping). An analyst building a quarterly report wants to see data arrive progressively (stream). A marketing director scheduling overnight batch audits across 15 categories needs fire-and-forget with a webhook callback (push). Selecting the wrong pattern degrades user experience or wastes infrastructure. 
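
In code, the selection collapses to two questions: is the user actively waiting, and do they need to see progress? The helper below is a hypothetical distillation of this guidance; in this sample the Semantic Kernel orchestrator infers the answers from the user's wording rather than taking booleans.

```python
def choose_pattern(user_is_waiting: bool, wants_progress: bool) -> str:
    """Map the operational context to an A2A transmission pattern."""
    if not user_is_waiting:
        return "push"    # fire & forget; result arrives via webhook
    if wants_progress:
        return "stream"  # SSE chunks rendered as they are produced
    return "ping"        # simple blocking request/response
```

For example, the overnight batch audit maps to `choose_pattern(user_is_waiting=False, wants_progress=False)`, which returns `"push"`, while the quarterly-report scenario maps to `"stream"`.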
+ +#### Pattern Reference + +| Pattern | A2A Method | Transport | Behavior | +|---------|-----------|-----------|----------| +| **Ping** | `message/send` | HTTP POST → JSON response | Blocks until complete | +| **Stream** | `message/stream` | HTTP POST → SSE event stream | Delivers chunks in real time | +| **Push** | `message/send` + `pushNotificationConfig/set` | HTTP POST + webhook callback | Returns immediately; result arrives asynchronously | + +#### When to Use Each Pattern + +| Pattern | Best For | Typical Scenario | Anti-Pattern | +|---------|----------|-----------------|--------------| +| **Ping** | Quick lookups, chatbot replies, CI pipeline checks | *"How is Nike ranking?"* before a call — answer needed in 30–60 s | Analysis takes >2 min (user stares at a spinner) | +| **Stream** | Detailed reports, live dashboards, executive demos | Quarterly review — analyst reads early findings while deeper analysis runs | Client cannot consume SSE events | +| **Push** | Batch jobs, overnight audits, mobile requests | *"Audit 10 brands"* then walk away — webhook notifies when done | Quick questions (webhook overhead not justified) | + +#### Decision Flowchart + +``` +User sends a query + │ + ▼ +Is the user actively waiting? + │ + ┌───┴───┐ + YES NO ──── Use PUSH (fire & forget + webhook) + │ + ▼ +Need real-time progress? + │ +┌───┴───┐ +YES NO ──────── Use PING (simple request/response) +│ +▼ +Use STREAM (SSE chunks as they arrive) +``` + +#### Test CLIs for Each Pattern + +| CLI | Command | What It Tests | Needs Azure OpenAI? 
| +|-----|---------|---------------|---------------------| +| `test_demo.py` | `python test_demo.py` | All 3 patterns via SK orchestrator — LLM selects pattern | Yes | +| `cli_test.py` | `python cli_test.py ping "Nike socks"` | Individual pattern, direct A2A call, no LLM layer | No | +| `cli_test.py` | `python cli_test.py all "Nike shoes"` | All 3 patterns sequentially | No | + +> **LLM-Orchestrated selection**: When Azure OpenAI is configured, the Semantic Kernel orchestrator reads user intent and selects the pattern automatically. *"Quick check on Nike"* → ping. *"Detailed report on Adidas"* → stream. *"Run this overnight"* → push. + +
+Expanded business scenarios for each pattern + +**Ping — Synchronous** +- **Pre-meeting pulse check**: Brand manager asks *"How is Nike ranking in Active?"* 5 min before a stakeholder call. Blocking is acceptable — the wait is short and the user is actively watching. +- **Teams bot integration**: User sends *"What's our share of voice for running shoes?"* in chat. Chatbot UX expects a single reply bubble — ping fits naturally. +- **CI pipeline validation**: Automated script verifies brand ranking hasn't dropped after a product title change. + +**Stream — Real-Time** +- **Quarterly competitive review**: *"Full analysis of Adidas in sportswear"* takes 60–90 s. Streaming lets the analyst read keyword data while deeper analysis continues. +- **Live dashboard**: SSE events drive real-time UI updates — each chunk paints a new row in the metrics table. +- **Executive demo**: Text appearing progressively creates a compelling *"AI thinking"* experience. + +**Push — Asynchronous** +- **Batch brand audit**: *"Audit Puma across all 15 categories"* runs for 5+ min. Push returns immediately; webhook fires when done. +- **Nightly cron job**: Orchestrator fires 10 push requests in parallel. Webhook callbacks post results to a Slack channel — no polling. +- **Mobile analysis**: User requests deep analysis while commuting. Push avoids cellular timeout risk; notification arrives when results are ready. + +
+ +**What you will learn**: How each A2A transmission pattern works at the protocol level, when to choose one over another, and how to test each independently using the provided CLIs. + +--- + +## Architecture + +``` +┌──────────────────────────────────────┐ ┌──────────────────────────────────────┐ +│ Brand Intelligence Advisor │ │ Brand Search Optimization │ +│ (M365 Agents SDK + Semantic Kernel) │ A2A Protocol │ (Google ADK + Gemini 2.0 Flash) │ +│ Port 3978 │ (JSON-RPC) │ Port 8080 │ +│ │ │ │ +│ Track 1: SK orchestrator + tools │ │ Multi-agent SEO analysis │ +│ Track 2: A2A client integration │ │ (keyword / search / comparison) │ +│ Track 3: Ping / Stream / Push │ │ │ +│ │ │ │ +│ Pattern 1: message/send (ping) │────────────────────▶ Synchronous blocking response │ +│ Pattern 2: message/stream (stream) │────────────────────▶ Server-Sent Events (SSE) │ +│ Pattern 3: send + webhook (push) │────────────────────▶ Background + push notification │ +│ │◀───────────────────│ Webhook callback with results │ +└──────────────────────────────────────┘ └──────────────────────────────────────┘ +``` + +--- + +## Repository Structure + +``` +m365-sdk-a2a-patterns/ +├── a2a-client-agent/ # A2A Consumer (M365 Agents SDK) +│ ├── brand_intelligence_advisor/ # Main agent package +│ │ ├── __init__.py # Package metadata +│ │ ├── agent.py # M365 SDK routes & message handlers +│ │ ├── orchestrator.py # Semantic Kernel LLM orchestrator +│ │ ├── prompt.py # System prompt for the LLM +│ │ ├── server.py # aiohttp server + webhook endpoint +│ │ └── tools/ # Tool implementations +│ │ ├── __init__.py +│ │ ├── a2a_client.py # A2A protocol client (all 3 patterns) +│ │ └── brand_advisor.py # Domain knowledge & query parsing +│ ├── run_server.py # Entry point (python run_server.py) +│ ├── test_demo.py # Interactive test CLI (SK orchestrator) +│ ├── cli_test.py # Direct A2A test CLI (no orchestrator) +│ ├── requirements.txt # Python dependencies +│ └── env.TEMPLATE # Environment variable template +│ 
+├── adk-agent/ # A2A Producer (Google ADK) +│ ├── brand_search_optimization/ # Multi-agent system +│ │ ├── agent.py # Root agent orchestration +│ │ ├── prompt.py # System prompts +│ │ ├── sub_agents/ # Sub-agent implementations +│ │ │ ├── keyword_finding/ # Keyword extraction from BigQuery +│ │ │ ├── search_results/ # Competitor intel via SerpAPI +│ │ │ └── comparison/ # SEO analysis with Gemini +│ │ ├── tools/ # BigQuery + SerpAPI connectors +│ │ └── shared_libraries/ # Constants & config +│ ├── run_a2a.py # A2A server entry point +│ ├── pyproject.toml # Dependencies (Poetry) +│ └── Dockerfile # Container image +│ +├── docs/ # Documentation +│ ├── A2A_PATTERNS.md # A2A pattern deep dive with sequence diagrams +│ └── ARCHITECTURE.md # System design and data flow +│ +├── env.example # ADK agent environment template +└── README.md # This file +``` + +--- + +## Quick Start + +### Prerequisites + +| Requirement | Purpose | How to Get | +|-------------|---------|------------| +| Python 3.10+ | Runtime | [python.org](https://www.python.org/downloads/) | +| Poetry 2.0+ | ADK agent dependencies | [python-poetry.org](https://python-poetry.org/) | +| Google Cloud project | BigQuery public dataset access | [console.cloud.google.com](https://console.cloud.google.com/) | +| Gemini API key | LLM for ADK agent (free) | [aistudio.google.com/apikey](https://aistudio.google.com/apikey) — **Forever Free tier** | +| Azure OpenAI endpoint | LLM orchestration (optional) | [Azure AI Foundry](https://ai.azure.com/) | + +> **Free API Keys**: The Gemini API key is available on Google's Forever Free tier (1,500 requests/day). SerpAPI offers 100 free searches/month at [serpapi.com](https://serpapi.com/). No credit card required for either. 
+ +### Step 1: Start the ADK Agent (A2A Producer) + +```bash +cd adk-agent +cp ../env.example .env +# Edit .env: +# GOOGLE_API_KEY=your-gemini-api-key (from aistudio.google.com/apikey) +# GOOGLE_CLOUD_PROJECT=your-project-id (for BigQuery) + +poetry install +gcloud auth application-default login +python run_a2a.py +# Server running on http://localhost:8080 +``` + +Verify: `curl http://localhost:8080/.well-known/agent-card.json` + +### Step 2: Start the Client Agent (A2A Consumer) + +```bash +cd a2a-client-agent +cp env.TEMPLATE .env +# Edit .env: +# A2A_AGENT_URL=http://localhost:8080 +# AZURE_AI_FOUNDRY_ENDPOINT=https://your-resource.services.ai.azure.com (optional) +# AZURE_AI_FOUNDRY_API_KEY=your-key (optional) + +pip install -r requirements.txt +python run_server.py +# Server running on http://localhost:3978 +``` + +> Without Azure OpenAI configured, the agent falls back to regex-based command routing (ping/stream/push commands still work). + +### Step 3: Test with the Interactive CLI + +```bash +cd a2a-client-agent +python test_demo.py +``` + +The interactive CLI lets you test all 3 patterns with the SK orchestrator: + +``` +============================================================ + A2A Interactive Test Runner +============================================================ + Orchestrator : Semantic Kernel + Azure OpenAI + Remote Agent : Google ADK (A2A Protocol v0.3) + Framework : Microsoft 365 Agents SDK + +You> How is Nike doing in Active category? + Choose A2A transmission pattern: + [1] Ping - synchronous request/response + [2] Stream - SSE live typing + [3] Push - fire & forget + [4] Auto - let the LLM decide (default) + Mode [1/2/3/4, default=4]: 1 + + [PING] Sending to SK orchestrator... + [PING] Waiting for complete response... + +Advisor (34.5s) [pattern: ping]: + Based on the A2A analysis, here are the key findings for Nike + in the Active category... 
+``` + +### Step 4: Test Directly with the CLI (No Orchestrator) + +```bash +cd a2a-client-agent + +# Agent discovery +python cli_test.py discover + +# Test individual patterns +python cli_test.py ping "Nike socks" +python cli_test.py stream "Adidas shoes" +python cli_test.py push "Puma sneakers" +python cli_test.py status + +# Run all patterns in sequence +python cli_test.py all "Nike running shoes" + +# Interactive mode +python cli_test.py +``` + +--- + +## Security + +- **No hardcoded secrets**: All API keys and credentials are loaded from environment variables via `.env` files. +- **`.env` gitignored**: The `.gitignore` covers `.env`, `*.bak`, `*.key`, and `credentials.json`. +- **env.TEMPLATE**: Use the provided templates (`env.TEMPLATE` for client, `env.example` for ADK) as a starting point. Fill in your own keys. +- **Anonymous mode**: The client agent runs without MSAL authentication by default (for local development). Set the `CONNECTIONS__SERVICE_CONNECTION__SETTINGS__*` variables for authenticated mode. 
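
A minimal sketch of the no-hardcoded-secrets rule in practice: every setting comes from the environment, with defaults only for local development. Variable names follow `env.TEMPLATE`; the `load_config` helper itself is hypothetical, not this sample's code.

```python
import os


def load_config() -> dict:
    """Read all settings from the environment, never from source code."""
    return {
        # Required: where the ADK producer listens.
        "a2a_agent_url": os.environ.get("A2A_AGENT_URL", "http://localhost:8080"),
        # Optional: presence of both enables the SK orchestrator;
        # absence triggers the regex fallback described above.
        "azure_endpoint": os.environ.get("AZURE_AI_FOUNDRY_ENDPOINT"),
        "azure_api_key": os.environ.get("AZURE_AI_FOUNDRY_API_KEY"),
    }


cfg = load_config()
llm_enabled = bool(cfg["azure_endpoint"] and cfg["azure_api_key"])
```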
+ +--- + +## Endpoints + +| Port | Endpoint | Method | Purpose | +|------|----------|--------|---------| +| 3978 | `/api/messages` | POST | M365 SDK message processing | +| 3978 | `/a2a/webhook` | POST | Receives push notifications from ADK agent | +| 3978 | `/a2a/webhook` | GET | Debug view of received notifications | +| 8080 | `/.well-known/agent-card.json` | GET | ADK agent card discovery | +| 8080 | `/` | POST | A2A JSON-RPC endpoint (all patterns) | + +--- + +## Documentation + +| Guide | Description | +|-------|-------------| +| [A2A Patterns Deep Dive](docs/A2A_PATTERNS.md) | Sequence diagrams, JSON-RPC payloads, and state machines for all 3 patterns | +| [Architecture](docs/ARCHITECTURE.md) | System design, data flow, and technology stack | +| [ADK Agent README](adk-agent/README.md) | ADK agent setup, deployment, and customization | +| [Client Agent README](a2a-client-agent/README.md) | Client agent architecture, SK orchestrator, and configuration | + +--- + +## Technology Stack + +| Layer | Technology | Purpose | +|-------|-----------|---------| +| **Client Agent** | M365 Agents SDK (Python) | Message handling, Teams/WebChat integration | +| **LLM Orchestration** | Semantic Kernel 1.40+ | Tool calling, prompt management, agent reasoning | +| **LLM Model** | Azure OpenAI (gpt-4o-mini) | Intent understanding, response synthesis | +| **A2A Protocol** | JSON-RPC 2.0 over HTTP | Agent-to-agent communication | +| **Producer Agent** | Google ADK 1.23+ | Multi-agent SEO analysis workflow | +| **Producer LLM** | Gemini 2.0 Flash | SEO analysis and recommendations (free tier) | +| **Data Source** | BigQuery public dataset | Product catalog (thelook_ecommerce) | +| **Search Data** | SerpAPI | Competitor search results (100 free/month) | + +--- + +## Further Reading + +- [Microsoft 365 Agents SDK](https://github.com/microsoft/agents) — Parent repository with additional samples and documentation +- [Agents SDK Documentation](https://aka.ms/M365-Agents-SDK-Docs) — 
Official docs +- [Agents for Python](https://github.com/microsoft/agents-for-python) — Python SDK source +- [A2A Protocol Specification](https://google.github.io/a2a/#/) — Agent-to-Agent protocol spec +- [Google ADK Documentation](https://google.github.io/adk-docs/) — Agent Development Kit docs +- [Semantic Kernel (Python)](https://pypi.org/project/semantic-kernel/) — SK orchestration framework + +--- + +## Disclaimer + +This is a **sample reference implementation** for learning and demonstration purposes. It shows how M365 Agents SDK agents can use the A2A protocol with different transmission patterns. It is not production-ready — users are responsible for security hardening, error handling, and deployment considerations before using in production. + +--- + +## Contributing + +This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit [https://cla.opensource.microsoft.com](https://cla.opensource.microsoft.com/). + +When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA. + +This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments. + +## Trademarks + +This project may contain trademarks or logos for projects, products, or services. 
Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.
+
+## License
+
+[MIT](LICENSE)
diff --git a/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/README.md b/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/README.md
new file mode 100644
index 00000000..fdbea0ca
--- /dev/null
+++ b/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/README.md
@@ -0,0 +1,200 @@
+# Brand Intelligence Advisor — A2A Client Agent
+
+An M365 Agents SDK agent that communicates with the Google ADK Brand Search Optimization agent via the **A2A (Agent-to-Agent) protocol**, with **Semantic Kernel** as the LLM orchestrator.
+ +## Architecture + +``` +User (Teams / WebChat / Test CLI) + │ + ▼ + ┌────────────────────────────────────────────┐ + │ Brand Intelligence Advisor │ + │ M365 Agents SDK + Semantic Kernel │ + │ Port 3978 │ + │ │ + │ ┌──────────────────────────────────────┐ │ + │ │ Semantic Kernel Orchestrator │ │ + │ │ ChatCompletionAgent (Azure OpenAI) │ │ + │ │ │ │ + │ │ Tools (via @kernel_function): │ │ + │ │ - analyze_brand → A2A Client │ │ + │ │ - check_push_notifications │ │ + │ │ - get_analysis_history │ │ + │ │ - get_seo_glossary │ │ + │ └──────────────┬───────────────────────┘ │ + │ │ │ + │ ┌──────────────▼───────────────────────┐ │ + │ │ A2A Client │ │ + │ │ - message/send (ping) │ │ + │ │ - message/stream (SSE) │ │ + │ │ - send + webhook (push) │ │ + │ └──────────────┬───────────────────────┘ │ + └─────────────────┼──────────────────────────┘ + │ A2A Protocol (JSON-RPC) + ▼ + ┌────────────────────────────────────────────┐ + │ Brand Search Optimization │ + │ (Google ADK + Gemini 2.0 Flash) │ + │ Port 8080 │ + │ │ + │ Sub-agents: │ + │ - keyword_finding (BigQuery) │ + │ - search_results (SerpAPI) │ + │ - comparison (Gemini LLM) │ + └────────────────────────────────────────────┘ +``` + +## How It Works + +The Semantic Kernel `ChatCompletionAgent` receives the user's natural language message, reasons about intent, and calls the appropriate tool: + +1. **User says**: *"How is Nike doing in shoes?"* +2. **SK reasons**: This is a brand analysis → call `analyze_brand(brand="Nike", category="Active", mode="ping")` +3. **Tool executes**: A2A Client sends `message/send` to the ADK agent +4. **SK synthesizes**: Raw data + strategic interpretation → response to user + +When Azure OpenAI is not configured, the agent falls back to regex-based command routing (`ping `, `stream `, etc.). 
+ +## Project Structure + +``` +a2a-client-agent/ +├── brand_intelligence_advisor/ # Main agent package +│ ├── __init__.py # Package metadata +│ ├── agent.py # M365 SDK AgentApplication & message handlers +│ ├── orchestrator.py # Semantic Kernel ChatCompletionAgent + tools +│ ├── prompt.py # System prompt for the LLM +│ ├── server.py # aiohttp server (M365 + webhook endpoints) +│ └── tools/ # Tool implementations +│ ├── __init__.py +│ ├── a2a_client.py # A2A protocol client (ping, stream, push) +│ └── brand_advisor.py # Domain knowledge, query parsing, formatting +├── run_server.py # Entry point (python run_server.py) +├── test_demo.py # Interactive test CLI with SK orchestrator +├── cli_test.py # Direct A2A protocol test CLI +├── requirements.txt # Python dependencies +├── env.TEMPLATE # Environment variable template +└── README.md # This file +``` + +### Module Responsibilities + +| Module | Purpose | +|--------|---------| +| `agent.py` | M365 SDK `AgentApplication` — registers message handlers, delegates to orchestrator or fallback | +| `orchestrator.py` | Creates Semantic Kernel `ChatCompletionAgent` with `BrandToolsPlugin` (4 tools with `@kernel_function`) | +| `prompt.py` | System prompt defining the advisor persona, A2A pattern selection rules, and response formatting | +| `server.py` | aiohttp server with `/api/messages` (M365) and `/a2a/webhook` (push notification receiver) | +| `tools/a2a_client.py` | Async HTTP client implementing A2A v0.3.0 — `send_message()`, `stream_message()`, `send_with_push()` | +| `tools/brand_advisor.py` | Domain knowledge: 30+ brands, category mapping, SEO glossary, query parsing, history tracking | + +## Prerequisites + +- Python 3.10+ +- ADK agent running on port 8080 (see [ADK Agent README](../adk-agent/README.md)) +- Azure OpenAI endpoint (optional — enables SK orchestrator; without it, regex fallback works) + +## Quick Start + +### 1. 
Install dependencies + +```bash +cd a2a-client-agent +pip install -r requirements.txt +``` + +### 2. Configure environment + +```bash +cp env.TEMPLATE .env +``` + +Edit `.env`: +```bash +# Required — point to the ADK agent +A2A_AGENT_URL=http://localhost:8080 + +# Optional — enables Semantic Kernel LLM orchestration +AZURE_AI_FOUNDRY_ENDPOINT=https://your-resource.services.ai.azure.com +AZURE_AI_FOUNDRY_API_KEY=your-api-key +AZURE_AI_FOUNDRY_MODEL=gpt-4o-mini +``` + +### 3. Start the ADK agent (in another terminal) + +```bash +cd ../adk-agent +poetry run python run_a2a.py +``` + +### 4. Start this agent + +```bash +python run_server.py +``` + +Output: +``` ++------------------------------------------------------------------+ +| Brand Intelligence Advisor | +| M365 Agents SDK + Semantic Kernel + A2A Protocol | +| | +| LLM Orchestration: ENABLED (Semantic Kernel + Azure OpenAI) | ++------------------------------------------------------------------+ +``` + +### 5. Test with the interactive CLI + +```bash +python test_demo.py +``` + +## Endpoints + +| Endpoint | Method | Purpose | +|----------|--------|---------| +| `/api/messages` | POST | M365 SDK message processing (user <-> agent) | +| `/a2a/webhook` | POST | Receives A2A push notifications from ADK agent | +| `/a2a/webhook` | GET | Debug view of received push notifications | + +## Semantic Kernel Integration + +The orchestrator uses [Semantic Kernel](https://pypi.org/project/semantic-kernel/) 1.40+ with: + +- **`AzureChatCompletion`** service connecting to Azure OpenAI +- **`ChatCompletionAgent`** with a system prompt defining the advisor persona +- **`BrandToolsPlugin`** exposing 4 tools via `@kernel_function`: + +| Tool | What It Does | +|------|-------------| +| `analyze_brand` | Calls A2A Client → ADK agent; mode param selects ping/stream/push | +| `check_push_notifications` | Returns any push notifications received via webhook | +| `get_analysis_history` | Returns session history of past analyses | +| 
`get_seo_glossary` | Looks up SEO terms from the built-in glossary | + +The agent decides which tool to call and which A2A pattern to use based on natural language understanding of the user's request. + +## Fallback Mode (No LLM) + +When `AZURE_AI_FOUNDRY_ENDPOINT` is not set, the agent uses regex-based command routing: + +| Command | A2A Pattern | +|---------|-------------| +| `ping Nike socks` | `message/send` (synchronous) | +| `stream Adidas shoes` | `message/stream` (SSE) | +| `push Puma sneakers` | `message/send` + webhook | +| `status` | View webhook notifications | +| `help` / `glossary` / `strategy` / `history` | Local capabilities | + +## Dependencies + +``` +microsoft-agents-hosting-aiohttp # M365 SDK server +microsoft-agents-hosting-core # M365 SDK core +microsoft-agents-authentication-msal # M365 SDK auth +semantic-kernel>=1.40.0 # LLM orchestration +openai>=1.30.0 # Azure OpenAI client +httpx / httpx-sse # A2A HTTP client +pydantic>=2.0,<2.12 # Data validation (SK compatibility) +``` diff --git a/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/brand_intelligence_advisor/__init__.py b/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/brand_intelligence_advisor/__init__.py new file mode 100644 index 00000000..bdafd75e --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/brand_intelligence_advisor/__init__.py @@ -0,0 +1,19 @@ +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. + +""" +Brand Intelligence Advisor — A2A Client Agent + +An AI-powered brand advisor that consumes a remote Brand Search Optimization +agent (Google ADK) via the A2A (Agent-to-Agent) protocol, orchestrated by +Microsoft Semantic Kernel + Azure OpenAI. 
+ +Package structure: + agent.py — M365 Agents SDK routes and message handlers + orchestrator.py — Semantic Kernel ChatCompletionAgent (LLM + tools) + prompt.py — System prompt for the advisor persona + server.py — aiohttp server with webhook endpoint + tools/ — Tool implementations + a2a_client.py — A2A protocol client (ping, stream, push) + brand_advisor.py — Domain knowledge (brands, SEO glossary) +""" diff --git a/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/brand_intelligence_advisor/agent.py b/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/brand_intelligence_advisor/agent.py new file mode 100644 index 00000000..0b88c5d9 --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/brand_intelligence_advisor/agent.py @@ -0,0 +1,306 @@ +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. + +""" +Brand Intelligence Advisor — M365 Agents SDK Agent. + +This is the main agent module that bridges the M365 Agents SDK messaging +layer with the Semantic Kernel orchestrator and A2A protocol client. + +Architecture: + M365 Agents SDK (messaging from Teams / WebChat / CLI) + → AgentApplication message handler + → AgentOrchestrator (Semantic Kernel + Azure OpenAI) + → BrandToolsPlugin → A2A Client → Google ADK Agent + ← Strategic analysis response + ← M365 SDK response to user + +The LLM decides which A2A pattern to use based on user intent: + - ping (message/send) — quick synchronous analysis + - stream (message/stream) — real-time SSE streaming + - push (message/send + wh) — background with webhook notification + +When Azure AI Foundry is not configured, falls back to regex-based +command routing for basic functionality. 
+""" + +import sys +import logging +import traceback +from os import environ + +from dotenv import load_dotenv + +from microsoft_agents.hosting.aiohttp import CloudAdapter +from microsoft_agents.hosting.core import ( + Authorization, + AgentApplication, + TurnState, + TurnContext, + MemoryStorage, +) +from microsoft_agents.authentication.msal import MsalConnectionManager +from microsoft_agents.activity import load_configuration_from_env + +from .tools.a2a_client import A2AClient +from .tools.brand_advisor import BrandAdvisor + +logger = logging.getLogger(__name__) + +load_dotenv() +agents_sdk_config = load_configuration_from_env(environ) + + +# --------------------------------------------------------------------------- +# M365 Agents SDK Infrastructure +# --------------------------------------------------------------------------- + +STORAGE = MemoryStorage() +CONNECTION_MANAGER = MsalConnectionManager(**agents_sdk_config) +ADAPTER = CloudAdapter(connection_manager=CONNECTION_MANAGER) +AUTHORIZATION = Authorization(STORAGE, CONNECTION_MANAGER, **agents_sdk_config) + +AGENT_APP = AgentApplication[TurnState]( + storage=STORAGE, + adapter=ADAPTER, + authorization=AUTHORIZATION, + **agents_sdk_config, +) + + +# --------------------------------------------------------------------------- +# A2A Client + Advisor + Orchestrator Setup +# --------------------------------------------------------------------------- + +A2A_AGENT_URL = environ.get("A2A_AGENT_URL", "http://localhost:8080") +AGENT_HOST = environ.get("AGENT_HOST", "localhost") +AGENT_PORT = int(environ.get("AGENT_PORT", "3978")) + +a2a_client = A2AClient(base_url=A2A_AGENT_URL) +advisor = BrandAdvisor() + +# In-memory store for push notifications (populated by webhook in server.py) +push_notifications: list[dict] = [] + +# Webhook URL for this agent (ADK agent POSTs results here) +webhook_url = f"http://{AGENT_HOST}:{AGENT_PORT}/a2a/webhook" + +# Try to initialize the Semantic Kernel orchestrator +orchestrator = 
None +LLM_AVAILABLE = False + +try: + from .orchestrator import AgentOrchestrator + + orchestrator = AgentOrchestrator( + a2a_client=a2a_client, + advisor=advisor, + push_notifications=push_notifications, + webhook_url=webhook_url, + ) + LLM_AVAILABLE = True + logger.info("Semantic Kernel orchestrator initialized successfully") +except Exception as e: + logger.warning( + f"LLM orchestration not available: {e} -- " + f"Falling back to regex-based command routing. " + f"Set AZURE_AI_FOUNDRY_ENDPOINT in .env to enable LLM orchestration." + ) + + +# --------------------------------------------------------------------------- +# Welcome Handler +# --------------------------------------------------------------------------- + +@AGENT_APP.conversation_update("membersAdded") +async def on_members_added(context: TurnContext, _state: TurnState): + """Send welcome message when a user joins.""" + try: + card = await a2a_client.discover() + agent_name = card.name + logger.info(f"Connected to A2A agent: {agent_name}") + except Exception as e: + agent_name = "Brand Search Optimization" + logger.warning(f"Could not discover A2A agent: {e}") + + if LLM_AVAILABLE: + welcome = ( + f"# Brand Intelligence Advisor\n\n" + f"I'm your AI-powered brand intelligence advisor, connected to the " + f"**{agent_name}** agent via the A2A protocol.\n\n" + f"Just tell me what you need in natural language:\n\n" + f"- *\"How is Nike performing in shoes?\"*\n" + f"- *\"Compare Nike and Adidas in sportswear\"*\n" + f"- *\"Run a deep analysis on Puma Active\"*\n" + f"- *\"What does CTR mean?\"*\n\n" + f"I'll choose the best A2A communication pattern automatically " + f"and provide strategic insights on top of the raw data.\n\n" + f"*Powered by M365 Agents SDK + Semantic Kernel + A2A Protocol*" + ) + else: + welcome = advisor.get_help_text(agent_name) + + await context.send_activity(welcome) + return True + + +# --------------------------------------------------------------------------- +# Main Message 
Handler +# --------------------------------------------------------------------------- + +@AGENT_APP.activity("message") +async def on_message(context: TurnContext, _state: TurnState): + """ + All user messages flow through this single handler. + + LLM mode: SK orchestrator reasons about intent → calls tools → synthesizes + Fallback: Regex-based command routing (ping/stream/push/status/etc.) + """ + text = (context.activity.text or "").strip() + if not text: + return + + if LLM_AVAILABLE: + await _handle_llm(context, text) + else: + await _handle_fallback(context, text) + + +async def _handle_llm(context: TurnContext, text: str): + """Route message through the Semantic Kernel orchestrator.""" + conversation_id = context.activity.conversation.id or "default" + await context.send_activity({"type": "typing"}) + + try: + response = await orchestrator.process_message(text, conversation_id) + await context.send_activity(response) + except Exception as e: + logger.error(f"Orchestrator error: {e}") + traceback.print_exc() + await context.send_activity( + f"I encountered an error processing your request: {str(e)}\n\n" + f"Please try again." + ) + + +async def _handle_fallback(context: TurnContext, text: str): + """ + Regex-based fallback when LLM orchestration is not available. + Supports all 3 A2A patterns + local capabilities via command prefixes. 
+ """ + text_lower = text.lower().strip() + + # ── A2A Pattern: Ping (message/send) ────────────────────────────── + if text_lower.startswith("ping "): + raw = text[5:].strip() + query = advisor.parse_query(raw) + if not query.is_valid: + await context.send_activity(f"Error: {query.error}") + return + try: + a2a_request = advisor.formulate_a2a_request(query) + task = await a2a_client.send_message(a2a_request) + response_text = task.message or f"Task completed: {task.status}" + formatted = advisor.format_executive_summary( + query.brand, response_text, "ping" + ) + advisor.record_analysis(query, "ping", response_text, task.message) + await context.send_activity(formatted) + except Exception as e: + await context.send_activity( + f"Ping failed: {e}\n\n" + f"Make sure the ADK agent is running at `{A2A_AGENT_URL}`" + ) + + # ── A2A Pattern: Stream (message/stream) ────────────────────────── + elif text_lower.startswith("stream "): + raw = text[7:].strip() + query = advisor.parse_query(raw) + if not query.is_valid: + await context.send_activity(f"Error: {query.error}") + return + try: + a2a_request = advisor.formulate_a2a_request(query) + collected = [] + async for event in a2a_client.stream_message(a2a_request): + if event.text: + collected.append(event.text) + full_response = "\n".join(collected) + advisor.record_analysis(query, "stream", full_response) + formatted = advisor.format_executive_summary( + query.brand, full_response, "sse" + ) + await context.send_activity(formatted) + except Exception as e: + await context.send_activity(f"Stream failed: {e}") + + # ── A2A Pattern: Push (message/send + webhook) ──────────────────── + elif text_lower.startswith("push "): + raw = text[5:].strip() + query = advisor.parse_query(raw) + if not query.is_valid: + await context.send_activity(f"Error: {query.error}") + return + try: + a2a_request = advisor.formulate_a2a_request(query) + task = await a2a_client.send_with_push(a2a_request, webhook_url) + formatted = 
advisor.format_push_acknowledgment(
+                query.brand, task.task_id
+            )
+            advisor.record_analysis(
+                query, "push", f"Task submitted: {task.status}"
+            )
+            await context.send_activity(formatted)
+        except Exception as e:
+            await context.send_activity(f"Push failed: {e}")
+
+    # ── Status (check push notifications) ─────────────────────────────
+    elif text_lower == "status":
+        if not push_notifications:
+            await context.send_activity(
+                "No push notifications received yet. Use `push <brand>` first."
+            )
+        else:
+            lines = ["**Received Push Notifications**\n"]
+            for i, n in enumerate(push_notifications, 1):
+                tid = n.get("task_id", "?")[:12]
+                st = n.get("status", "?")
+                lines.append(f"  {i}. Task `{tid}...` -- **{st}**")
+            await context.send_activity("\n".join(lines))
+
+    # ── Local capabilities (no A2A needed) ────────────────────────────
+    elif text_lower == "help":
+        await context.send_activity(advisor.get_help_text())
+    elif text_lower == "history":
+        await context.send_activity(advisor.get_history_summary())
+    elif text_lower == "strategy":
+        await context.send_activity(advisor.get_strategy_tips())
+    elif text_lower == "glossary":
+        await context.send_activity(advisor.get_glossary())
+    elif text_lower.startswith("define "):
+        term = text[7:].strip()
+        defn = advisor.get_seo_definition(term)
+        await context.send_activity(
+            defn or f"Term '{term}' not found. Type `glossary` for all terms."
+        )
+    else:
+        await context.send_activity(
+            "LLM orchestration is not configured. 
"
+            "Set `AZURE_AI_FOUNDRY_ENDPOINT` in your `.env` file.\n\n"
+            "Available commands: `ping <brand>`, `stream <brand>`, "
+            "`push <brand>`, `status`, `help`, `glossary`, `define <term>`"
+        )
+
+
+# ---------------------------------------------------------------------------
+# Error Handler
+# ---------------------------------------------------------------------------
+
+@AGENT_APP.error
+async def on_error(context: TurnContext, error: Exception):
+    """Global error handler for unhandled exceptions."""
+    print(f"\n[on_turn_error] unhandled error: {error}", file=sys.stderr)
+    traceback.print_exc()
+    await context.send_activity(
+        f"An unexpected error occurred: {str(error)}\nPlease try again."
+    )
diff --git a/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/brand_intelligence_advisor/orchestrator.py b/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/brand_intelligence_advisor/orchestrator.py
new file mode 100644
index 00000000..e3ddb86c
--- /dev/null
+++ b/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/brand_intelligence_advisor/orchestrator.py
@@ -0,0 +1,361 @@
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the MIT License.
+
+"""
+Semantic Kernel Orchestrator for the Brand Intelligence Advisor.
+ +Architecture: + User message + → M365 Agents SDK (agent.py) + → AgentOrchestrator.process_message() + → SK ChatCompletionAgent (automatic tool calling + LLM reasoning) + → BrandToolsPlugin (@kernel_function methods) + → A2A Client → Google ADK Agent (remote analysis) + ← Tool result (JSON) + ← LLM-synthesized strategic response (markdown) + ← M365 SDK response + ← User + +What Semantic Kernel handles automatically: + - LLM chat completions loop (no manual while-loop) + - Function/tool calling dispatch (no manual JSON parsing) + - Tool schema generation from @kernel_function decorators + - Conversation history management + +What this file contains: + - BrandToolsPlugin: SK Plugin with 4 tools exposed via @kernel_function + - AgentOrchestrator: Creates and invokes the ChatCompletionAgent +""" + +import json +import logging +from os import environ +from typing import Annotated + +import semantic_kernel as sk +from semantic_kernel.agents import ChatCompletionAgent +from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion +from semantic_kernel.contents import ChatHistory +from semantic_kernel.functions import kernel_function + +from .prompt import SYSTEM_PROMPT +from .tools.a2a_client import A2AClient +from .tools.brand_advisor import BrandAdvisor + +logger = logging.getLogger(__name__) + + +# --------------------------------------------------------------------------- +# Semantic Kernel Plugin — replaces manual TOOL_DEFINITIONS + ToolExecutor +# --------------------------------------------------------------------------- + + +class BrandToolsPlugin: + """ + SK Plugin exposing brand analysis tools as @kernel_function methods. + + Semantic Kernel auto-generates tool schemas from the type annotations + and dispatches calls automatically — no manual JSON parsing needed. 
+ + Tools: + analyze_brand — Call remote ADK agent via A2A (ping/stream/push) + check_push_notifications — View webhook results from background jobs + get_analysis_history — Session analysis history for comparisons + get_seo_glossary — SEO terminology definitions + """ + + def __init__( + self, + a2a_client: A2AClient, + advisor: BrandAdvisor, + push_notifications: list[dict], + webhook_url: str, + ): + self.a2a_client = a2a_client + self.advisor = advisor + self.push_notifications = push_notifications + self.webhook_url = webhook_url + + # ── Tool 1: Brand Analysis via A2A ──────────────────────────────────── + + @kernel_function( + name="analyze_brand", + description=( + "Analyze a brand's search optimization performance by calling " + "the remote A2A Brand Search Optimization agent. Choose the " + "appropriate communication mode based on the user's needs. " + "Returns structured JSON with the analysis results." + ), + ) + async def analyze_brand( + self, + brand: Annotated[str, "The brand name to analyze (e.g., Nike, Adidas, Puma)"], + mode: Annotated[str, "A2A pattern: 'ping' (synchronous), 'stream' (SSE), 'push' (background webhook)"], + category: Annotated[str, "Product category (e.g., Active, Tops & Tees). Map shoes/sportswear to Active. 
Use empty string if unknown."] = "", + ) -> str: + """Execute a brand analysis via the appropriate A2A pattern.""" + query = self.advisor.parse_query( + f"{brand} {category}".strip() if category else brand + ) + if not query.is_valid: + return json.dumps({"error": query.error}) + + a2a_request = self.advisor.formulate_a2a_request(query) + + # ── Ping: synchronous message/send ──────────────────────────── + if mode == "ping": + task = await self.a2a_client.send_message(a2a_request) + response_text = task.message or f"Task completed with status: {task.status}" + self.advisor.record_analysis(query, "ping", response_text, task.message) + return json.dumps({ + "pattern": "ping (message/send)", + "brand": query.brand, + "category": query.category, + "task_id": task.task_id, + "status": task.status, + "analysis": response_text, + }) + + # ── Stream: SSE message/stream ──────────────────────────────── + elif mode == "stream": + collected_chunks = [] + chunk_count = 0 + async for event in self.a2a_client.stream_message(a2a_request): + chunk_count += 1 + if event.text: + collected_chunks.append(event.text) + + full_response = "\n".join(collected_chunks) + self.advisor.record_analysis(query, "stream", full_response) + return json.dumps({ + "pattern": "stream (message/stream SSE)", + "brand": query.brand, + "category": query.category, + "chunks_received": chunk_count, + "analysis": full_response, + }) + + # ── Push: background with webhook notification ──────────────── + elif mode == "push": + task = await self.a2a_client.send_with_push(a2a_request, self.webhook_url) + self.advisor.record_analysis(query, "push", f"Task submitted: {task.status}") + return json.dumps({ + "pattern": "push (message/send + webhook)", + "brand": query.brand, + "category": query.category, + "task_id": task.task_id, + "status": task.status, + "webhook_url": self.webhook_url, + "note": ( + "Analysis is running in the background. " + "Use check_push_notifications to see results when ready." 
+ ), + }) + + else: + return json.dumps({"error": f"Unknown mode: {mode}"}) + + # ── Tool 2: Check Push Notifications ────────────────────────────────── + + @kernel_function( + name="check_push_notifications", + description=( + "Check the status of background (push) analyses. Returns any " + "webhook notifications received from the remote A2A agent." + ), + ) + async def check_push_notifications(self) -> str: + """Return all received push notifications.""" + if not self.push_notifications: + return json.dumps({ + "notifications": [], + "message": ( + "No push notifications received yet. " + "Submit a brand analysis with mode='push' first." + ), + }) + + return json.dumps({ + "notifications": [ + { + "task_id": n.get("task_id", "unknown"), + "status": n.get("status", "unknown"), + "received_at": n.get("received_at", "unknown"), + "text": n.get("text", "")[:500], + } + for n in self.push_notifications + ], + "count": len(self.push_notifications), + }) + + # ── Tool 3: Analysis History ────────────────────────────────────────── + + @kernel_function( + name="get_analysis_history", + description=( + "Retrieve past brand analyses from the current session. " + "Use this for comparisons, trend analysis across the session, " + "or when the user asks about previous results." + ), + ) + async def get_analysis_history(self) -> str: + """Return session analysis history.""" + history = self.advisor.get_history_data() + if not history: + return json.dumps({ + "history": [], + "message": "No analyses performed yet in this session.", + }) + return json.dumps({"history": history, "count": len(history)}) + + # ── Tool 4: SEO Glossary ───────────────────────────────────────────── + + @kernel_function( + name="get_seo_glossary", + description=( + "Look up SEO and brand optimization terminology. " + "Use when the user asks 'what is CTR?', 'define SERP', etc." + ), + ) + async def get_seo_glossary( + self, + term: Annotated[str, "Specific term to define (e.g., 'CTR', 'SERP'). 
Use empty string for full glossary."] = "", + ) -> str: + """Return SEO glossary data or a specific term definition.""" + if term and term.strip(): + definition = self.advisor.get_seo_definition_raw(term.strip()) + if definition: + return json.dumps({"term": term, "definition": definition}) + else: + return json.dumps({ + "error": f"Term '{term}' not found in glossary", + "available_terms": self.advisor.get_glossary_terms(), + }) + else: + return json.dumps({"glossary": self.advisor.get_glossary_data()}) + + +# --------------------------------------------------------------------------- +# Agent Orchestrator — public interface for agent.py, test_demo.py, etc. +# --------------------------------------------------------------------------- + + +class AgentOrchestrator: + """ + LLM Orchestrator using Semantic Kernel + Azure OpenAI. + + Replaces the manual Chat Completions + function-calling loop with + Semantic Kernel's ChatCompletionAgent, which handles tool schema + generation, automatic function dispatch, and the LLM ↔ tool loop. + + Public interface: + __init__(a2a_client, advisor, push_notifications, webhook_url) + process_message(user_text, conversation_id) -> str + """ + + def __init__( + self, + a2a_client: A2AClient, + advisor: BrandAdvisor, + push_notifications: list[dict], + webhook_url: str, + ): + # ── Read Azure OpenAI config from environment ───────────────── + endpoint = environ.get("AZURE_AI_FOUNDRY_ENDPOINT", "") + api_key = environ.get("AZURE_AI_FOUNDRY_API_KEY", "") + + if not endpoint: + raise ValueError( + "AZURE_AI_FOUNDRY_ENDPOINT environment variable is required. " + "Set it to your Azure AI Services endpoint " + "(e.g., https://your-resource.services.ai.azure.com)." + ) + if not api_key: + raise ValueError( + "AZURE_AI_FOUNDRY_API_KEY environment variable is required. " + "Set it to your Azure AI Services API key." + ) + + model = environ.get("AZURE_AI_FOUNDRY_MODEL", "gpt-4o-mini") + + # ── 1. 
Create the Kernel ────────────────────────────────────── + self.kernel = sk.Kernel() + + # ── 2. Create Azure OpenAI chat completion service ──────────── + self.chat_service = AzureChatCompletion( + service_id="brand-advisor", + deployment_name=model, + endpoint=endpoint, + api_key=api_key, + api_version="2024-12-01-preview", + ) + + # ── 3. Register the plugin (tools auto-discovered) ─────────── + self.plugin = BrandToolsPlugin( + a2a_client=a2a_client, + advisor=advisor, + push_notifications=push_notifications, + webhook_url=webhook_url, + ) + self.kernel.add_plugin(self.plugin, plugin_name="brand") + + # ── 4. Create the ChatCompletionAgent ───────────────────────── + self.agent = ChatCompletionAgent( + service=self.chat_service, + kernel=self.kernel, + name="BrandIntelligenceAdvisor", + instructions=SYSTEM_PROMPT, + ) + + # ── 5. Per-conversation chat history ────────────────────────── + self._histories: dict[str, ChatHistory] = {} + + logger.info( + f"SK orchestrator ready: model={model}, " + f"endpoint={endpoint[:60]}..., " + f"plugin=brand (4 functions)" + ) + + def _get_history(self, conversation_id: str) -> ChatHistory: + """Get or create a ChatHistory for the given conversation.""" + if conversation_id not in self._histories: + self._histories[conversation_id] = ChatHistory() + return self._histories[conversation_id] + + async def process_message( + self, user_text: str, conversation_id: str + ) -> str: + """ + Process a user message through the SK ChatCompletionAgent. + + The agent automatically handles: + 1. Sending the message + history to the LLM + 2. Parsing any tool_calls in the response + 3. Executing the matching @kernel_function methods + 4. Feeding results back to the LLM + 5. 
Repeating until the LLM produces a final text response + """ + history = self._get_history(conversation_id) + history.add_user_message(user_text) + + logger.info( + f"Processing message via SK agent " + f"(conversation={conversation_id}, history={len(history)} messages)" + ) + + # The agent handles the entire LLM ↔ tool loop automatically + response_parts = [] + async for message in self.agent.invoke(history): + response_parts.append(str(message)) + + final_text = "\n".join(response_parts) if response_parts else "" + + # Trim history to prevent token overflow (keep last 40 messages) + if len(history) > 50: + trimmed = ChatHistory() + for msg in list(history)[-40:]: + trimmed.add_message(msg) + self._histories[conversation_id] = trimmed + logger.info("Trimmed conversation history to 40 messages") + + return final_text diff --git a/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/brand_intelligence_advisor/prompt.py b/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/brand_intelligence_advisor/prompt.py new file mode 100644 index 00000000..4ee8eaa9 --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/brand_intelligence_advisor/prompt.py @@ -0,0 +1,81 @@ +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. + +""" +System prompt for the Brand Intelligence Advisor. + +Defines the advisor persona, available A2A communication patterns, +product category mappings, and response formatting guidelines. + +This mirrors the ADK agent's prompt.py pattern — keeping prompts +separate from orchestration logic for easier tuning. +""" + +SYSTEM_PROMPT = """\ +You are the Brand Intelligence Advisor, an expert AI agent specializing in +brand search optimization strategy. + +## Your Role +You help marketing professionals and brand managers understand how their brands +perform in search engines (especially Google Shopping) and AI-powered answer +engines. 
You don't just fetch data -- you ANALYZE it, COMPARE brands, identify +TRENDS, and make strategic RECOMMENDATIONS. + +## Your Tools +You can communicate with a remote Brand Search Optimization agent (powered by +Google ADK) via the A2A (Agent-to-Agent) protocol. You have three communication +patterns available through the analyze_brand tool: + +- **mode="ping"**: Quick synchronous analysis via message/send. + Use for simple questions, single-brand lookups, or when speed matters. + +- **mode="stream"**: Detailed streaming analysis via message/stream. + Use for deep dives, comprehensive reports, or when the user wants extensive + data. The full streamed response is collected and returned to you. + +- **mode="push"**: Background processing with webhook notification via + message/send + pushNotificationConfig/set. + Use when the user wants to submit a job and check back later, or says + things like "run in background", "check later", "batch". + +## Important: Product Categories +The remote agent analyzes products from a BigQuery e-commerce dataset. +Valid categories include: Active, Tops & Tees, Fashion Hoodies & Sweatshirts, +Jeans, Swim, Shorts, Sleep & Lounge, Plus, Dresses, Pants, Outerwear & Coats, +Blazers & Jackets, Sweaters, Socks, Accessories. + +When the user says generic terms, map them to valid categories: +- "shoes" / "sneakers" / "sportswear" / "running" → category: "Active" +- "shirts" / "tops" → category: "Tops & Tees" +- "jackets" → category: "Blazers & Jackets" +- "pants" → category: "Pants" + +If the category is unclear, omit it — the remote agent will show available +categories and you can pick the most relevant one. + +## Your Intelligence (Beyond the Remote Agent) +- Compare multiple brands by calling analyze_brand multiple times, then + synthesize a comparative analysis in your own words. +- Track session history using get_analysis_history to answer "how did X + compare to Y earlier?" questions. 
+- Provide strategic recommendations based on analysis results. +- Explain SEO terminology using get_seo_glossary when users are confused. +- Remember conversation context for natural follow-up questions. + +## Guidelines +- When a user mentions a brand, understand their INTENT before choosing a tool. +- For comparisons (e.g., "Nike vs Adidas"), call analyze_brand for each brand, + then write your own comparative synthesis. +- Always add your own strategic insight on top of raw analysis data. +- Choose the A2A pattern based on the user's needs: + Quick question / "how is X doing?" --> ping + "Give me a detailed report" / "deep dive" --> stream + "Run in background" / "come back later" --> push +- If unsure which pattern, default to ping for simplicity. +- Format responses with clear structure: Summary, Key Findings, Recommendations. +- Briefly mention which A2A pattern you used (e.g., "Using synchronous ping + for a quick lookup..."). +- Be conversational but professional. Use markdown formatting. +- You are NOT the Brand Search Optimization agent -- you delegate to it and + then enrich the response with your own analysis. +""" diff --git a/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/brand_intelligence_advisor/server.py b/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/brand_intelligence_advisor/server.py new file mode 100644 index 00000000..99f6b311 --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/brand_intelligence_advisor/server.py @@ -0,0 +1,175 @@ +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. + +""" +Brand Intelligence Advisor — HTTP Server. 
+
+Runs an aiohttp server with three endpoints:
+  - POST /api/messages → M365 SDK message processing (user <-> agent)
+  - POST /a2a/webhook  → A2A push notification receiver (ADK agent -> this agent)
+  - GET  /a2a/webhook  → View received notifications (debug)
+
+The M365 endpoint handles Teams / WebChat / CLI interactions, while the
+webhook endpoint receives asynchronous push notifications from the remote
+ADK agent when background analyses complete (A2A push pattern).
+"""
+
+import json
+import logging
+from os import environ
+from datetime import datetime, timezone
+
+from microsoft_agents.hosting.core import AgentApplication, AgentAuthConfiguration
+from microsoft_agents.hosting.aiohttp import (
+    start_agent_process,
+    jwt_authorization_middleware,
+    CloudAdapter,
+)
+from aiohttp.web import Request, Response, Application, run_app, json_response
+
+logger = logging.getLogger(__name__)
+
+
+def start_server(
+    agent_application: AgentApplication,
+    auth_configuration: AgentAuthConfiguration,
+    push_notifications: list,
+):
+    """
+    Start the aiohttp server with M365 message endpoint and A2A webhook.
+
+    Args:
+        agent_application: The configured M365 AgentApplication instance.
+        auth_configuration: MSAL authentication configuration.
+        push_notifications: Shared list that the webhook handler appends
+                            incoming notifications to, so the agent can
+                            surface them to users via the 'status' command.
+    
+ """ + + # ── M365 SDK Message Endpoint ───────────────────────────────────────── + + async def messages_entry_point(req: Request) -> Response: + """Handle incoming M365 SDK messages from users/channels.""" + agent: AgentApplication = req.app["agent_app"] + adapter: CloudAdapter = req.app["adapter"] + + logger.info(f"M365 message received from {req.remote}") + return await start_agent_process(req, agent, adapter) + + # ── A2A Webhook Endpoint (Push Notifications) ───────────────────────── + + async def a2a_webhook_handler(req: Request) -> Response: + """ + Receive push notifications from the ADK agent. + + The ADK agent sends a JSON-RPC message when it finishes processing + a background task (A2A push pattern). We parse the notification, + store it, and make it available for the user via the 'status' command. + """ + try: + body = await req.json() + logger.info("A2A push notification received!") + logger.info(f" Body: {json.dumps(body, indent=2)[:500]}") + + # Extract useful information — handle both JSON-RPC response + # and notification formats from the ADK push sender + token = req.headers.get("X-A2A-Notification-Token", "none") + result = body.get("result", body.get("params", body)) + task_id = result.get("id", result.get("taskId", "unknown")) + status_obj = result.get("status", {}) + status = ( + status_obj.get("state", "unknown") + if isinstance(status_obj, dict) + else str(status_obj) + ) + + # Extract text from artifacts or status message + text_parts = [] + for artifact in result.get("artifacts", []): + for part in artifact.get("parts", []): + if "text" in part: + text_parts.append(part["text"]) + + status_msg = result.get("status", {}).get("message", {}) + if isinstance(status_msg, dict): + for part in status_msg.get("parts", []): + if "text" in part: + text_parts.append(part["text"]) + + notification = { + "task_id": task_id, + "status": status, + "token": token, + "text": " ".join(text_parts) if text_parts else "", + "received_at": 
datetime.now(timezone.utc).strftime("%H:%M:%S UTC"), + "raw": body, + } + + push_notifications.append(notification) + logger.info( + f"Notification stored (total: {len(push_notifications)}) " + f"| Task: {task_id[:12]}... | Status: {status}" + ) + + return json_response({"status": "received"}, status=200) + + except Exception as e: + logger.error(f"Webhook handler error: {e}") + return json_response({"error": str(e)}, status=500) + + async def a2a_webhook_list(req: Request) -> Response: + """GET endpoint to inspect received push notifications (debug).""" + return json_response( + { + "total": len(push_notifications), + "notifications": [ + { + "task_id": n["task_id"], + "status": n["status"], + "received_at": n["received_at"], + "text_preview": n.get("text", "")[:200], + } + for n in push_notifications + ], + } + ) + + # ── Application Setup ───────────────────────────────────────────────── + + # Use JWT middleware for authenticated mode, or no middleware for anonymous + client_id = environ.get( + "CONNECTIONS__SERVICE_CONNECTION__SETTINGS__CLIENTID", "" + ) + if client_id: + middlewares = [jwt_authorization_middleware] + logger.info("Running in authenticated mode (MSAL)") + else: + middlewares = [] + logger.info("Running in anonymous mode (no MSAL auth)") + + APP = Application(middlewares=middlewares) + + # M365 SDK endpoint + APP.router.add_post("/api/messages", messages_entry_point) + + # A2A webhook endpoints + APP.router.add_post("/a2a/webhook", a2a_webhook_handler) + APP.router.add_get("/a2a/webhook", a2a_webhook_list) + + # Store references for handlers + APP["agent_configuration"] = auth_configuration + APP["agent_app"] = agent_application + APP["adapter"] = agent_application.adapter + + host = environ.get("AGENT_HOST", "localhost") + port = int(environ.get("AGENT_PORT", "3978")) + + logger.info(f"Starting Brand Intelligence Advisor on http://{host}:{port}") + logger.info(f" M365 messages: POST http://{host}:{port}/api/messages") + logger.info(f" A2A 
webhook: POST http://{host}:{port}/a2a/webhook") + logger.info(f" Webhook status: GET http://{host}:{port}/a2a/webhook") + + try: + run_app(APP, host=host, port=port) + except Exception as error: + raise error diff --git a/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/brand_intelligence_advisor/tools/__init__.py b/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/brand_intelligence_advisor/tools/__init__.py new file mode 100644 index 00000000..4a7a262c --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/brand_intelligence_advisor/tools/__init__.py @@ -0,0 +1,9 @@ +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. + +""" +Tools for the Brand Intelligence Advisor agent. + + a2a_client.py — A2A protocol client (message/send, message/stream, push) + brand_advisor.py — Domain knowledge layer (query parsing, SEO glossary) +""" diff --git a/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/brand_intelligence_advisor/tools/a2a_client.py b/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/brand_intelligence_advisor/tools/a2a_client.py new file mode 100644 index 00000000..768a1872 --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/brand_intelligence_advisor/tools/a2a_client.py @@ -0,0 +1,343 @@ +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. + +""" +A2A Protocol Client — Implements all communication patterns for +interacting with a remote A2A-compliant agent (Google ADK Brand Search +Optimization Agent). + +Patterns: + 1. Ping (message/send) — Synchronous blocking request/response + 2. SSE (message/stream) — Server-Sent Events streaming + 3. Push (pushNotificationConfig/set + message/send) — Webhook-based async + 4. 
Webhook receiver — Handled server-side in server.py + +Usage: + from brand_intelligence_advisor.tools.a2a_client import A2AClient + + client = A2AClient("http://localhost:8080") + card = await client.discover() # fetch agent card + task = await client.send_message("Analyze Nike") # ping + async for event in client.stream_message("Analyze Adidas"): # stream + print(event.text) + task = await client.send_with_push("Analyze Puma", webhook_url) # push +""" + +import json +import uuid +import logging +from typing import AsyncIterator, Optional +from dataclasses import dataclass, field + +import httpx + +logger = logging.getLogger(__name__) + + +# ── Data Classes ────────────────────────────────────────────────────────────── + +@dataclass +class AgentCard: + """Discovered remote agent metadata from /.well-known/agent-card.json.""" + name: str + description: str + url: str + version: str + capabilities: dict = field(default_factory=dict) + skills: list = field(default_factory=list) + + +@dataclass +class A2ATask: + """Represents a task returned from the A2A agent.""" + task_id: str + context_id: str + status: str # submitted, working, completed, failed, canceled + message: Optional[str] = None + artifacts: list = field(default_factory=list) + + +@dataclass +class SSEEvent: + """A single Server-Sent Event from the A2A stream.""" + event_type: str # status, artifact, error + data: dict = field(default_factory=dict) + text: Optional[str] = None + + +# ── A2A Client ──────────────────────────────────────────────────────────────── + +class A2AClient: + """ + Client for the A2A (Agent-to-Agent) protocol v0.3.0. + + Supports all 4 communication patterns: + - discover() → GET /.well-known/agent-card.json + - send_message() → JSON-RPC message/send (ping/sync) + - stream_message() → JSON-RPC message/stream (SSE) + - register_push() → JSON-RPC pushNotificationConfig/set + + All methods use JSON-RPC 2.0 over HTTP as specified by the A2A protocol. 
+ """ + + def __init__(self, base_url: str, timeout: float = 120.0): + """ + Args: + base_url: Root URL of the A2A agent (e.g. http://localhost:8080). + timeout: HTTP timeout in seconds (default 120s for large analyses). + """ + self.base_url = base_url.rstrip("/") + self._client = httpx.AsyncClient(timeout=timeout) + self._agent_card: Optional[AgentCard] = None + + async def close(self): + """Close the underlying HTTP client.""" + await self._client.aclose() + + # ── 0. Agent Discovery ──────────────────────────────────────────────── + + async def discover(self) -> AgentCard: + """ + Fetch the remote agent's Agent Card from the well-known endpoint. + This is the first step in any A2A interaction — discovering what + the remote agent can do and which patterns it supports. + """ + url = f"{self.base_url}/.well-known/agent-card.json" + logger.info(f"Discovering agent at {url}") + + resp = await self._client.get(url) + resp.raise_for_status() + data = resp.json() + + self._agent_card = AgentCard( + name=data.get("name", "Unknown"), + description=data.get("description", ""), + url=data.get("url", self.base_url), + version=data.get("version", "unknown"), + capabilities=data.get("capabilities", {}), + skills=data.get("skills", []), + ) + logger.info(f"Discovered: {self._agent_card.name} v{self._agent_card.version}") + return self._agent_card + + # ── 1. Ping Mode (message/send) ────────────────────────────────────── + + async def send_message( + self, text: str, context_id: Optional[str] = None + ) -> A2ATask: + """ + Send a synchronous (blocking) message to the A2A agent. + This is the 'ping' pattern — sends a request and waits for the + full response before returning. + + Best for: Quick lookups, single-brand checks, glossary queries. 
+ """ + context_id = context_id or str(uuid.uuid4()) + request_id = str(uuid.uuid4()) + + payload = { + "jsonrpc": "2.0", + "id": request_id, + "method": "message/send", + "params": { + "message": { + "role": "user", + "parts": [{"kind": "text", "text": text}], + "messageId": str(uuid.uuid4()), + "contextId": context_id, + }, + }, + } + + logger.info(f"[PING] Sending message/send (context={context_id[:8]}...)") + resp = await self._client.post(self.base_url, json=payload) + resp.raise_for_status() + result = resp.json() + + return self._parse_task_response(result, context_id) + + # ── 2. SSE Mode (message/stream) ───────────────────────────────────── + + async def stream_message( + self, text: str, context_id: Optional[str] = None + ) -> AsyncIterator[SSEEvent]: + """ + Send a message and receive Server-Sent Events (SSE) stream. + Yields SSEEvent objects as they arrive from the remote agent. + + Best for: Detailed reports, real-time progress visibility. + """ + context_id = context_id or str(uuid.uuid4()) + request_id = str(uuid.uuid4()) + + payload = { + "jsonrpc": "2.0", + "id": request_id, + "method": "message/stream", + "params": { + "message": { + "role": "user", + "parts": [{"kind": "text", "text": text}], + "messageId": str(uuid.uuid4()), + "contextId": context_id, + }, + }, + } + + logger.info(f"[SSE] Sending message/stream (context={context_id[:8]}...)") + + async with self._client.stream( + "POST", + self.base_url, + json=payload, + headers={"Accept": "text/event-stream"}, + ) as response: + response.raise_for_status() + + event_type = "" + data_lines = [] + + async for line in response.aiter_lines(): + line = line.strip() + + if line.startswith("event:"): + event_type = line[6:].strip() + elif line.startswith("data:"): + data_lines.append(line[5:].strip()) + elif line == "" and data_lines: + # End of event block — emit it + raw_data = "\n".join(data_lines) + data_lines = [] + + try: + parsed = json.loads(raw_data) + except json.JSONDecodeError: + parsed 
= {"raw": raw_data} + + sse_event = SSEEvent( + event_type=event_type or "message", + data=parsed, + text=self._extract_text_from_event(parsed), + ) + logger.debug(f"[SSE] Event: {sse_event.event_type}") + yield sse_event + + event_type = "" + + # ── 3. Push Notification Config ────────────────────────────────────── + + async def register_push( + self, task_id: str, webhook_url: str, token: Optional[str] = None + ) -> dict: + """ + Register a webhook URL to receive push notifications for a task. + The remote agent will POST to this URL when the task completes. + + Best for: Long-running analyses where the user doesn't want to wait. + """ + token = token or f"m365-push-{uuid.uuid4().hex[:12]}" + request_id = str(uuid.uuid4()) + + payload = { + "jsonrpc": "2.0", + "id": request_id, + "method": "tasks/pushNotificationConfig/set", # method name per A2A v0.3.0 (matches cli_test.py) + "params": { + "taskId": task_id, + "pushNotificationConfig": { + "url": webhook_url, + "token": token, + }, + }, + } + + logger.info(f"[PUSH] Registering webhook for task {task_id[:8]}...") + resp = await self._client.post(self.base_url, json=payload) + resp.raise_for_status() + result = resp.json() + + if "error" in result: + logger.error(f"Push registration failed: {result['error']}") + return {"success": False, "error": result["error"]} + + logger.info(f"Push notification registered -> {webhook_url}") + return {"success": True, "token": token, "task_id": task_id} + + async def send_with_push( + self, text: str, webhook_url: str, context_id: Optional[str] = None + ) -> A2ATask: + """ + Send a message and register a push notification webhook. + Returns the initial task immediately (status updates arrive via webhook). + + This combines message/send + tasks/pushNotificationConfig/set into one call.
+ """ + # First send the message + task = await self.send_message(text, context_id) + + # Then register push for the task + push_result = await self.register_push(task.task_id, webhook_url) + if not push_result.get("success"): + logger.warning("Push registration failed but message was sent") + + return task + + # ── Helpers ────────────────────────────────────────────────────────── + + def _parse_task_response(self, result: dict, context_id: str) -> A2ATask: + """Parse a JSON-RPC response into an A2ATask dataclass.""" + if "error" in result: + return A2ATask( + task_id="", + context_id=context_id, + status="failed", + message=f"Error: {result['error'].get('message', 'Unknown error')}", + ) + + task_data = result.get("result", {}) + task_id = task_data.get("id", "") + status = task_data.get("status", {}).get("state", "unknown") + + # Extract text from artifacts + text_parts = [] + for artifact in task_data.get("artifacts", []): + for part in artifact.get("parts", []): + if part.get("kind") == "text" or "text" in part: + text_parts.append(part.get("text", "")) + + # Also check status message + status_msg = task_data.get("status", {}).get("message", {}) + if isinstance(status_msg, dict): + for part in status_msg.get("parts", []): + if "text" in part: + text_parts.append(part["text"]) + + return A2ATask( + task_id=task_id, + context_id=context_id, + status=status, + message="\n".join(text_parts) if text_parts else None, + artifacts=task_data.get("artifacts", []), + ) + + @staticmethod + def _extract_text_from_event(data: dict) -> Optional[str]: + """Extract readable text from an SSE event payload.""" + # Check result -> status -> message -> parts + result = data.get("result", data) + status = result.get("status", {}) + message = status.get("message", {}) + + if isinstance(message, dict): + parts = message.get("parts", []) + texts = [p.get("text", "") for p in parts if "text" in p] + if texts: + return "\n".join(texts) + + # Check artifacts + for artifact in 
result.get("artifacts", []): + for part in artifact.get("parts", []): + if "text" in part: + return part["text"] + + return None diff --git a/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/brand_intelligence_advisor/tools/brand_advisor.py b/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/brand_intelligence_advisor/tools/brand_advisor.py new file mode 100644 index 00000000..e5e1e8df --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/brand_intelligence_advisor/tools/brand_advisor.py @@ -0,0 +1,441 @@ +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. + +""" +Brand Intelligence Advisor — Domain knowledge and local capabilities. + +This module provides the agent's LOCAL intelligence without needing +any LLM calls or external tool invocations: + + - Parse natural language brand queries -> extract brand, category, intent + - Track analysis history across the conversation + - Format raw A2A responses into clean executive summaries + - Provide SEO / brand strategy knowledge on demand + +The BrandAdvisor class is used by: + - The SK orchestrator (via @kernel_function tools that wrap its methods) + - The fallback regex router in agent.py (directly) + - The test_demo.py CLI (directly) +""" + +import re +import logging +from datetime import datetime, timezone +from typing import Optional +from dataclasses import dataclass, field + +logger = logging.getLogger(__name__) + + +# ── Domain Knowledge ────────────────────────────────────────────────────────── + +# Major consumer brands in sportswear, footwear, and electronics +KNOWN_BRANDS = { + "nike", "adidas", "puma", "reebok", "new balance", "under armour", + "asics", "skechers", "converse", "vans", "fila", "brooks", + "hoka", "on", "salomon", "columbia", "north face", "patagonia", + "lululemon", "gymshark", "allbirds", "crocs", "birkenstock", + "samsung", "apple", "sony", "lg", "bose", "jbl", +} + +# Categories that exist in the 
BigQuery public e-commerce dataset. + # When a user says a generic term (e.g., "shoes"), we map to the closest match. +PRODUCT_CATEGORIES = { + # Exact BigQuery categories + "active", "tops & tees", "fashion hoodies & sweatshirts", + "jeans", "swim", "shorts", "sleep & lounge", "plus", + "dresses", "skirts", "pants", "pants & capris", + "suits", "suits & sport coats", "socks", "underwear", + "accessories", "outerwear & coats", "blazers & jackets", + "sweaters", "leggings", + # Common user terms that map to BigQuery categories + "shoes", "sneakers", "sportswear", "running", "training", + "basketball", "football", "tennis", +} + +# Map common user terms to actual BigQuery categories +CATEGORY_MAP = { + "shoes": "Active", + "sneakers": "Active", + "sportswear": "Active", + "running": "Active", + "training": "Active", + "basketball": "Active", + "football": "Active", + "tennis": "Active", + "shirts": "Tops & Tees", + "hoodies": "Fashion Hoodies & Sweatshirts", + "jackets": "Blazers & Jackets", + "pants": "Pants", +} + +# SEO and brand optimization glossary — these definitions are surfaced +# to users when they ask "What is <term>?" or type "glossary" +SEO_GLOSSARY = { + "brand visibility": ( + "How often and prominently a brand appears in search results. " + "Higher visibility = more organic traffic and brand awareness." + ), + "keyword cannibalization": ( + "When multiple pages from the same brand compete for the same keyword, " + "diluting ranking power. Common in large product catalogs." + ), + "search impression share": ( + "The percentage of total impressions a brand receives for a keyword " + "compared to the total available impressions." + ), + "product title optimization": ( + "Crafting product titles with relevant keywords to improve search ranking. " + "Key factors: brand name position, keyword density, title length." + ), + "competitive gap analysis": ( + "Identifying keywords where competitors rank but your brand doesn't. 
" + "Reveals untapped opportunities for visibility improvement." + ), + "serp position": ( + "Search Engine Results Page position. Top 3 positions capture ~60% of clicks. " + "Position 1 alone gets ~28% of all clicks." + ), + "generic keyword": ( + "Non-branded search terms like 'running shoes' rather than 'Nike running shoes'. " + "Winning generic keywords drives new customer acquisition." + ), + "share of voice": ( + "A brand's share of total visibility across a set of target keywords. " + "Calculated as: (brand impressions / total impressions) x 100." + ), + "ctr": ( + "Click-Through Rate. The percentage of people who click on a search result " + "after seeing it. Formula: (clicks / impressions) x 100. Higher CTR indicates " + "better title/snippet optimization." + ), +} + +# Actionable strategy tips for brand optimization +STRATEGY_TIPS = [ + "**Title-first optimization**: Place the most relevant generic keyword " + "at the start of your product title, before the brand name.", + + "**Category coverage**: Ensure your brand has products indexed across " + "all relevant subcategories to maximize keyword footprint.", + + "**Competitor benchmarking**: Analyze the top 3 competitors for each " + "generic keyword to understand what title patterns rank highest.", + + "**Long-tail keywords**: Don't just target 'running shoes' -- target " + "'cushioned running shoes for flat feet' to capture high-intent traffic.", + + "**Seasonal keyword rotation**: Update product titles quarterly to " + "include trending seasonal terms (e.g., 'summer', 'back-to-school').", +] + + +# ── Parsed Query ────────────────────────────────────────────────────────────── + +@dataclass +class BrandQuery: + """Parsed user intent from a natural language message.""" + brand: Optional[str] = None + category: Optional[str] = None + raw_text: str = "" + is_valid: bool = False + error: Optional[str] = None + + +@dataclass +class AnalysisRecord: + """A record of a completed brand analysis.""" + brand: str + 
category: Optional[str] + timestamp: datetime + mode: str # ping, sse, push + result_summary: str + raw_response: Optional[str] = None + + +# ── Brand Advisor ───────────────────────────────────────────────────────────── + +class BrandAdvisor: + """ + The agent's local intelligence layer. + + Responsible for: + - Parsing natural language into structured brand queries + - Tracking analysis history across the session + - Formatting raw A2A responses into executive summaries + - Providing SEO/strategy domain knowledge + """ + + def __init__(self): + self._history: list[AnalysisRecord] = [] + + # ── Query Parsing ───────────────────────────────────────────────────── + + def parse_query(self, text: str) -> BrandQuery: + """ + Extract brand name and optional category from natural language. + + Examples: + "analyze Nike socks" -> brand=Nike, category=Socks + "check Adidas running shoes" -> brand=Adidas, category=Active (mapped via CATEGORY_MAP) + "Nike" -> brand=Nike, category=None + "how is Puma doing" -> brand=Puma, category=None + """ + text_lower = text.lower().strip() + + # Remove command prefixes + for prefix in [ + "analyze", "check", "search", "look up", "find", + "how is", "how are", "what about", "show me", + "brand analysis for", "analysis of", "report on", + ]: + if text_lower.startswith(prefix): + text_lower = text_lower[len(prefix):].strip() + break + + # Remove trailing noise + for suffix in [ + "doing", "performing", "ranking", "on google", + "on google shopping", "please", "thanks", + ]: + if text_lower.endswith(suffix): + text_lower = text_lower[: -len(suffix)].strip() + break + + # Try to match known brands (longest first to avoid partial matches) + brand = None + category = None + + for known_brand in sorted(KNOWN_BRANDS, key=len, reverse=True): + if known_brand in text_lower: + brand = known_brand.title() + remaining = text_lower.replace(known_brand, "").strip() + + # Remove possessive suffix (e.g., "nike's shoes" -> "shoes") + if remaining.startswith("'s "): + 
remaining = remaining[3:].strip() + elif remaining.startswith("'s"): + remaining = remaining[2:].strip() + + # Try to match category from remaining text + if remaining: + for cat in sorted(PRODUCT_CATEGORIES, key=len, reverse=True): + if cat in remaining: + # Map to actual BigQuery category if needed + category = CATEGORY_MAP.get(cat, cat.title()) + break + if not category and remaining: + # Use whatever's left, try mapping it + category = CATEGORY_MAP.get(remaining, remaining.title()) + + break + + if not brand: + # Try capitalized words as potential brand names + words = text.strip().split() + capitalized = [w for w in words if w[0].isupper() and len(w) > 1] + if capitalized: + brand = capitalized[0] + else: + return BrandQuery( + raw_text=text, + is_valid=False, + error="Could not identify a brand name. Try: 'ping Nike socks'", + ) + + return BrandQuery( + brand=brand, + category=category, + raw_text=text, + is_valid=True, + ) + + # ── A2A Request Formulation ─────────────────────────────────────────── + + def formulate_a2a_request(self, query: BrandQuery) -> str: + """ + Build a plain-text request string for the ADK agent. + The ADK agent handles category lookup and analysis internally. 
+ """ + brand = query.brand + if query.category: + return f"Analyze {brand} in {query.category} category" + return f"Analyze {brand}" + + # ── History Tracking ────────────────────────────────────────────────── + + def record_analysis( + self, + query: BrandQuery, + mode: str, + result_summary: str, + raw_response: Optional[str] = None, + ): + """Store a completed analysis in the session history.""" + record = AnalysisRecord( + brand=query.brand or "Unknown", + category=query.category, + timestamp=datetime.now(timezone.utc), + mode=mode, + result_summary=result_summary[:500], + raw_response=raw_response, + ) + self._history.append(record) + logger.info( + f"Recorded analysis #{len(self._history)}: {record.brand} ({mode})" + ) + + def get_history_summary(self) -> str: + """Format analysis history for display.""" + if not self._history: + return "No analyses performed yet in this session." + + lines = ["**Analysis History**\n"] + for i, rec in enumerate(self._history, 1): + cat = f" ({rec.category})" if rec.category else "" + time_str = rec.timestamp.strftime("%H:%M:%S") + lines.append( + f" {i}. **{rec.brand}**{cat} -- via `{rec.mode}` at {time_str}\n" + f" _{rec.result_summary[:120]}..._" + ) + + return "\n".join(lines) + + @property + def analysis_count(self) -> int: + return len(self._history) + + @property + def brands_analyzed(self) -> list[str]: + return list({r.brand for r in self._history}) + + # ── Response Formatting ─────────────────────────────────────────────── + + def format_executive_summary( + self, brand: str, raw_response: str, mode: str + ) -> str: + """ + Wrap a raw A2A response with executive-level context. + Adds strategic framing without hallucinating data. 
+ """ + header = f"**Brand Intelligence Report: {brand}**\n" + mode_label = {"ping": "Synchronous", "sse": "Streamed", "push": "Async"}.get( + mode, mode + ) + meta = ( + f"_Mode: {mode_label} | Generated: " + f"{datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M UTC')}_\n" + ) + divider = "-" * 40 + "\n" + + body = ( + raw_response if raw_response else "_No data returned from analysis agent._" + ) + + footer = ( + f"\n{divider}" + f"**What to do next**: Use these insights to optimize product titles, " + f"adjust bidding strategy on underperforming keywords, and monitor " + f"competitor movements weekly.\n" + f"\nType `history` to see past analyses or `strategy` for optimization tips." + ) + + return f"{header}{meta}{divider}{body}{footer}" + + def format_sse_chunk(self, chunk_text: str, chunk_number: int) -> str: + """Format a single SSE chunk for display.""" + if chunk_number == 1: + return f"**Streaming analysis...**\n\n{chunk_text}" + return chunk_text + + def format_push_acknowledgment(self, brand: str, task_id: str) -> str: + """Format the push notification registration acknowledgment.""" + return ( + f"**Background analysis started for {brand}**\n\n" + f"Task ID: `{task_id[:12]}...`\n" + f"You'll receive a notification when the analysis completes.\n" + f"Type `status` to check for received notifications." + ) + + # ── SEO Knowledge ───────────────────────────────────────────────────── + + def get_seo_definition(self, term: str) -> Optional[str]: + """Look up an SEO term definition (formatted for display).""" + term_lower = term.lower().strip() + for key, definition in SEO_GLOSSARY.items(): + if term_lower in key or key in term_lower: + return f"**{key.title()}**\n\n{definition}" + return None + + def get_strategy_tips(self) -> str: + """Return brand optimization strategy tips.""" + header = "**Brand Search Optimization Strategy Tips**\n\n" + tips = "\n\n".join( + f"{i}. 
{tip}" for i, tip in enumerate(STRATEGY_TIPS, 1) + ) + return f"{header}{tips}" + + def get_glossary(self) -> str: + """Return the full SEO glossary.""" + header = "**SEO & Brand Optimization Glossary**\n\n" + entries = [] + for term, definition in sorted(SEO_GLOSSARY.items()): + entries.append(f"**{term.title()}**: {definition}") + return header + "\n\n".join(entries) + + # ── Data Accessors (for LLM orchestrator) ───────────────────────────── + + def get_history_data(self) -> list[dict]: + """Return raw history data as dicts (for JSON serialization by SK tools).""" + return [ + { + "brand": rec.brand, + "category": rec.category, + "mode": rec.mode, + "timestamp": rec.timestamp.strftime("%Y-%m-%d %H:%M:%S UTC"), + "result_summary": rec.result_summary[:300], + } + for rec in self._history + ] + + def get_glossary_data(self) -> dict: + """Return raw glossary as a dict (for JSON serialization by SK tools).""" + return dict(SEO_GLOSSARY) + + def get_glossary_terms(self) -> list[str]: + """Return list of available glossary term names.""" + return sorted(SEO_GLOSSARY.keys()) + + def get_seo_definition_raw(self, term: str) -> Optional[str]: + """Look up an SEO term and return the plain definition (no formatting).""" + term_lower = term.lower().strip() + for key, definition in SEO_GLOSSARY.items(): + if term_lower in key or key in term_lower: + return definition + return None + + # ── Help Text ───────────────────────────────────────────────────────── + + def get_help_text(self, agent_name: str = "Brand Search Optimization") -> str: + """Return the help/welcome message (used in regex fallback mode).""" + return ( + f"**Brand Intelligence Advisor**\n" + f"_Powered by M365 Agents SDK + A2A Protocol_\n\n" + f"I help you analyze brand visibility on Google Shopping by connecting " + f"to the **{agent_name}** agent via A2A protocol.\n\n" + f"**A2A Communication Modes:**\n" + f" - `ping <brand> [category]` -- Synchronous analysis (message/send)\n" + f" - `stream <brand> [category]` -- 
Live-streamed analysis (SSE)\n" + f" - `push <brand> [category]` -- Background analysis with webhook notification\n" + f" - `status` -- Check received push notifications\n\n" + f"**Local Capabilities:**\n" + f" - `history` -- View past analyses from this session\n" + f" - `strategy` -- Brand optimization strategy tips\n" + f" - `glossary` -- SEO terminology definitions\n" + f" - `define <term>` -- Look up a specific SEO term\n" + f" - `help` -- This message\n\n" + f"**Example:**\n" + f" `ping Nike socks`\n" + f" `stream Adidas running shoes`\n" + f" `push Puma sneakers`\n" + ) diff --git a/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/cli_test.py b/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/cli_test.py new file mode 100644 index 00000000..92897e47 --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/cli_test.py @@ -0,0 +1,535 @@ +#!/usr/bin/env python3 +""" +CLI Test Client for Brand Intelligence Advisor + +Tests all 4 A2A communication patterns against the ADK agent directly, +and the A2A client agent's webhook endpoint for push notifications.
+ +Usage: + python cli_test.py # Interactive menu + python cli_test.py discover # Discover agent card + python cli_test.py ping "Nike socks" # Synchronous message/send + python cli_test.py stream "Adidas" # SSE message/stream + python cli_test.py push "Puma shoes" # Push notification + webhook + python cli_test.py status # Check webhook notifications + python cli_test.py all "Nike socks" # Run all patterns sequentially + +Requires: httpx, httpx-sse (already in requirements.txt) +""" + +import asyncio +import json +import sys +import uuid +import time +from typing import Optional + +import httpx + +# ── Configuration ───────────────────────────────────────────────────────────── + +ADK_AGENT_URL = "http://localhost:8080" +M365_WEBHOOK_URL = "http://localhost:3978/a2a/webhook" + +TIMEOUT = httpx.Timeout(connect=10.0, read=120.0, write=10.0, pool=10.0) + + +# ── Colors for terminal output ──────────────────────────────────────────────── + +class C: + """ANSI color codes for terminal output.""" + HEADER = "\033[95m" + BLUE = "\033[94m" + CYAN = "\033[96m" + GREEN = "\033[92m" + YELLOW = "\033[93m" + RED = "\033[91m" + BOLD = "\033[1m" + DIM = "\033[2m" + END = "\033[0m" + + +def banner(): + print(f"""{C.CYAN}{C.BOLD} +╔══════════════════════════════════════════════════════════════╗ +║ A2A CLI Test Client — Brand Intelligence ║ +║ ║ +║ ADK Agent: {ADK_AGENT_URL:<44s} ║ +║ M365 Webhook: {M365_WEBHOOK_URL:<43s} ║ +╚══════════════════════════════════════════════════════════════╝{C.END} +""") + + +def section(title: str): + print(f"\n{C.BOLD}{C.BLUE}{'─' * 60}") + print(f" {title}") + print(f"{'─' * 60}{C.END}\n") + + +def ok(msg: str): + print(f" {C.GREEN}✓{C.END} {msg}") + + +def warn(msg: str): + print(f" {C.YELLOW}⚠{C.END} {msg}") + + +def err(msg: str): + print(f" {C.RED}✗{C.END} {msg}") + + +def info(msg: str): + print(f" {C.DIM}→{C.END} {msg}") + + +# ── A2A Protocol Helpers ────────────────────────────────────────────────────── + +def make_jsonrpc(method: str, 
params: dict) -> dict: + """Build a JSON-RPC 2.0 request.""" + return { + "jsonrpc": "2.0", + "id": str(uuid.uuid4())[:8], + "method": method, + "params": params, + } + + +def make_message_params(text: str, context_id: Optional[str] = None) -> dict: + """Build message/send or message/stream params.""" + return { + "message": { + "role": "user", + "parts": [{"kind": "text", "text": text}], + "messageId": str(uuid.uuid4()), + }, + "configuration": { + "acceptedOutputModes": ["text/plain"], + }, + **({"contextId": context_id} if context_id else {}), + } + + +# ── Pattern A: Discover ────────────────────────────────────────────────────── + +async def discover(client: httpx.AsyncClient) -> dict: + """GET /.well-known/agent-card.json — discover agent capabilities.""" + section("DISCOVER — Agent Card") + + resp = await client.get(f"{ADK_AGENT_URL}/.well-known/agent-card.json") + resp.raise_for_status() + card = resp.json() + + ok(f"Agent: {C.BOLD}{card['name']}{C.END}") + ok(f"Protocol: {card.get('protocolVersion', '?')}") + ok(f"Description: {card.get('description', '?')[:80]}") + + caps = card.get("capabilities", {}) + ok(f"Streaming: {'✓' if caps.get('streaming') else '✗'}") + ok(f"Push Notifs: {'✓' if caps.get('pushNotifications') else '✗'}") + ok(f"State History: {'✓' if caps.get('stateTransitionHistory') else '✗'}") + + skills = card.get("skills", []) + ok(f"Skills: {len(skills)}") + for s in skills[:5]: + info(f"{s['name']} — {s.get('description', '')[:60]}...") + + return card + + +# ── Pattern B: Ping (message/send) ─────────────────────────────────────────── + +async def ping(client: httpx.AsyncClient, query: str) -> dict: + """POST message/send — synchronous blocking call.""" + section(f"PING — message/send (synchronous)") + info(f"Query: \"{query}\"") + info("Sending JSON-RPC request... 
(blocking until agent responds)") + + payload = make_jsonrpc("message/send", make_message_params(query)) + start = time.time() + + resp = await client.post(f"{ADK_AGENT_URL}/", json=payload) + elapsed = time.time() - start + resp.raise_for_status() + + result = resp.json() + + if "error" in result: + err(f"JSON-RPC error: {json.dumps(result['error'], indent=2)}") + return result + + task = result.get("result", {}) + task_id = task.get("id", "unknown") + status = task.get("status", {}).get("state", "unknown") + + ok(f"Task ID: {task_id[:20]}...") + ok(f"Status: {status}") + ok(f"Elapsed: {elapsed:.1f}s") + + # Extract text from artifacts + texts = [] + for artifact in task.get("artifacts", []): + for part in artifact.get("parts", []): + if "text" in part: + texts.append(part["text"]) + + # Also check status message + status_msg = task.get("status", {}).get("message", {}) + if isinstance(status_msg, dict): + for part in status_msg.get("parts", []): + if "text" in part: + texts.append(part["text"]) + + if texts: + full_text = "\n".join(texts) + print(f"\n {C.CYAN}{'─' * 50}") + print(f" Agent Response ({len(full_text)} chars):") + print(f" {'─' * 50}{C.END}") + # Print with indentation + for line in full_text[:2000].split("\n"): + print(f" {C.DIM}│{C.END} {line}") + if len(full_text) > 2000: + warn(f"... 
truncated ({len(full_text) - 2000} more chars)") + else: + warn("No text content in response") + + return result + + +# ── Pattern C: Stream (message/stream) ─────────────────────────────────────── + +async def stream(client: httpx.AsyncClient, query: str) -> list: + """POST message/stream — Server-Sent Events streaming.""" + section(f"STREAM — message/stream (SSE)") + info(f"Query: \"{query}\"") + info("Opening SSE connection...") + + payload = make_jsonrpc("message/stream", make_message_params(query)) + events = [] + chunk_count = 0 + start = time.time() + + try: + # Use httpx-sse for proper SSE parsing + from httpx_sse import aconnect_sse + + async with aconnect_sse( + client, "POST", f"{ADK_AGENT_URL}/", json=payload + ) as event_source: + async for sse in event_source.aiter_sse(): + chunk_count += 1 + data = json.loads(sse.data) if sse.data else {} + events.append({"event": sse.event, "data": data}) + + # Extract text if present + result = data.get("result", {}) + status = result.get("status", {}).get("state", "") + + texts = [] + for artifact in result.get("artifacts", []): + for part in artifact.get("parts", []): + if "text" in part: + texts.append(part["text"]) + + status_msg = result.get("status", {}).get("message", {}) + if isinstance(status_msg, dict): + for part in status_msg.get("parts", []): + if "text" in part: + texts.append(part["text"]) + + if texts: + text = " ".join(texts) + print( + f" {C.GREEN}▸{C.END} Chunk #{chunk_count} " + f"[{status or 'data'}]: {text[:120]}{'...' 
if len(text) > 120 else ''}" + ) + elif status: + print(f" {C.YELLOW}◉{C.END} Chunk #{chunk_count} — status: {status}") + + except ImportError: + warn("httpx-sse not installed — falling back to raw streaming") + + async with client.stream("POST", f"{ADK_AGENT_URL}/", json=payload) as resp: + buffer = "" + async for chunk in resp.aiter_text(): + buffer += chunk + while "\n\n" in buffer: + event_text, buffer = buffer.split("\n\n", 1) + chunk_count += 1 + for line in event_text.split("\n"): + if line.startswith("data:"): + try: + data = json.loads(line[5:].strip()) + events.append({"event": "message", "data": data}) + print( + f" {C.GREEN}▸{C.END} Chunk #{chunk_count}: " + f"{json.dumps(data)[:120]}" + ) + except json.JSONDecodeError: + pass + + except Exception as e: + err(f"SSE streaming error: {e}") + + elapsed = time.time() - start + ok(f"Stream complete: {chunk_count} events in {elapsed:.1f}s") + + return events + + +# ── Pattern D: Push (message/send + webhook) ───────────────────────────────── + +async def push(client: httpx.AsyncClient, query: str) -> dict: + """ + Push notification pattern: + 1. Register webhook via pushNotificationConfig/set + 2. Send message/send with the task + 3. Poll M365 webhook for received notifications + """ + section(f"PUSH — message/send + webhook notification") + info(f"Query: \"{query}\"") + info(f"Webhook: {M365_WEBHOOK_URL}") + + # Step 1: Send message/send with inline push notification config + # The A2A protocol allows embedding pushNotificationConfig in the message + # configuration so the agent registers the webhook automatically. 
+ token = f"cli-test-{uuid.uuid4().hex[:8]}" + params = make_message_params(query) + params["configuration"]["pushNotificationConfig"] = { + "url": M365_WEBHOOK_URL, + "token": token, + } + + info("Step 1: Sending message/send with inline pushNotificationConfig...") + msg_payload = make_jsonrpc("message/send", params) + start = time.time() + + resp = await client.post(f"{ADK_AGENT_URL}/", json=msg_payload) + resp.raise_for_status() + result = resp.json() + + if "error" in result: + err(f"message/send error: {json.dumps(result['error'], indent=2)}") + return result + + task = result.get("result", {}) + task_id = task.get("id", "unknown") + task_status = task.get("status", {}).get("state", "unknown") + ok(f"Task created: {task_id[:20]}... (status: {task_status})") + ok(f"Webhook token: {token}") + + # Step 2 (optional): Explicitly register via tasks/pushNotificationConfig/set + # This is an alternative approach — the inline config above should suffice, + # but we also try the explicit RPC for demonstration. 
+ info("Step 2: Also registering via tasks/pushNotificationConfig/set...") + push_config_payload = make_jsonrpc("tasks/pushNotificationConfig/set", { + "taskId": task_id, + "pushNotificationConfig": { + "url": M365_WEBHOOK_URL, + "token": token, + }, + }) + + push_resp = await client.post(f"{ADK_AGENT_URL}/", json=push_config_payload) + push_resp.raise_for_status() + push_result = push_resp.json() + + if "error" in push_result: + warn(f"Explicit pushNotificationConfig/set: {push_result.get('error', {}).get('message', 'error')}") + else: + ok("Explicit webhook registration also succeeded") + + elapsed = time.time() - start + ok(f"Push setup complete in {elapsed:.1f}s") + + # Step 3: Check if notification arrived at M365 webhook + info("Step 3: Checking M365 webhook for notifications (waiting 3s)...") + await asyncio.sleep(3) + + try: + webhook_resp = await client.get(M365_WEBHOOK_URL) + webhook_data = webhook_resp.json() + total = webhook_data.get("total", 0) + + if total > 0: + ok(f"M365 webhook has {total} notification(s):") + for n in webhook_data.get("notifications", []): + info( + f"Task: {n.get('task_id', '?')[:12]}... 
" + f"| Status: {n.get('status', '?')} " + f"| At: {n.get('received_at', '?')}" + ) + preview = n.get("text_preview", "") + if preview: + info(f" → {preview[:120]}") + else: + warn("No notifications at webhook yet (may still be processing)") + except Exception as e: + warn(f"Could not reach M365 webhook: {e}") + + return result + + +# ── Status Check ────────────────────────────────────────────────────────────── + +async def status(client: httpx.AsyncClient): + """GET M365 webhook — check received push notifications.""" + section("STATUS — Webhook Notifications") + + try: + resp = await client.get(M365_WEBHOOK_URL) + resp.raise_for_status() + data = resp.json() + + total = data.get("total", 0) + if total == 0: + warn("No push notifications received yet") + info("Use 'push ' to trigger one first") + return + + ok(f"{total} notification(s) received:") + for i, n in enumerate(data.get("notifications", []), 1): + print( + f"\n {C.CYAN}#{i}{C.END} " + f"Task: {n.get('task_id', '?')[:16]}... " + f"| Status: {C.BOLD}{n.get('status', '?')}{C.END} " + f"| At: {n.get('received_at', '?')}" + ) + preview = n.get("text_preview", "") + if preview: + for line in preview[:300].split("\n"): + print(f" {C.DIM}│{C.END} {line}") + + except Exception as e: + err(f"Could not reach A2A client webhook at {M365_WEBHOOK_URL}: {e}") + info("Is the A2A client agent running on port 3978?") + + +# ── Run All Patterns ───────────────────────────────────────────────────────── + +async def run_all(client: httpx.AsyncClient, query: str): + """Run discover + all 3 A2A patterns sequentially.""" + section("RUNNING ALL 4 PATTERNS") + info(f"Query: \"{query}\"") + print() + + await discover(client) + await ping(client, query) + await stream(client, query) + await push(client, query) + await status(client) + + section("ALL PATTERNS COMPLETE") + ok("End-to-end A2A test finished!") + + +# ── Interactive Menu ────────────────────────────────────────────────────────── + +async def interactive(): + 
"""Interactive loop for testing.""" + banner() + + async with httpx.AsyncClient(timeout=TIMEOUT) as client: + # Quick connectivity check + try: + resp = await client.get(f"{ADK_AGENT_URL}/.well-known/agent-card.json") + resp.raise_for_status() + card = resp.json() + ok(f"ADK agent connected: {card['name']}") + except Exception as e: + err(f"Cannot reach ADK agent at {ADK_AGENT_URL}: {e}") + err("Start the ADK agent first: cd adk-agent && poetry run python run_a2a.py") + return + + try: + resp = await client.get(M365_WEBHOOK_URL) + ok(f"M365 webhook reachable") + except Exception: + warn(f"M365 webhook not reachable at {M365_WEBHOOK_URL} (push mode won't store notifications)") + + print(f"\n{C.BOLD}Commands:{C.END}") + print(f" {C.CYAN}discover{C.END} — Show agent card") + print(f" {C.CYAN}ping [cat]{C.END} — Synchronous message/send") + print(f" {C.CYAN}stream [cat]{C.END} — SSE message/stream") + print(f" {C.CYAN}push [cat]{C.END} — Push notification + webhook") + print(f" {C.CYAN}status{C.END} — Check webhook notifications") + print(f" {C.CYAN}all [cat]{C.END} — Run all patterns") + print(f" {C.CYAN}quit{C.END} — Exit") + print() + + while True: + try: + raw = input(f"{C.BOLD}a2a>{C.END} ").strip() + except (EOFError, KeyboardInterrupt): + print("\nBye!") + break + + if not raw: + continue + + parts = raw.split(maxsplit=1) + cmd = parts[0].lower() + arg = parts[1] if len(parts) > 1 else "" + + if cmd in ("quit", "exit", "q"): + print("Bye!") + break + elif cmd == "discover": + await discover(client) + elif cmd == "ping": + if not arg: + warn("Usage: ping [category]") + else: + await ping(client, arg) + elif cmd == "stream": + if not arg: + warn("Usage: stream [category]") + else: + await stream(client, arg) + elif cmd == "push": + if not arg: + warn("Usage: push [category]") + else: + await push(client, arg) + elif cmd == "status": + await status(client) + elif cmd == "all": + if not arg: + warn("Usage: all [category]") + else: + await run_all(client, arg) + 
else: + warn(f"Unknown command: {cmd}") + info("See the commands listed above") + + +# ── CLI Entry Point ─────────────────────────────────────────────────────────── + +async def main(): + if len(sys.argv) < 2: + await interactive() + return + + cmd = sys.argv[1].lower() + arg = " ".join(sys.argv[2:]) if len(sys.argv) > 2 else "" + + async with httpx.AsyncClient(timeout=TIMEOUT) as client: + banner() + if cmd == "discover": + await discover(client) + elif cmd == "ping": + await ping(client, arg or "Nike socks") + elif cmd == "stream": + await stream(client, arg or "Nike socks") + elif cmd == "push": + await push(client, arg or "Nike socks") + elif cmd == "status": + await status(client) + elif cmd == "all": + await run_all(client, arg or "Nike socks") + else: + err(f"Unknown command: {cmd}") + print("Commands: discover, ping, stream, push, status, all") + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/env.TEMPLATE b/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/env.TEMPLATE new file mode 100644 index 00000000..f5d38b7f --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/env.TEMPLATE @@ -0,0 +1,22 @@ +# A2A Client Agent SDK Auth (leave empty for anonymous/local mode) +CONNECTIONS__SERVICE_CONNECTION__SETTINGS__CLIENTID= +CONNECTIONS__SERVICE_CONNECTION__SETTINGS__CLIENTSECRET= +CONNECTIONS__SERVICE_CONNECTION__SETTINGS__TENANTID= + +# A2A Remote Agent (ADK Brand Search Optimization Agent) +A2A_AGENT_URL=http://localhost:8080 + +# Webhook receiver for push notifications (this agent's own URL) +AGENT_HOST=localhost +AGENT_PORT=3978 + +# Azure AI Foundry (LLM Orchestration) +# Your Azure AI Services endpoint (NOT the project endpoint) +# e.g., https://your-resource.services.ai.azure.com +AZURE_AI_FOUNDRY_ENDPOINT= + +# API key for Azure AI Services +AZURE_AI_FOUNDRY_API_KEY= + +# Model deployment name in your Azure AI project
+AZURE_AI_FOUNDRY_MODEL=gpt-4o-mini diff --git a/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/requirements.txt b/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/requirements.txt new file mode 100644 index 00000000..60bb0ffc --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/requirements.txt @@ -0,0 +1,16 @@ +# M365 Agents SDK +python-dotenv +aiohttp +microsoft-agents-hosting-aiohttp +microsoft-agents-hosting-core +microsoft-agents-authentication-msal +microsoft-agents-activity + +# A2A client +httpx +httpx-sse + +# Azure OpenAI (LLM orchestration via Semantic Kernel) +semantic-kernel>=1.40.0 +openai>=1.30.0 +pydantic>=2.0,<2.12 diff --git a/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/run_server.py b/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/run_server.py new file mode 100644 index 00000000..961b016b --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/run_server.py @@ -0,0 +1,87 @@ +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. + +""" +Entry point for the Brand Intelligence Advisor agent. + +Usage: + python run_server.py + python -m brand_intelligence_advisor (if __main__.py added) + +Starts the aiohttp server with M365 SDK messaging and A2A webhook endpoints. +""" + +import logging + +# ── Logging Setup ───────────────────────────────────────────────────────────── +# Configure loggers for each component so we can control verbosity per module. 
+ +logging.basicConfig(level=logging.INFO, format="%(name)s %(levelname)s: %(message)s") + +# M365 Agents SDK logging +ms_agents_logger = logging.getLogger("microsoft_agents") +ms_agents_logger.setLevel(logging.INFO) + +# A2A client logging (DEBUG shows full JSON-RPC payloads) +a2a_logger = logging.getLogger("brand_intelligence_advisor.tools.a2a_client") +a2a_logger.setLevel(logging.DEBUG) + +# Brand advisor logging +advisor_logger = logging.getLogger("brand_intelligence_advisor.tools.brand_advisor") +advisor_logger.setLevel(logging.INFO) + +# Server logging +server_logger = logging.getLogger("brand_intelligence_advisor.server") +server_logger.setLevel(logging.INFO) + +# Agent logging +agent_logger = logging.getLogger("brand_intelligence_advisor.agent") +agent_logger.setLevel(logging.INFO) + +# Orchestrator logging +orchestrator_logger = logging.getLogger("brand_intelligence_advisor.orchestrator") +orchestrator_logger.setLevel(logging.INFO) + +# Azure SDKs logging (reduce noise from Azure HTTP pipeline) +azure_logger = logging.getLogger("azure") +azure_logger.setLevel(logging.WARNING) + +# ── Import & Start ──────────────────────────────────────────────────────────── + +from brand_intelligence_advisor.agent import ( # noqa: E402 + AGENT_APP, + CONNECTION_MANAGER, + push_notifications, + LLM_AVAILABLE, +) +from brand_intelligence_advisor.server import start_server # noqa: E402 + +llm_status = ( + "ENABLED (Semantic Kernel + Azure OpenAI)" + if LLM_AVAILABLE + else "DISABLED (regex fallback)" +) + +print( + f""" ++------------------------------------------------------------------+ +| Brand Intelligence Advisor | +| M365 Agents SDK + Semantic Kernel + A2A Protocol | +| | +| LLM Orchestration: {llm_status:<44}| +| | +| A2A Patterns: | +| a. Ping (message/send) -- Synchronous blocking | +| b. SSE (message/stream) -- Server-Sent Events | +| c. Push (webhook notify) -- Async with callback | +| d. 
Status (webhook receive) -- View push notifications | +| | ++------------------------------------------------------------------+ +""" +) + +start_server( + agent_application=AGENT_APP, + auth_configuration=CONNECTION_MANAGER.get_default_connection_configuration(), + push_notifications=push_notifications, +) diff --git a/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/test_demo.py b/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/test_demo.py new file mode 100644 index 00000000..12c3e306 --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/a2a-client-agent/test_demo.py @@ -0,0 +1,540 @@ +""" +A2A Pattern Test Runner — Interactive CLI + +Runs A2A communication patterns and makes each pattern FEEL distinct: + - Ping: Wait for full orchestrator-synthesized response (like an API call) + - Stream: See raw ADK analysis text appear live as SSE chunks arrive + - Push: Returns immediately, runs in background, type "check" to see results + +Usage: + python test_demo.py Interactive mode — type your own queries + python test_demo.py auto Run all hardcoded tests automatically + python test_demo.py ping Run only the ping test + python test_demo.py stream Run only the stream test + python test_demo.py compare Run the comparison test + python test_demo.py glossary Run the glossary test + python test_demo.py discover Run agent discovery test +""" + +import asyncio +import json +import logging +import sys +import time + +# Suppress all internal logs so only test output is visible +logging.disable(logging.CRITICAL) + +import httpx +from dotenv import load_dotenv + +load_dotenv() + +from brand_intelligence_advisor.orchestrator import AgentOrchestrator +from brand_intelligence_advisor.tools.a2a_client import A2AClient +from brand_intelligence_advisor.tools.brand_advisor import BrandAdvisor + + +# ── Config ──────────────────────────────────────────────────────────────────── + +ADK_AGENT_URL = "http://localhost:8080" +WEBHOOK_URL = 
"http://localhost:3978/a2a/webhook" + + +# ── Display Helpers ─────────────────────────────────────────────────────────── + +def header(title: str): + print() + print("=" * 60) + print(f" {title}") + print("=" * 60) + + +def section(title: str): + print() + print(f"--- {title} ---") + print() + + +def result(label: str, passed: bool, detail: str = ""): + status = "PASS" if passed else "FAIL" + mark = "[+]" if passed else "[X]" + line = f" {mark} {label}: {status}" + if detail: + line += f" ({detail})" + print(line) + + +def show_response(text: str, max_lines: int = 15): + """Show a truncated response for demo readability.""" + lines = text.strip().split("\n") + for line in lines[:max_lines]: + print(f" {line}") + if len(lines) > max_lines: + print(f" ... ({len(lines) - max_lines} more lines)") + + +# ── Test Cases ──────────────────────────────────────────────────────────────── + +async def test_discover(): + """Test A2A Agent Discovery (/.well-known/agent-card.json).""" + section("Test: Agent Discovery (A2A Protocol)") + + print(" Discovering remote agent at:", ADK_AGENT_URL) + start = time.time() + + async with httpx.AsyncClient() as client: + r = await client.get(f"{ADK_AGENT_URL}/.well-known/agent-card.json") + + elapsed = time.time() - start + passed = r.status_code == 200 + + if passed: + card = r.json() + print(f" Agent Name : {card.get('name', 'N/A')}") + print(f" Description : {card.get('description', 'N/A')[:80]}") + print(f" URL : {card.get('url', 'N/A')}") + + caps = card.get("capabilities", {}) + print(f" Streaming : {caps.get('streaming', False)}") + print(f" Push Notify : {caps.get('pushNotifications', False)}") + + result("Agent Discovery", passed, f"{elapsed:.1f}s") + return passed + + +async def test_ping(orch: AgentOrchestrator): + """Test A2A Ping (message/send) via LLM orchestration.""" + section("Test: Ping - Quick Analysis (message/send)") + + query = "How is Nike performing in search optimization?" 
+ print(f' Query: "{query}"') + print() + + start = time.time() + response = await orch.process_message(query, "test-ping") + elapsed = time.time() - start + + passed = len(response) > 20 + print(" Response:") + show_response(response) + + result("Ping (message/send)", passed, f"{elapsed:.1f}s") + return passed + + +async def test_stream(orch: AgentOrchestrator): + """Test A2A Stream (message/stream SSE) via LLM orchestration.""" + section("Test: Stream - Detailed Report (message/stream SSE)") + + query = "Give me a detailed report on Adidas shoes performance" + print(f' Query: "{query}"') + print() + + start = time.time() + response = await orch.process_message(query, "test-stream") + elapsed = time.time() - start + + passed = len(response) > 50 + print(" Response:") + show_response(response) + + result("Stream (message/stream)", passed, f"{elapsed:.1f}s") + return passed + + +async def test_compare(orch: AgentOrchestrator): + """Test multi-tool comparison (LLM calls analyze_brand twice).""" + section("Test: Comparison - Multi-Tool (Nike vs Adidas)") + + query = "Compare Nike vs Adidas in sportswear" + print(f' Query: "{query}"') + print() + + start = time.time() + response = await orch.process_message(query, "test-compare") + elapsed = time.time() - start + + passed = len(response) > 50 + print(" Response:") + show_response(response) + + result("Comparison (multi-tool)", passed, f"{elapsed:.1f}s") + return passed + + +async def test_glossary(orch: AgentOrchestrator): + """Test local tool (SEO glossary, no A2A call).""" + section("Test: SEO Glossary - Local Tool (no A2A call)") + + query = "What is brand visibility?" 
+ print(f' Query: "{query}"') + print() + + start = time.time() + response = await orch.process_message(query, "test-glossary") + elapsed = time.time() - start + + passed = len(response) > 20 + print(" Response:") + show_response(response) + + result("Glossary (local tool)", passed, f"{elapsed:.1f}s") + return passed + + +# ── Runner ──────────────────────────────────────────────────────────────────── + +TESTS = { + "discover": test_discover, + "ping": test_ping, + "stream": test_stream, + "compare": test_compare, + "glossary": test_glossary, +} + + +# ── Transmission Mode Helpers ───────────────────────────────────────────────── + +MODE_LABELS = {"1": "ping", "2": "stream", "3": "push", "4": "auto"} + + +def prompt_mode() -> str: + """Show mode menu and return mode label.""" + print() + print(" Choose A2A transmission pattern:") + print(" [1] Ping - synchronous request/response (wait for full result)") + print(" [2] Stream - SSE live typing (see text arrive chunk by chunk)") + print(" [3] Push - fire & forget (returns immediately, check later)") + print(" [4] Auto - let the LLM decide (default)") + try: + choice = input(" Mode [1/2/3/4, default=4]: ").strip() + except (EOFError, KeyboardInterrupt): + return "auto" + if choice not in MODE_LABELS: + choice = "4" + return MODE_LABELS[choice] + + +# ── Interactive Mode ────────────────────────────────────────────────────────── + +async def interactive_mode(): + """Interactive mode — each A2A pattern feels genuinely different.""" + header("A2A Interactive Test Runner") + print(" Orchestrator : Semantic Kernel + Azure OpenAI") + print(" Remote Agent : Google ADK (A2A Protocol v0.3)") + print(" Framework : Microsoft 365 Agents SDK") + print() + print(" Each pattern delivers a DIFFERENT experience:") + print(" Ping -> Wait, then get full orchestrator strategic synthesis") + print(" Stream -> See raw ADK analysis appear live (SSE chunks)") + print(" Push -> Returns instantly, type 'check' for results later") + print() + 
print(" Examples:") + print(' > How is Nike doing in Active category?') + print(' > Analyze Adidas in sportswear') + print(' > What is brand visibility?') + print() + print(" Commands: 'check' (push results), 'discover', 'quit'") + print("=" * 60) + + # Check ADK agent + try: + async with httpx.AsyncClient(timeout=5) as c: + r = await c.get(f"{ADK_AGENT_URL}/.well-known/agent-card.json") + assert r.status_code == 200 + print(" [OK] ADK agent is running on port 8080") + except Exception: + print(" [!!] ADK agent is NOT running on port 8080") + print(" Start it first: poetry run python run_a2a.py") + sys.exit(1) + + # Init components + a2a_client = A2AClient(ADK_AGENT_URL) + advisor = BrandAdvisor() + push_jobs: dict[str, dict] = {} # task_id → {query, status, result, start_time} + + try: + orch = AgentOrchestrator( + a2a_client=a2a_client, + advisor=advisor, + push_notifications=[], + webhook_url=WEBHOOK_URL, + ) + print(" [OK] LLM orchestrator initialized") + except Exception as e: + print(f" [!!] Could not initialize orchestrator: {e}") + sys.exit(1) + + print() + + # ── Background push completion checker ── + async def _run_push_job(query_text: str, a2a_request: str, job_id: str): + """Run the A2A call in background and store result when done.""" + try: + task = await a2a_client.send_message(a2a_request) + push_jobs[job_id]["status"] = "completed" + push_jobs[job_id]["result"] = task.message or f"Task {task.status}" + push_jobs[job_id]["elapsed"] = time.time() - push_jobs[job_id]["start_time"] + # Notify user if they're at the prompt + print(f"\n ** [PUSH] Job '{query_text[:40]}' completed! Type 'check' to see results. 
**") + print("You> ", end="", flush=True) + except Exception as e: + push_jobs[job_id]["status"] = "failed" + push_jobs[job_id]["result"] = f"Error: {e}" + + # Use run_in_executor so input() doesn't block the event loop + # (needed for background push tasks to actually run) + loop = asyncio.get_running_loop() + + async def async_input(prompt: str) -> str: + return await loop.run_in_executor(None, input, prompt) + + while True: + try: + query = await async_input("You> ") + except (EOFError, KeyboardInterrupt): + print("\n\nGoodbye!") + break + + query = query.strip() + if not query: + continue + if query.lower() in ("quit", "exit", "q"): + print("\nGoodbye!") + break + + # ── Special commands ── + if query.lower() == "discover": + await test_discover() + continue + + if query.lower() == "check": + # Show push job results + if not push_jobs: + print(" No push jobs submitted yet.\n") + continue + print() + for job_id, job in push_jobs.items(): + status_icon = {"completed": "+", "working": "~", "failed": "X"}.get(job["status"], "?") + print(f" [{status_icon}] {job['query'][:50]}") + print(f" Status: {job['status']}") + if job["status"] == "completed": + elapsed = job.get("elapsed", 0) + print(f" Completed in: {elapsed:.1f}s") + print(f" Result ({len(job['result'])} chars):") + for line in job["result"].strip().split("\n")[:20]: + print(f" {line}") + remaining = len(job["result"].strip().split("\n")) - 20 + if remaining > 0: + print(f" ... 
({remaining} more lines)") + elif job["status"] == "working": + elapsed = time.time() - job["start_time"] + print(f" Running for: {elapsed:.0f}s") + print() + continue + + # ── Mode selection ── + # (quick prompt — ok to block briefly) + mode = prompt_mode() + + # ── Parse query for stream/push (need brand + category) ── + parsed = advisor.parse_query(query) + a2a_request = advisor.formulate_a2a_request(parsed) if parsed.is_valid else None + + # ================================================================ + # PING — Full SK orchestration (understand -> A2A -> synthesize) + # ================================================================ + if mode in ("ping", "auto"): + hint = "You MUST use mode='ping' (synchronous message/send) for this request." if mode == "ping" else "" + message = f"{query}\n\n[INSTRUCTION: {hint}]" if hint else query + + print() + print(f" [PING] Sending to SK orchestrator...") + print(f" [PING] Waiting for complete response...") + print() + start = time.time() + try: + response = await orch.process_message(message, "interactive") + elapsed = time.time() - start + print(f"Advisor ({elapsed:.1f}s) [pattern: ping]:") + print() + for line in response.strip().split("\n"): + print(f" {line}") + print() + except Exception as e: + print(f" Error: {e}\n") + + # ================================================================ + # STREAM — Raw SSE chunks printed live as they arrive + # ================================================================ + elif mode == "stream": + if not a2a_request: + # If we can't parse brand/category, fall back to orchestrator + print(f" [STREAM] Could not parse brand from query, using orchestrator...") + response = await orch.process_message(query, "interactive") + print(f"\n {response}\n") + continue + + print() + print(f" [STREAM] Opening SSE connection to ADK agent...") + print(f" [STREAM] Request: {a2a_request}") + print(f" [STREAM] Chunks will appear as they arrive from the server:") + print() + print("-" * 60) 
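+ # Illustration (assumed shape): each SSE frame wraps one JSON-RPC payload,
+ # e.g. data: {"jsonrpc": "2.0", "result": {"status": {"state": "working"}, ...}}
+ # a2a_client.stream_message() yields parsed events; .text carries artifact or
+ # status text and .data holds the raw payload, as consumed in the loop below.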
+ + start = time.time() + chunk_count = 0 + try: + async for event in a2a_client.stream_message(a2a_request): + chunk_count += 1 + if event.text: + # Print each chunk immediately as it arrives — live typing! + sys.stdout.write(event.text) + sys.stdout.flush() + elif event.event_type: + # Show status events + status = event.data.get("result", {}).get("status", {}).get("state", "") + if status: + sys.stdout.write(f"\n [SSE event: {status}]") + sys.stdout.flush() + + elapsed = time.time() - start + print() + print("-" * 60) + print(f" [STREAM] Done — {chunk_count} SSE events in {elapsed:.1f}s") + print() + except Exception as e: + print(f"\n Stream error: {e}\n") + + # ================================================================ + # PUSH — Fire & forget, runs in background, check later + # ================================================================ + elif mode == "push": + if not a2a_request: + print(f" [PUSH] Could not parse brand from query, using orchestrator...") + response = await orch.process_message(query, "interactive") + print(f"\n {response}\n") + continue + + job_id = f"push-{len(push_jobs) + 1}" + push_jobs[job_id] = { + "query": query, + "status": "working", + "result": None, + "start_time": time.time(), + } + + # Fire off the A2A call in the background — don't wait! + asyncio.create_task(_run_push_job(query, a2a_request, job_id)) + + print() + print(f" [PUSH] Job submitted! ID: {job_id}") + print(f" [PUSH] Request: {a2a_request}") + print(f" [PUSH] Running in background — you can keep typing!") + print(f" [PUSH] Type 'check' when you want to see results.") + print() + print(f" Try asking something else while you wait:") + print(f" > What is brand visibility?") + print(f" > How is Puma doing in Active category? 
(use ping)") + print() + + +async def run_all(selected: list[str]): + header("A2A Pattern Test Runner") + print(" Orchestrator : Semantic Kernel + Azure OpenAI") + print(" Remote Agent : Google ADK (A2A Protocol v0.3)") + print(" Framework : Microsoft 365 Agents SDK") + + # Check ADK agent is running + try: + async with httpx.AsyncClient(timeout=5) as c: + r = await c.get(f"{ADK_AGENT_URL}/.well-known/agent-card.json") + assert r.status_code == 200 + except Exception: + print("\n ERROR: ADK agent is not running on port 8080.") + print(" Start it first: poetry run python run_a2a.py") + sys.exit(1) + + # Initialize orchestrator (only needed for LLM tests) + orch = None + needs_orch = any(t != "discover" for t in selected) + if needs_orch: + try: + orch = AgentOrchestrator( + a2a_client=A2AClient(ADK_AGENT_URL), + advisor=BrandAdvisor(), + push_notifications=[], + webhook_url=WEBHOOK_URL, + ) + except Exception as e: + print(f"\n ERROR: Could not initialize orchestrator: {e}") + sys.exit(1) + + # Run tests + results = {} + total_start = time.time() + + for name in selected: + test_fn = TESTS[name] + try: + if name == "discover": + passed = await test_fn() + else: + passed = await test_fn(orch) + results[name] = passed + except Exception as e: + print(f"\n ERROR in {name}: {e}") + results[name] = False + + total_elapsed = time.time() - total_start + + # Summary + header("Test Summary") + passed_count = sum(1 for v in results.values() if v) + total_count = len(results) + + for name, passed in results.items(): + result(name.capitalize(), passed) + + print() + print(f" {passed_count}/{total_count} passed in {total_elapsed:.1f}s") + print() + + return all(results.values()) + + +def main(): + args = sys.argv[1:] + + if args and args[0] in ("--help", "-h"): + print(__doc__) + sys.exit(0) + + # Default: interactive mode (no args) + if not args: + asyncio.run(interactive_mode()) + return + + # "auto" runs all hardcoded tests + if args == ["auto"]: + selected = 
list(TESTS.keys()) + success = asyncio.run(run_all(selected)) + sys.exit(0 if success else 1) + + # Specific test names + selected = [] + for a in args: + if a.lower() in TESTS: + selected.append(a.lower()) + else: + print(f"Unknown test: {a}") + print(f"Available: {', '.join(TESTS.keys())}, auto") + sys.exit(1) + + success = asyncio.run(run_all(selected)) + sys.exit(0 if success else 1) + + +if __name__ == "__main__": + main() diff --git a/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/Dockerfile b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/Dockerfile new file mode 100644 index 00000000..5d45019b --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/Dockerfile @@ -0,0 +1,64 @@ +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +# Use Python 3.11 slim image +FROM python:3.11-slim + +# Set working directory +WORKDIR /app + +# Install system dependencies including Chrome for web scraping +RUN apt-get update && apt-get install -y \ + build-essential \ + curl \ + wget \ + gnupg \ + && wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \ + && echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list \ + && apt-get update \ + && apt-get install -y google-chrome-stable \ + && rm -rf /var/lib/apt/lists/* + +# Copy dependency files +COPY pyproject.toml ./ + +# Install Poetry +RUN pip install --no-cache-dir poetry + +# Configure Poetry to not create virtual env (we're in a container) +RUN poetry config virtualenvs.create false + +# Install dependencies +RUN poetry install --no-interaction --no-ansi --no-root + +# Copy application code +COPY . . + +# Install the application +RUN poetry install --no-interaction --no-ansi + +# Expose port 8080 (Cloud Run default) +EXPOSE 8080 + +# Set environment variables +ENV PORT=8080 +ENV DISABLE_WEB_DRIVER=0 + +# Health check +HEALTHCHECK --interval=30s --timeout=10s --start-period=40s --retries=3 \ + CMD curl -f http://localhost:8080/health || exit 1 + +# Run A2A server using run_a2a.py +CMD ["python", "run_a2a.py"] + diff --git a/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/README.md b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/README.md new file mode 100644 index 00000000..b8d8a731 --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/README.md @@ -0,0 +1,355 @@ +# ADK Agent - Brand Search Optimization + +**A2A Producer** built with Google ADK (Agent Development Kit) + +--- + +## 🚀 Quick Start + +### Prerequisites +- Python 3.11+ +- Poetry (dependency management) +- Google Cloud Project ID +- Gemini API key from [AI Studio](https://aistudio.google.com/apikey) + +### Installation + +```bash +# 1. 
Clone and navigate +cd adk-agent + +# 2. Install dependencies +poetry install + +# 3. Configure environment +cp ../env.example .env +# Edit .env with your credentials +``` + +### Environment Variables + +Create `.env` file: +```bash +GOOGLE_CLOUD_PROJECT=your-project-id # Required - For BigQuery +GOOGLE_API_KEY=your-api-key # Required - From AI Studio +MODEL=gemini-2.0-flash # Forever Free tier +GOOGLE_GENAI_USE_VERTEXAI=0 # Use ML Dev API (not Vertex AI) + +# Optional +SERPAPI_KEY=your-serpapi-key # 100 free searches/month +DISABLE_WEB_DRIVER=0 # Set to 1 to disable web scraping +``` + +### Run CLI Mode + +```bash +poetry run adk run brand_search_optimization + +# Example interaction: +> Nike +[Agent displays categories: Active, Socks, Swim...] +> Active +[Agent extracts keywords...] +> continue +[Agent searches competitors and provides SEO recommendations] +``` + +### Run A2A Server Mode + +```bash +python run_a2a.py + +# Access endpoints: +# http://localhost:8080/.well-known/agent-card.json (Agent Card) +# http://localhost:8080/health (Health Check) +# http://localhost:8080/invoke (A2A Invoke) +``` + +Test the agent card: +```bash +curl http://localhost:8080/.well-known/agent-card.json +``` + +--- + +## 📁 Project Structure + +``` +adk-agent/ +├── brand_search_optimization/ # Main agent code +│ ├── agent.py # Root agent orchestration +│ ├── prompt.py # System prompts +│ ├── sub_agents/ # Sub-agent implementations +│ │ ├── keyword_finding/ # Keyword extraction +│ │ ├── search_results/ # Competitor intelligence +│ │ └── comparison/ # SEO analysis +│ ├── tools/ # Tool implementations +│ │ ├── bq_connector.py # BigQuery integration +│ │ └── serp_connector.py # SerpAPI integration +│ └── shared_libraries/ +│ └── constants.py # Configuration constants +├── deployment/ # Deployment scripts +│ ├── run.sh # Local run script +│ ├── eval.sh # Evaluation script +│ └── deploy.py # Vertex AI deployment +├── eval/ # Evaluation datasets +│ └── data/ +│ ├── 
eval_data1.evalset.json +│ └── test_config.json +├── tests/ # Unit tests +│ └── unit/ +│ └── test_tools.py +├── run_a2a.py # A2A server entry point +├── Dockerfile # Container image +├── pyproject.toml # Dependencies +└── README.md # This file +``` + +--- + +## 🔧 Configuration + +### BigQuery Setup + +The agent uses Google's **public dataset** - no setup required! + +Dataset: `bigquery-public-data.thelook_ecommerce.products` + +**Brands available**: Nike, Adidas, Levi's, Calvin Klein, Columbia, Puma, Under Armour, Reebok, and 100+ more. + +### SerpAPI Setup (Optional) + +For production-quality competitor data: + +1. Sign up at [SerpAPI](https://serpapi.com/) +2. Get free API key (100 searches/month) +3. Add to `.env`: `SERPAPI_KEY=your_key_here` + +Without SerpAPI, the agent uses web scraping fallback (slower, may encounter bot detection). + +### Web Scraping Setup + +**Local (Windows ARM64)**: +- Uses Firefox (better ARM64 support) +- Install Firefox: `winget install Mozilla.Firefox` + +**Cloud (Linux x86_64)**: +- Uses Chrome (standard in containers) +- Dockerfile includes Chrome setup +- Set `DISABLE_WEB_DRIVER=1` for Cloud Run serverless + +--- + +## 🚢 Deployment + +### Option 1: Local Development + +```bash +# CLI mode +poetry run adk run brand_search_optimization + +# A2A server mode +python run_a2a.py +``` + +### Option 2: Docker + +```bash +# Build +docker build -t brand-search-optimization . + +# Run +docker run -p 8080:8080 \ + -e GOOGLE_API_KEY=$GOOGLE_API_KEY \ + -e GOOGLE_CLOUD_PROJECT=$GOOGLE_CLOUD_PROJECT \ + -e SERPAPI_KEY=$SERPAPI_KEY \ + brand-search-optimization + +# Test +curl http://localhost:8080/.well-known/agent-card.json +``` + +### Option 3: Google Cloud Run + +```bash +# Deploy from source +gcloud run deploy brand-search-agent \ + --source . 
\
+  --region us-central1 \
+  --allow-unauthenticated \
+  --set-env-vars GOOGLE_CLOUD_PROJECT=$GOOGLE_CLOUD_PROJECT,\
+MODEL=gemini-2.0-flash,\
+DISABLE_WEB_DRIVER=1
+
+# Get URL
+gcloud run services describe brand-search-agent \
+  --region us-central1 \
+  --format 'value(status.url)'
+```
+
+### Option 4: Vertex AI Agent Engine
+
+```bash
+# Set staging bucket
+export STAGING_BUCKET=your-bucket-name
+
+# Deploy
+adk deploy brand_search_optimization \
+  --project your-project \
+  --location us-central1
+
+# Expose as A2A
+adk a2a expose brand_search_optimization \
+  --project your-project \
+  --location us-central1
+```
+
+See [ARCHITECTURE.md](../docs/ARCHITECTURE.md) for deployment architecture details.
+
+---
+
+## 🧪 Testing
+
+### Unit Tests
+
+```bash
+poetry run pytest tests/unit/test_tools.py -v
+```
+
+### Integration Test
+
+```bash
+poetry run adk run brand_search_optimization
+
+# Test workflow:
+> Nike
+[Verify categories displayed]
+> Active
+[Verify keywords extracted]
+> continue
+[Verify competitor data fetched]
+> continue
+[Verify SEO recommendations generated]
+```
+
+### Evaluation
+
+```bash
+adk eval brand_search_optimization \
+  eval/data/eval_data1.evalset.json \
+  --config_file_path eval/data/test_config.json
+```
+
+### A2A Endpoint Test
+
+```bash
+# Start server
+python run_a2a.py
+
+# In another terminal:
+curl http://localhost:8080/health
+curl http://localhost:8080/.well-known/agent-card.json
+
+# Test invoke
+curl -X POST http://localhost:8080/invoke \
+  -H "Content-Type: application/json" \
+  -d '{"message": "I want to optimize Nike products"}'
+```
+
+---
+
+## 🔍 Troubleshooting
+
+### Import Errors
+
+```bash
+# Ensure you're in the adk-agent directory
+cd adk-agent
+
+# Reinstall dependencies
+poetry install --no-cache
+```
+
+### BigQuery Permission Errors
+
+```bash
+# Authenticate with Google Cloud
+gcloud auth application-default login
+
+# Verify project ID
+gcloud config get-value project
+```
+
+### Web Scraping Failures
+
+1. **Enable SerpAPI**: Add `SERPAPI_KEY` to `.env` (preferred solution)
+2. **Check Firefox**: Ensure Firefox is installed for local development
+3. **Disable scraping**: Set `DISABLE_WEB_DRIVER=1` for serverless deployments
+
+### Gemini API Rate Limits
+
+- Free tier: 1500 requests/day
+- Monitor usage in [Google AI Studio](https://aistudio.google.com/)
+- Consider paid tier for production: $0.075/1M tokens
+
+### Debug Logs
+
+```bash
+# Local
+tail -f C:\Users\\AppData\Local\Temp\agents_log\agent.*.log
+
+# Cloud Run
+gcloud run logs tail brand-search-agent --region us-central1
+```
+
+---
+
+## 📊 Performance Metrics
+
+### Typical Execution Times
+- Category selection: ~2 seconds
+- Keyword extraction: ~5 seconds
+- Competitor search (SerpAPI): ~3 seconds
+- Competitor search (web scraping): ~15-30 seconds
+- SEO analysis: ~10 seconds
+
+**Total**: 25-50 seconds per workflow
+
+### Token Usage
+- Category selection: ~500 tokens
+- Keyword extraction: ~2,000 tokens
+- Search results: ~1,500 tokens
+- Comparison report: ~3,000 tokens
+
+**Total**: ~7,000 tokens per audit (free tier supports 200+ audits/day)
+
+---
+
+## 🎨 Customization
+
+See [ARCHITECTURE.md](../docs/ARCHITECTURE.md) for architecture details and extension patterns.
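
As a quick illustration of the extension pattern: ADK registers plain Python functions as tools and uses their names, type hints, and docstrings for tool selection, exactly as `agent.py` does with its `tools=[...]` list. The sketch below is a minimal, hypothetical example — `get_average_price_for_brand`, its sample data, and its return shape are illustrative only, not part of this sample:

```python
# Hypothetical custom tool. ADK treats plain Python functions as tools,
# reading the signature and docstring for tool selection. The function
# name, data, and return shape here are illustrative only.
def get_average_price_for_brand(brand: str, category: str) -> dict:
    """Returns the average retail price for a brand's products in a category.

    Args:
        brand: Brand name, e.g. "Nike".
        category: Product category, e.g. "Active".
    """
    # A real implementation would query the public BigQuery dataset
    # (see tools/bq_connector.py); this stub returns a fixed shape so
    # the wiring can be exercised offline.
    sample_averages = {("Nike", "Active"): 42.50, ("Adidas", "Active"): 38.00}
    return {
        "status": "success",
        "brand": brand,
        "category": category,
        "average_price": sample_averages.get((brand, category), 0.0),
    }
```

Registering the tool then follows the same pattern as `agent.py`: append the function to the root agent's `tools=[...]` list alongside the BigQuery connectors, and mention it in the instruction prompt so the model knows when to call it.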
+ +--- + +## Documentation + +- **[Root README](../README.md)** — Overall A2A reference implementation +- **[Client Agent README](../a2a-client-agent/README.md)** — M365 SDK + Semantic Kernel consumer +- **[A2A Patterns](../docs/A2A_PATTERNS.md)** — Protocol deep dive with sequence diagrams +- **[Architecture](../docs/ARCHITECTURE.md)** — System design details + +--- + +## 🔗 Resources + +- [Google ADK Documentation](https://google.github.io/adk-docs/) +- [A2A Protocol](https://a2a-protocol.org/) +- [BigQuery Public Datasets](https://cloud.google.com/bigquery/public-data) +- [SerpAPI Documentation](https://serpapi.com/docs) +- [Gemini API](https://ai.google.dev/gemini-api/docs) + +--- + +## 📄 License + +Apache 2.0 - See ../LICENSE file diff --git a/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/__init__.py b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/__init__.py new file mode 100644 index 00000000..c48963cd --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/__init__.py @@ -0,0 +1,15 @@ +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from . 
import agent diff --git a/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/agent.py b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/agent.py new file mode 100644 index 00000000..b2112385 --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/agent.py @@ -0,0 +1,34 @@ +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""Defines Brand Search Optimization Agent""" + +from google.adk.agents.llm_agent import Agent + +from . 
import prompt +from .shared_libraries import constants +from .tools import bq_connector +from .sub_agents.search_results.agent import extract_google_shopping_products + +root_agent = Agent( + model=constants.MODEL, + name=constants.AGENT_NAME, + description=constants.DESCRIPTION, + instruction=prompt.ROOT_PROMPT, + tools=[ + bq_connector.get_categories_for_brand, + bq_connector.get_product_details_for_brand, + extract_google_shopping_products, + ], +) diff --git a/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/prompt.py b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/prompt.py new file mode 100644 index 00000000..b478b816 --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/prompt.py @@ -0,0 +1,86 @@ +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""Defines the prompts in the brand search optimization agent.""" + +ROOT_PROMPT = """ + You are a brand search optimization assistant that performs competitive SEO analysis. + You have direct access to data tools — use them to gather data and produce a comprehensive report. + + **STEP 1: Parse the Input** + + Examine the user's message to extract brand name and category: + - If BOTH brand AND category are provided (e.g., "Analyze Nike in Active category", + "Nike shoes", "Compare Adidas sportswear"): + → Proceed directly to Step 2. 
+ - If ONLY brand is provided (e.g., "Analyze Nike"): + → Call `get_categories_for_brand(brand="[brand_name]")` to show categories. + → Ask: "Which category would you like to analyze?" + → Wait for response, then proceed to Step 2. + - If NEITHER is provided: + → Ask: "What brand would you like to analyze?" + → Wait for response, then show categories and ask for selection. + + **STEP 2: Get Brand Product Data** + + Call `get_product_details_for_brand(brand="[BRAND]", category="[CATEGORY]")` to retrieve + the brand's product titles in that category. Analyze the product titles to identify + the best keyword for competitor research: + - Remove the brand name from keywords + - Focus on product types, features, and use cases + - Pick the most specific but broadly-relevant keyword + - Store the top keyword and the list of brand product titles + + **STEP 3: Get Competitor Data** + + Call `extract_google_shopping_products(keyword="[TOP_KEYWORD]")` to find competitor + products for the chosen keyword. Store the competitor product titles. + + **STEP 4: Generate Comprehensive Report** + + Using ALL the data you've collected, generate a single comprehensive report with these sections: + + ## Brand Search Optimization Report: [BRAND] - [CATEGORY] + + ### 1. Keyword Analysis + - Top keyword: [keyword] + - Brand products found: [count] + - Product title table (from Step 2 data) + + ### 2. Competitor Landscape + - Competitor products found for "[keyword]" + - List of competitor product titles and prices (from Step 3 data) + + ### 3. SEO Comparison + Compare brand titles vs competitor titles: + | Brand Product Title | Competitor Product Title | Key Differences | + |---|---|---| + + ### 4. Ranking Factor Analysis + - **Keyword Placement**: Where does the target keyword appear in titles? + - **Specificity**: Generic vs specific product attributes + - **Why Competitors May Rank Higher**: Brand strength, attributes, social proof + + ### 5. 
Actionable Recommendations + For each brand product, provide specific recommendations with SEO benefit explanations: + 1. [Recommendation] - WHY: [SEO benefit] + 2. [Recommendation] - WHY: [SEO benefit] + + **CRITICAL RULES:** + - When brand and category are in the message, call tools and produce the full report automatically. + - Do NOT ask for confirmation between steps — run the complete pipeline. + - Do NOT suggest adding competitor brand names to brand titles. + - Every recommendation MUST explain the SEO benefit. + - If a tool returns an error, note it in the report and continue with available data. +""" diff --git a/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/shared_libraries/constants.py b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/shared_libraries/constants.py new file mode 100644 index 00000000..a72c9a7c --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/shared_libraries/constants.py @@ -0,0 +1,36 @@ +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""Defines constants.""" + +import os + +import dotenv + +dotenv.load_dotenv() + +AGENT_NAME = "brand_search_optimization" +DESCRIPTION = "A helpful assistant for brand search optimization." 
+PROJECT = os.getenv("GOOGLE_CLOUD_PROJECT", "EMPTY") +LOCATION = os.getenv("GOOGLE_CLOUD_LOCATION", "global") +# Default to gemini-2.0-flash for zero-billing (Google Forever Free tier) +MODEL = os.getenv("MODEL", "gemini-2.0-flash") + +# Public BigQuery dataset - no user configuration needed! +# This is Google's public e-commerce dataset with 100,000+ products +PUBLIC_DATASET = "bigquery-public-data.thelook_ecommerce.products" + +DISABLE_WEB_DRIVER = int(os.getenv("DISABLE_WEB_DRIVER", "0")) +WHL_FILE_NAME = os.getenv("ADK_WHL_FILE", "") +STAGING_BUCKET = os.getenv("STAGING_BUCKET", "") diff --git a/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/sub_agents/__init__.py b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/sub_agents/__init__.py new file mode 100644 index 00000000..0a2669d7 --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/sub_agents/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
diff --git a/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/sub_agents/comparison/agent.py b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/sub_agents/comparison/agent.py new file mode 100644 index 00000000..7ba0629d --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/sub_agents/comparison/agent.py @@ -0,0 +1,40 @@ +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from google.adk.agents.llm_agent import Agent + +from ...shared_libraries import constants +from . 
import prompt
+
+comparison_generator_agent = Agent(
+    model=constants.MODEL,
+    name="comparison_generator_agent",
+    description="A helpful agent to generate comparison.",
+    instruction=prompt.COMPARISON_AGENT_PROMPT,
+)
+
+comparison_critic_agent = Agent(
+    model=constants.MODEL,
+    name="comparison_critic_agent",
+    description="A helpful agent to critique comparison.",
+    instruction=prompt.COMPARISON_CRITIC_AGENT_PROMPT,
+)
+
+comparison_root_agent = Agent(
+    model=constants.MODEL,
+    name="comparison_root_agent",
+    description="A helpful agent to compare titles.",
+    instruction=prompt.COMPARISON_ROOT_AGENT_PROMPT,
+    sub_agents=[comparison_generator_agent, comparison_critic_agent],
+)
diff --git a/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/sub_agents/comparison/prompt.py b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/sub_agents/comparison/prompt.py
new file mode 100644
index 00000000..6e1f778b
--- /dev/null
+++ b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/sub_agents/comparison/prompt.py
@@ -0,0 +1,144 @@
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+COMPARISON_AGENT_PROMPT = """
+    You are an SEO analyst specializing in Google Shopping optimization.
+ + **YOUR TASK: Immediately generate a comprehensive SEO comparison report when you receive product data.** + + You will receive: + - Brand name and category + - Target keyword + - Brand product titles + - Competitor product titles + + **DO NOT ask for confirmation or say "I'm ready" - IMMEDIATELY generate the full analysis report.** + + + When comparing product titles, analyze these dimensions: + + 1. SEARCH INTENT ANALYSIS + - What is the user searching for with the target keyword? + - What product attributes matter most for this search? + - Is the intent informational, navigational, or transactional? + + 2. KEYWORD PLACEMENT & STRATEGY + - Where does the target keyword appear in titles? (Beginning = stronger signal) + - Are competitors using exact match or variations? + - What modifiers enhance discoverability? (e.g., "premium", "bestseller", "new") + + 3. RANKING FACTORS (Why competitors rank higher) + - Brand recognition strength + - Product attribute specificity (Color, Size, Material, Model) + - Action words that drive clicks (Shop, Buy, Sale, Clearance) + - Social proof indicators (Popular, Top-Rated, Bestseller) + + 4. GAPS & OPPORTUNITIES + - What relevant keywords do competitors use that align with search intent? + - Focus on: product attributes, benefits, use cases that match search intent + - Avoid suggesting: competitor brand names, irrelevant attributes + + + + ### Search Intent Analysis + [What are users looking for when they search for the target keyword?] + + ### Competitor Title Comparison + | Brand Product Title | Competitor Product Title | Key Differences | + |---|---|---| + | [Title] | [Competitor Title] | [Why competitor may rank higher] | + + ### Ranking Factor Analysis + 1. **Keyword Placement**: [Analysis of where target keyword appears] + 2. **Specificity**: [Generic vs specific product attributes] + 3. 
**Why Competitors May Rank Higher**: [Brand strength, attributes, social proof] + + ### Actionable Recommendations + For each brand product, provide specific recommendations: + 1. [Recommendation] - WHY: [SEO benefit explanation] + 2. [Recommendation] - WHY: [SEO benefit explanation] + 3. [Recommendation] - WHY: [SEO benefit explanation] + + + + - DO NOT suggest adding competitor brand names to brand titles + - DO NOT suggest keywords unrelated to search intent + - Every recommendation MUST explain the SEO benefit + - Focus on product attributes that improve relevance for the search keyword + - Avoid generic advice like "add more keywords" + - Consider: Would this recommendation help the product rank better for THIS specific keyword? + +""" + +COMPARISON_CRITIC_AGENT_PROMPT = """ + You are a senior SEO expert validating comparison reports. + + + Check if the analysis includes ALL of the following: + + 1. **Search Intent Explanation**: Does it explain what users are looking for with this keyword? + - NOT ACCEPTABLE: Just listing keywords + - ACCEPTABLE: Explaining user needs and search context + + 2. **Reasoning for Rankings**: Does it explain WHY competitors rank higher? + - NOT ACCEPTABLE: "They have more keywords" + - ACCEPTABLE: "They use specific product attributes that match search intent" + + 3. **Specific Recommendations**: Are recommendations actionable and specific? + - NOT ACCEPTABLE: "Add more keywords" + - ACCEPTABLE: "Add 'compression' attribute to title for better specificity" + + 4. **SEO Benefit Explanations**: Does each recommendation explain the benefit? + - NOT ACCEPTABLE: "Add 'lightweight' to title" + - ACCEPTABLE: "Add 'lightweight' to title - WHY: Matches search intent for performance running gear" + + 5. **No Competitor Brand Names**: Does it avoid suggesting to add competitor brands? + - NOT ACCEPTABLE: "Add 'Adidas' or 'Under Armour' keywords" + - ACCEPTABLE: Product attributes only + + 6. 
**No Generic Advice**: Does it avoid vague suggestions? + - NOT ACCEPTABLE: "Optimize your titles", "Add more details" + - ACCEPTABLE: Specific keyword additions with reasoning + + + + If ALL criteria are met, say: + "This comparison analysis is comprehensive and actionable. All SEO recommendations are specific and well-reasoned." + + If ANY criteria is missing, provide specific feedback: + - Missing Analysis: [What's missing and what should be added] + - Unclear Reasoning: [What needs clarification] + - Weak Recommendations: [What needs improvement] + - Generic Advice: [Which suggestions need to be more specific] + + Be thorough but constructive. Point out exactly what needs improvement. + +""" + +COMPARISON_ROOT_AGENT_PROMPT = """ + You are a routing agent for the comparison workflow. + + + 1. Route to `comparison_generator_agent` to generate SEO comparison analysis + 2. Route to `comparison_critic_agent` to validate the analysis quality + 3. Loop between these agents until the critic is satisfied + 4. Once satisfied, relay the final comparison report to the user + + + + The comparison_generator_agent will create a detailed SEO analysis. + The comparison_critic_agent will validate that the analysis is specific, actionable, and well-reasoned. + Loop until quality standards are met. + +""" diff --git a/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/sub_agents/keyword_finding/agent.py b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/sub_agents/keyword_finding/agent.py new file mode 100644 index 00000000..6739285e --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/sub_agents/keyword_finding/agent.py @@ -0,0 +1,31 @@ +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""Defines keyword finding agent.""" + +from google.adk.agents.llm_agent import Agent + +from ...shared_libraries import constants +from ...tools import bq_connector +from . import prompt + +keyword_finding_agent = Agent( + model=constants.MODEL, + name="keyword_finding_agent", + description="A helpful agent to find keywords", + instruction=prompt.KEYWORD_FINDING_AGENT_PROMPT, + tools=[ + bq_connector.get_product_details_for_brand, + ], +) diff --git a/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/sub_agents/keyword_finding/prompt.py b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/sub_agents/keyword_finding/prompt.py new file mode 100644 index 00000000..dc20a927 --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/sub_agents/keyword_finding/prompt.py @@ -0,0 +1,64 @@ +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +KEYWORD_FINDING_AGENT_PROMPT = """ +You are a keyword research agent. 
Your job is to find high-value keywords for a brand in a specific category.
+
+
+1. You will receive a request like "Find keywords for ASICS in Active category"
+2. Call `get_product_details_for_brand` with the brand name and category as parameters
+3. Analyze the product titles to identify keywords shoppers would use
+4. **CRITICAL**: Extract keywords WITHOUT the brand name - focus on product types, features, and use cases
+5. Group similar keywords and remove duplicates
+6. Rank keywords (brand-free yet descriptive keywords rank HIGHER - better for competitor research)
+7. Present ranked keywords in a markdown table with a clear header:
+   "### Keyword Analysis Results"
+8. IMPORTANT: State clearly at the bottom:
+   "🎯 **Top recommended keyword:** [KEYWORD]"
+9. IMMEDIATELY transfer back to the root agent
+
+
+
+- **Remove the brand name** from all extracted keywords
+- Focus on product types: "compression tights", "running shoes", "sports bra"
+- Include key features: "moisture-wicking", "high-waisted", "cushioned"
+- Use industry-standard terms: "activewear", "athletic socks", "performance apparel"
+- Avoid vague terms like just "Active" or "Socks" - add descriptive context
+
+Examples:
+✅ "Nike Pro Compression Tights" → Extract: "compression tights" or "athletic leggings"
+✅ "Adidas UltraBoost Running" → Extract: "running shoes" or "performance sneakers"
+✅ "Nike Dri-FIT Sports Bra" → Extract: "sports bra" or "athletic bra"
+❌ "Nike Active" → Too vague (includes brand + category only)
+❌ "Active" → Too generic, needs context
+
+
+
+### Keyword Analysis Results
+
+| Rank | Keyword | Reason |
+|------|---------|--------|
+| 1 | running shoes | Specific product type, high competitor relevance |
+| 2 | athletic shorts | Common product category, broad appeal |
+| 3 | training pants | Alternative term for similar products |
+
+🎯 **Top recommended keyword:** running shoes
+
+
+
+- After showing results, transfer back immediately
+- DO NOT wait for user confirmation
+- Keep output
concise and well-formatted + +""" diff --git a/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/sub_agents/search_results/agent.py b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/sub_agents/search_results/agent.py new file mode 100644 index 00000000..c8d6796e --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/sub_agents/search_results/agent.py @@ -0,0 +1,489 @@ +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import time +import warnings +import re + +import selenium +from google.adk.agents.llm_agent import Agent +from google.adk.tools.load_artifacts_tool import load_artifacts_tool +from google.adk.tools.tool_context import ToolContext +from google.genai import types +from PIL import Image +from selenium.webdriver.common.by import By + +try: + from bs4 import BeautifulSoup # type: ignore + BS4_AVAILABLE = True +except ImportError: + BS4_AVAILABLE = False + BeautifulSoup = None # type: ignore + print("⚠️ Warning: BeautifulSoup not installed. Product extraction will fail. Run: pip install beautifulsoup4") + +from ...shared_libraries import constants +from . 
import prompt + +warnings.filterwarnings("ignore", category=UserWarning) + +# Lazy initialization - driver will be created only when needed +driver = None + +def _ensure_driver(): + """Lazy initialization of Selenium WebDriver - only when actually needed.""" + global driver + if driver is not None: + return driver + + if constants.DISABLE_WEB_DRIVER: + return None + + import os + # Detect if running on Cloud Run (check for K_SERVICE env var) + is_cloud_run = os.getenv('K_SERVICE') is not None + + if is_cloud_run: + # Use Chrome on Cloud Run (standard in Cloud Run containers) + print("🌩️ Detected Cloud Run environment - using Chrome") + from selenium.webdriver.chrome.service import Service as ChromeService + from selenium.webdriver.chrome.options import Options as ChromeOptions + + chrome_options = ChromeOptions() + chrome_options.add_argument("--headless=new") + chrome_options.add_argument("--no-sandbox") + chrome_options.add_argument("--disable-dev-shm-usage") + chrome_options.add_argument("--disable-gpu") + chrome_options.add_argument("--window-size=1920,1080") + chrome_options.add_argument("--remote-debugging-port=9222") + chrome_options.add_argument("--disable-extensions") + chrome_options.add_argument("--disable-software-rasterizer") + + # Selenium Manager handles ChromeDriver automatically + driver = selenium.webdriver.Chrome(options=chrome_options) + else: + # Use Firefox locally (better ARM64 Windows support) + print("💻 Detected local environment - using Firefox") + from selenium.webdriver.firefox.service import Service as FirefoxService + from selenium.webdriver.firefox.options import Options as FirefoxOptions + from webdriver_manager.firefox import GeckoDriverManager + + firefox_options = FirefoxOptions() + firefox_options.add_argument("--headless") + firefox_options.add_argument("--width=1920") + firefox_options.add_argument("--height=1080") + + # Use GeckoDriver with automatic management + driver = selenium.webdriver.Firefox( + 
service=FirefoxService(GeckoDriverManager().install()),
+            options=firefox_options
+        )
+
+    return driver
+
+
+def go_to_url(url: str) -> str:
+    """Navigates the browser to the given URL."""
+    if constants.DISABLE_WEB_DRIVER:
+        return f"Web driver is disabled. Cannot navigate to {url}. Set DISABLE_WEB_DRIVER=0 in .env to use real Google Shopping scraping."
+
+    driver = _ensure_driver()
+    if driver is None:
+        return "Web driver initialization failed"
+
+    print(f"🌐 Navigating to URL: {url}")
+    driver.get(url.strip())
+    return f"Navigated to URL: {url}"
+
+
+async def take_screenshot(tool_context: ToolContext) -> dict:
+    """Takes a screenshot and saves it as a timestamped artifact. Call the 'load_artifacts' tool afterwards to load the image."""
+    if constants.DISABLE_WEB_DRIVER:
+        return {"error": "Web driver disabled. Cannot take screenshot. Set DISABLE_WEB_DRIVER=0 in .env"}
+
+    driver = _ensure_driver()
+    if driver is None:
+        return {"error": "Web driver initialization failed"}
+
+    timestamp = time.strftime("%Y%m%d-%H%M%S")
+    filename = f"screenshot_{timestamp}.png"
+    print(f"📸 Taking screenshot and saving as: {filename}")
+    driver.save_screenshot(filename)
+
+    # Read the PNG bytes from disk; Image.tobytes() would return raw
+    # uncompressed pixel data rather than a valid PNG payload.
+    with open(filename, "rb") as f:
+        png_bytes = f.read()
+
+    await tool_context.save_artifact(
+        filename,
+        types.Part.from_bytes(data=png_bytes, mime_type="image/png"),
+    )
+
+    return {"status": "ok", "filename": filename}
+
+
+def click_at_coordinates(x: int, y: int) -> str:
+    """Scrolls to the specified coordinates and clicks the page body."""
+    driver = _ensure_driver()
+    if driver is None:
+        return "Web driver is not available."
+    driver.execute_script(f"window.scrollTo({x}, {y});")
+    driver.find_element(By.TAG_NAME, "body").click()
+    return f"Clicked at coordinates ({x}, {y})."
+
+
+def find_element_with_text(text: str) -> str:
+    """Finds an element on the page with the given text."""
+    print(f"🔍 Finding element with text: '{text}'")
+
+    driver = _ensure_driver()
+    if driver is None:
+        return "Web driver is not available."
+
+    try:
+        element = driver.find_element(By.XPATH, f"//*[text()='{text}']")
+        if element:
+            return "Element found."
+        else:
+            return "Element not found."
+    except selenium.common.exceptions.NoSuchElementException:
+        return "Element not found."
+    except selenium.common.exceptions.ElementNotInteractableException:
+        return "Element not interactable."
+
+
+def click_element_with_text(text: str) -> str:
+    """Clicks on an element on the page with the given text."""
+    print(f"🖱️ Clicking element with text: '{text}'")
+
+    driver = _ensure_driver()
+    if driver is None:
+        return "Web driver is not available."
+
+    try:
+        element = driver.find_element(By.XPATH, f"//*[text()='{text}']")
+        element.click()
+        return f"Clicked element with text: {text}"
+    except selenium.common.exceptions.NoSuchElementException:
+        return "Element not found, cannot click."
+    except selenium.common.exceptions.ElementNotInteractableException:
+        return "Element not interactable, cannot click."
+    except selenium.common.exceptions.ElementClickInterceptedException:
+        return "Element click intercepted, cannot click."
+
+
+def enter_text_into_element(text_to_enter: str, element_id: str) -> str:
+    """Enters text into an element with the given ID."""
+    print(f"📝 Entering text '{text_to_enter}' into element with ID: {element_id}")
+
+    driver = _ensure_driver()
+    if driver is None:
+        return "Web driver is not available."
+
+    try:
+        input_element = driver.find_element(By.ID, element_id)
+        input_element.send_keys(text_to_enter)
+        return f"Entered text '{text_to_enter}' into element with ID: {element_id}"
+    except selenium.common.exceptions.NoSuchElementException:
+        return "Element with given ID not found."
+    except selenium.common.exceptions.ElementNotInteractableException:
+        return "Element not interactable, cannot enter text."
+
+
+def scroll_down_screen() -> str:
+    """Scrolls down the screen by a moderate amount."""
+    print("⬇️ Scrolling down the screen")
+    driver = _ensure_driver()
+    if driver is None:
+        return "Web driver is not available."
+    driver.execute_script("window.scrollBy(0, 500)")
+    return "Scrolled down the screen."
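The browsing tools above return plain status strings, and `analyze_webpage_and_determine_action()` later in this module asks the model for one-line action plans such as `CLICK: Learn more` or `ENTER_TEXT: search_box_id, Gemini API`. A hypothetical helper (not part of the sample) that splits such a plan into an action name and its arguments could look like this:

```python
def parse_action_plan(plan: str):
    """Parse a one-line action plan into (action, args).

    Hypothetical helper; the plan formats follow the options listed in
    analyze_webpage_and_determine_action().
    """
    plan = plan.strip()
    if plan.startswith("CLICK:"):
        # Everything after the colon is the element text
        return "CLICK", [plan[len("CLICK:"):].strip()]
    if plan.startswith("ENTER_TEXT:"):
        # Payload is "<element_id>, <text_to_enter>"
        element_id, _, text = plan[len("ENTER_TEXT:"):].partition(",")
        return "ENTER_TEXT", [element_id.strip(), text.strip()]
    # Bare actions: SCROLL_DOWN, TASK_COMPLETED, STUCK, ASK_USER, ...
    return plan, []
```

A driver loop could then dispatch `CLICK` to `click_element_with_text` and `ENTER_TEXT` to `enter_text_into_element`.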
+
+
+def get_page_source() -> str:
+    """Returns the current page source (truncated to a safe length)."""
+    LIMIT = 1000000
+    print("📄 Getting page source...")
+    driver = _ensure_driver()
+    if driver is None:
+        return "Web driver is not available."
+    return driver.page_source[0:LIMIT]
+
+
+def analyze_webpage_and_determine_action(
+    page_source: str, user_task: str, tool_context: ToolContext
+) -> str:
+    """Analyzes the webpage and determines the next action (scroll, click, etc.)."""
+    print("🤔 Analyzing webpage and determining next action...")
+
+    analysis_prompt = f"""
+    You are an expert web page analyzer.
+    You have been tasked with controlling a web browser to achieve a user's goal.
+    The user's task is: {user_task}
+    Here is the current HTML source code of the webpage:
+    ```html
+    {page_source}
+    ```
+
+    Based on the webpage content and the user's task, determine the next best action to take.
+    Consider actions like: completing the page source, scrolling down to see more content, clicking on links or buttons to navigate, or entering text into input fields.
+
+    Think step-by-step:
+    1. Briefly analyze the user's task and the webpage content.
+    2. If the source code appears to be incomplete, complete it to make it valid HTML. Keep the product titles as is; only complete missing HTML syntax.
+    3. Identify potential interactive elements on the page (links, buttons, input fields, etc.).
+    4. Determine if scrolling is necessary to reveal more content.
+    5. Decide on the most logical next action to progress towards completing the user's task.
+
+    Your response should be a concise action plan, choosing from these options:
+    - "COMPLETE_PAGE_SOURCE": If the source code appears to be incomplete, complete it to make it valid HTML.
+    - "SCROLL_DOWN": If more content needs to be loaded by scrolling.
+    - "CLICK: <element_text>": If a specific element with text should be clicked. Replace <element_text> with the actual text of the element.
+    - "ENTER_TEXT: <element_id>, <text_to_enter>": If text needs to be entered into an input field. Replace <element_id> with the ID of the input element and <text_to_enter> with the text to enter.
+    - "TASK_COMPLETED": If you believe the user's task is likely completed on this page.
+    - "STUCK": If you are unsure what to do next or cannot progress further.
+    - "ASK_USER": If you need clarification from the user on what to do next.
+
+    If you choose "CLICK" or "ENTER_TEXT", ensure the element text or ID is clearly identifiable from the webpage source. If multiple similar elements exist, choose the most relevant one based on the user's task.
+    If you are unsure, or if none of the above actions seem appropriate, default to "ASK_USER".
+
+    Example Responses:
+    - SCROLL_DOWN
+    - CLICK: Learn more
+    - ENTER_TEXT: search_box_id, Gemini API
+    - TASK_COMPLETED
+    - STUCK
+    - ASK_USER
+
+    What is your action plan?
+    """
+    return analysis_prompt
+
+
+def _extract_amazon_products(keyword: str) -> list:
+    """
+    Scrape Amazon for competing products. More stable than Google Shopping.
+
+    Args:
+        keyword: Search term (e.g., "Nike Active")
+
+    Returns:
+        List of product dicts with 'title' and 'price' keys, or an empty list on failure
+    """
+    try:
+        driver = _ensure_driver()
+        if driver is None:
+            return []
+
+        print("🛒 Trying Amazon...")
+        url = f"https://www.amazon.com/s?k={keyword.replace(' ', '+')}"
+        driver.get(url)
+        time.sleep(2)  # Wait for page load
+
+        soup = BeautifulSoup(driver.page_source, 'html.parser')
+        products = []
+
+        # Amazon product containers
+        items = soup.select('div[data-component-type="s-search-result"]')
+
+        for item in items[:10]:
+            try:
+                # Title: h2 > a > span
+                title_elem = item.select_one('h2.s-line-clamp-2 span')
+                # Price: span.a-price > span.a-offscreen
+                price_elem = item.select_one('span.a-price span.a-offscreen')
+
+                if title_elem:
+                    title = title_elem.get_text(strip=True)
+                    price = price_elem.get_text(strip=True) if price_elem else "N/A"
+
+                    if title and len(title) > 10:
+                        products.append({"title": title, "price": price})
+                        if len(products) >= 3:
+                            break
+            except Exception:
+                # Skip items whose markup does not match the expected selectors
+                continue
+
+        if products:
+            print(f"✅ Amazon: Extracted {len(products)}
products")
+        return products
+
+    except Exception as e:
+        print(f"⚠️ Amazon failed: {str(e)}")
+        return []
+
+
+def _extract_google_shopping_products_internal(keyword: str) -> list:
+    """
+    Scrape Google Shopping as a fallback.
+
+    Args:
+        keyword: Search term
+
+    Returns:
+        List of product dicts or an empty list
+    """
+    try:
+        driver = _ensure_driver()
+        if driver is None:
+            return []
+
+        print("🔍 Trying Google Shopping...")
+        url = f"https://www.google.com/search?tbm=shop&q={keyword.replace(' ', '+')}"
+        driver.get(url)
+        time.sleep(2)
+
+        soup = BeautifulSoup(driver.page_source, 'html.parser')
+        products = []
+
+        # Multiple selector patterns for Google Shopping
+        selectors = [
+            {'container': '.sh-dgr__grid-result', 'title': '.tAxDx', 'price': '.a8Pemb'},
+            {'container': '.sh-dlr__list-result', 'title': '.Xjkr3b', 'price': '.a8Pemb'},
+            {'container': '.sh-np__click-target', 'title': 'h3', 'price': 'span[aria-label*="dollar"]'},
+            {'container': '[data-sh-np]', 'title': '.tAxDx, .Xjkr3b', 'price': '.a8Pemb'},
+        ]
+
+        for selector_set in selectors:
+            containers = soup.select(selector_set['container'])
+
+            for container in containers[:10]:
+                try:
+                    title_elem = container.select_one(selector_set['title'])
+                    price_elem = container.select_one(selector_set['price'])
+
+                    if title_elem:
+                        title = title_elem.get_text(strip=True)
+                        price = price_elem.get_text(strip=True) if price_elem else "N/A"
+
+                        # Clean price
+                        price_match = re.search(r'\$[\d,]+\.?\d*', price)
+                        if price_match:
+                            price = price_match.group(0)
+
+                        if title and len(title) > 5:
+                            products.append({"title": title, "price": price})
+                            if len(products) >= 3:
+                                break
+                except Exception:
+                    # Skip containers whose markup does not match the selectors
+                    continue
+
+            if len(products) >= 3:
+                break
+
+        if products:
+            print(f"✅ Google Shopping: Extracted {len(products)} products")
+        return products
+
+    except Exception as e:
+        print(f"⚠️ Google Shopping failed: {str(e)}")
+        return []
+
+
+def extract_google_shopping_products(keyword: str = None) -> str:
+    """
+    Multi-source product extraction
with intelligent fallback: + 1. SerpAPI (if configured) - production quality, no bot detection + 2. Amazon web scraping (fallback) + 3. Google Shopping web scraping (last resort) + + Uses structured API data when available, falls back to HTML parsing. + + Args: + keyword: Search term (e.g., "moisture wicking socks"). If not provided, extracts from current URL. + + Returns: + Formatted string with extracted competitor products or error message + """ + if constants.DISABLE_WEB_DRIVER: + return "Error: Web driver is disabled. Set DISABLE_WEB_DRIVER=0 in .env to enable real product extraction." + + if not BS4_AVAILABLE: + return "Error: BeautifulSoup not installed. Run: pip install beautifulsoup4" + + try: + # If keyword not provided, try to extract from current URL + if not keyword: + driver = _ensure_driver() + if driver: + current_url = driver.current_url + # Handle both Amazon (k=) and Google Shopping (q=) URL parameters + keyword_match = re.search(r'[?&](?:q|k)=([^&]+)', current_url) + keyword = keyword_match.group(1).replace('+', ' ') if keyword_match else "product" + else: + keyword = "product" + + # DO NOT extract brand from keyword - the keyword is already brand-free from keyword_finding_agent + # Brand filtering should be done by root agent passing exclude_brand explicitly + brand = None # Don't filter by brand from keyword + + print(f"🔍 Extracting competitor products for: {keyword}") + + # Try SerpAPI first (most reliable, no bot detection) + try: + from ...tools.serp_connector import get_competitor_products + + competitors = get_competitor_products(keyword, exclude_brand=brand) + + # Check if it's an error response + if isinstance(competitors, dict) and "error" in competitors: + print(f"⚠️ SerpAPI unavailable: {competitors['message']}") + print("🌐 Falling back to web scraping...") + elif competitors: + # Format SerpAPI results - show all 10 products + result = f"### Competitor Search Results\n\nTop {len(competitors)} competing products for 
\"{keyword}\":\n\n"
+                for i, product in enumerate(competitors, 1):
+                    result += f"{i}. {product['title']} - {product['price']}\n"
+                result += "\n*Source: SerpAPI (Google Shopping)*"
+                return result
+        except ImportError:
+            print("⚠️ SerpAPI library not installed (pip install google-search-results)")
+            print("🌐 Using web scraping fallback...")
+        except Exception as e:
+            print(f"⚠️ SerpAPI error: {str(e)}")
+            print("🌐 Using web scraping fallback...")
+
+        # Fallback to web scraping (may hit bot detection)
+        products = _extract_amazon_products(keyword)
+
+        # Fall back to Google Shopping scraping if Amazon returned nothing
+        if not products:
+            products = _extract_google_shopping_products_internal(keyword)
+
+        # Save debug HTML from the last attempt. Use _ensure_driver() here;
+        # the module-level `driver` may still be None on this code path.
+        import os
+        debug_driver = _ensure_driver()
+        if debug_driver is not None:
+            debug_file = os.path.join(os.getcwd(), "competitor_search_debug.html")
+            with open(debug_file, "w", encoding="utf-8") as f:
+                f.write(debug_driver.page_source)
+            print(f"💾 Saved debug HTML to: {debug_file}")
+
+        # Format results
+        if products:
+            result = "### Competitor Search Results\n\nTop competing products:\n\n"
+            for i, product in enumerate(products[:3], 1):
+                result += f"{i}. {product['title']} - {product['price']}\n"
+            print(f"✅ Successfully extracted {len(products[:3])} competitor products")
+            return result
+        else:
+            print("❌ All extraction sources failed")
+            return "Error: Could not extract competitor products from Amazon or Google Shopping. Check the debug HTML file for details."
+
+    except Exception as e:
+        print(f"❌ Error extracting products: {str(e)}")
+        return f"Error extracting products: {str(e)}. Try taking a screenshot to see what the page looks like."
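`extract_google_shopping_products()` above tries SerpAPI, then Amazon, then Google Shopping, stopping at the first source that yields products. That control flow reduces to a first-success chain; a minimal sketch with stand-in fetchers (the source names and lambdas are illustrative, not the sample's API):

```python
def first_successful(sources, keyword):
    """Try (name, fetch) pairs in order; return the first non-empty result."""
    for name, fetch in sources:
        products = fetch(keyword)
        if products:
            return name, products
    return None, []

# Stand-in fetchers mimicking the sample's ordering: SerpAPI unavailable,
# Amazon succeeds, so Google Shopping is never consulted.
sources = [
    ("serpapi", lambda k: []),
    ("amazon", lambda k: [{"title": "Sock", "price": "$5"}]),
    ("google_shopping", lambda k: []),
]
name, products = first_successful(sources, "socks")
```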
+ + +search_results_agent = Agent( + model=constants.MODEL, + name="search_results_agent", + description="Get top 10 search results info for a keyword using web browsing and HTML parsing", + instruction=prompt.SEARCH_RESULT_AGENT_PROMPT, + tools=[ + go_to_url, + take_screenshot, + find_element_with_text, + click_element_with_text, + enter_text_into_element, + scroll_down_screen, + get_page_source, + load_artifacts_tool, + analyze_webpage_and_determine_action, + extract_google_shopping_products, + ], +) diff --git a/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/sub_agents/search_results/prompt.py b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/sub_agents/search_results/prompt.py new file mode 100644 index 00000000..cb443efd --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/sub_agents/search_results/prompt.py @@ -0,0 +1,55 @@ +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""Defines Search Results Agent Prompts""" + +SEARCH_RESULT_AGENT_PROMPT = """ + You are an autonomous web search agent that extracts competitor product data. + + + 1. When you receive a request like "Find competing products for moisture management socks", extract the keyword + - The keyword is everything after "for" (e.g., "moisture wicking socks") + + 2. 
IMMEDIATELY call extract_google_shopping_products(keyword="[extracted_keyword]")
+       - Pass the keyword as a parameter
+       - Example: extract_google_shopping_products(keyword="moisture wicking socks")
+       - DO NOT ask for confirmation
+       - DO NOT say "I will search"
+       - JUST call the tool right away
+       - This tool automatically tries multiple sources:
+         * Amazon (primary - most stable)
+         * Google Shopping (fallback)
+       - Returns real product titles and prices from whichever source succeeds
+       - Always gets real data - no mock results
+
+    3. Format the extracted results clearly:
+       ### Competitor Search Results
+
+       Top 10 competing products for "[keyword]":
+       1. [Product Title] - [Price]
+       2. [Product Title] - [Price]
+       3. [Product Title] - [Price]
+
+    4. IMMEDIATELY transfer back to the main agent with the results
+
+
+    - Always call extract_google_shopping_products() after navigation
+    - The tool name stays the same but now searches multiple sources automatically
+    - After showing results, transfer back immediately
+    - DO NOT ask follow-up questions
+    - Keep output clear and well-formatted
+    - If extraction fails from all sources, report the error clearly
+
+"""
diff --git a/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/tools/bq_connector.py b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/tools/bq_connector.py
new file mode 100644
index 00000000..e296fd50
--- /dev/null
+++ b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/tools/bq_connector.py
@@ -0,0 +1,144 @@
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +""" +Defines tools for brand search optimization agent. + +This tool uses Google's public BigQuery dataset 'thelook_ecommerce' - no setup required! +Users can immediately test with brands like: Nike, Adidas, Levi's, Calvin Klein, etc. +""" + +from google.cloud import bigquery + +from ..shared_libraries import constants + +# Initialize the BigQuery client outside the function +try: + client = bigquery.Client(project=constants.PROJECT) +except Exception as e: + print(f"Error initializing BigQuery client: {e}") + client = None # Set client to None if initialization fails + + +def get_categories_for_brand(brand: str): + """ + Retrieves distinct product categories available for a given brand. + This helps users select which category to analyze. + + Args: + brand (str): The brand name (e.g., "Nike", "ASICS") + + Returns: + str: A list of categories with product counts. + """ + brand = brand.strip() + + if client is None: + return "BigQuery client initialization failed. Please set GOOGLE_CLOUD_PROJECT in your .env file." 
+ + query = """ + SELECT + category as Category, + COUNT(*) as ProductCount + FROM `bigquery-public-data.thelook_ecommerce.products` + WHERE LOWER(brand) = LOWER(@brand) + GROUP BY category + ORDER BY ProductCount DESC + """ + + job_config = bigquery.QueryJobConfig( + query_parameters=[ + bigquery.ScalarQueryParameter("brand", "STRING", brand), + ] + ) + + try: + query_job = client.query(query, job_config=job_config) + results = query_job.result() + + categories = [] + for row in results: + categories.append(f"- **{row.Category}** ({row.ProductCount} products)") + + if not categories: + return f"No products found for brand '{brand}'. Try: Nike, Adidas, Levi's, Calvin Klein, Columbia, or Puma." + + return "\n".join(categories) + + except Exception as e: + return f"Error querying categories: {str(e)}" + + +def get_product_details_for_brand(brand: str, category: str = None): + """ + Retrieves real product data from Google's public 'thelook_ecommerce' dataset. + Can optionally filter by category. + + Args: + brand (str): The brand name to search for (e.g., "Nike", "ASICS") + category (str, optional): The category to filter by (e.g., "Active", "Tops & Tees") + + Returns: + str: A markdown table with product details filtered by category if specified. + """ + brand = brand.strip() + + if client is None: + return "BigQuery client initialization failed. Please set GOOGLE_CLOUD_PROJECT in your .env file." 
+
+    # Build a parameterized query with an optional category filter.
+    # Query parameters avoid SQL injection from user-supplied brand/category
+    # values (same pattern as get_categories_for_brand above).
+    where_clause = "WHERE LOWER(brand) = LOWER(@brand)"
+    query_parameters = [bigquery.ScalarQueryParameter("brand", "STRING", brand)]
+    if category:
+        category = category.strip()
+        where_clause += " AND LOWER(category) = LOWER(@category)"
+        query_parameters.append(
+            bigquery.ScalarQueryParameter("category", "STRING", category)
+        )
+
+    query = f"""
+        SELECT
+            id,
+            name as Title,
+            category as Category,
+            retail_price as Price,
+            brand as Brand
+        FROM `bigquery-public-data.thelook_ecommerce.products`
+        {where_clause}
+        LIMIT 10
+    """
+
+    job_config = bigquery.QueryJobConfig(query_parameters=query_parameters)
+
+    try:
+        query_job = client.query(query, job_config=job_config)
+        results = query_job.result()
+
+        # Build markdown table with public dataset schema
+        markdown_table = "| ID | Title | Category | Price | Brand |\n"
+        markdown_table += "|---|---|---|---|---|\n"
+
+        product_count = 0
+        for row in results:
+            product_id = row.id
+            title = row.Title
+            cat = row.Category if row.Category else "N/A"
+            price = f"${row.Price:.2f}" if row.Price else "N/A"
+            brand_name = row.Brand
+
+            markdown_table += f"| {product_id} | {title} | {cat} | {price} | {brand_name} |\n"
+            product_count += 1
+
+        if product_count == 0:
+            category_note = f" in category '{category}'" if category else ""
+            return f"No products found for brand '{brand}'{category_note}. Try popular brands like: Nike, Adidas, Levi's, Calvin Klein, Columbia, or Puma."
+
+        return markdown_table
+
+    except Exception as e:
+        return f"Error querying public dataset: {str(e)}. Make sure GOOGLE_CLOUD_PROJECT is set in your .env file."
diff --git a/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/tools/serp_connector.py b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/tools/serp_connector.py
new file mode 100644
index 00000000..9d19fd90
--- /dev/null
+++ b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/brand_search_optimization/tools/serp_connector.py
@@ -0,0 +1,123 @@
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +""" +SerpAPI connector for reliable competitor data extraction. +Uses Google Shopping API to avoid bot detection and get structured data. +""" + +import os +from typing import List, Dict, Union, Optional + +def get_competitor_products(keyword: str, exclude_brand: Optional[str] = None) -> Union[List[Dict[str, str]], Dict[str, str]]: + """ + Get competing products using SerpAPI Google Shopping. + + Args: + keyword: Search term (e.g., "Nike Active") + exclude_brand: Brand to filter out from results (e.g., "Nike") + + Returns: + List of product dicts with 'title', 'price', 'source' keys, or error dict + """ + api_key = os.getenv("SERPAPI_KEY") + + # Check if API key is configured + if not api_key or api_key == "your_api_key_here_optional": + return { + "error": "SerpAPI key not configured", + "message": "Set SERPAPI_KEY in .env for production-quality competitor data. 
Get free API key (100 searches/month) at https://serpapi.com/" + } + + try: + from serpapi import GoogleSearch + + print(f"📡 SerpAPI: Querying Google Shopping for '{keyword}'") + + params = { + "engine": "google_shopping", + "q": keyword, + "api_key": api_key, + "num": 30, # Get more results to filter (need extra for 10 after brand filtering) + "hl": "en", + "gl": "us" + } + + search = GoogleSearch(params) + results = search.get_dict() + + shopping_results = results.get("shopping_results", []) + + if not shopping_results: + return { + "error": "No shopping results found", + "message": f"SerpAPI returned no results for '{keyword}'" + } + + competitors = [] + for item in shopping_results: + title = item.get("title", "") + + # Skip if title is empty + if not title or len(title) < 5: + continue + + # Filter out the target brand if specified + if exclude_brand and exclude_brand.lower() in title.lower(): + print(f" ⊖ Filtered out: {title[:50]}...") + continue + + price = item.get("price", item.get("extracted_price", "N/A")) + source = item.get("source", "Unknown") + + # Extract keywords from title (split by common separators) + keywords = [] + # Remove special characters and split + cleaned_title = title.replace("-", " ").replace("|", " ").replace(",", " ") + words = cleaned_title.split() + # Get meaningful words (filter out very short ones) + keywords = [w.strip() for w in words if len(w.strip()) > 2] + + competitors.append({ + "title": title, + "price": str(price), + "keywords": keywords, + "source": source + }) + + print(f" ✓ Found: {title[:60]}... 
- {price}")
+
+            if len(competitors) >= 10:
+                break
+
+        if not competitors:
+            return {
+                "error": "No competitor products found",
+                "message": f"All results were from '{exclude_brand}' or filtered out"
+            }
+
+        print(f"✅ SerpAPI: Successfully extracted {len(competitors)} competitor products")
+        return competitors
+
+    except ImportError:
+        return {
+            "error": "SerpAPI library not installed",
+            "message": "Run: pip install google-search-results"
+        }
+    except Exception as e:
+        print(f"❌ SerpAPI error: {str(e)}")
+        return {
+            "error": "SerpAPI query failed",
+            "message": str(e)
+        }
diff --git a/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/deployment/__init__.py b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/deployment/__init__.py
new file mode 100644
index 00000000..0a2669d7
--- /dev/null
+++ b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/deployment/__init__.py
@@ -0,0 +1,13 @@
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
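`get_competitor_products()` deliberately returns one of two shapes: a list of product dicts on success, or a dict with `error`/`message` keys on failure, so callers must branch on the shape before formatting. A hypothetical caller-side formatter (not part of the sample) could look like this:

```python
def summarize_competitors(result):
    """Format get_competitor_products() output, which is either a list of
    product dicts or an error dict with 'error'/'message' keys."""
    if isinstance(result, dict) and "error" in result:
        return f"Lookup failed: {result['message']}"
    # Success path: number each product on its own line
    lines = [f"{i}. {p['title']} - {p['price']}" for i, p in enumerate(result, 1)]
    return "\n".join(lines)
```

This mirrors the check `isinstance(competitors, dict) and "error" in competitors` already used by the search-results agent when it consumes the connector.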
diff --git a/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/deployment/bq_data_setup.sql b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/deployment/bq_data_setup.sql new file mode 100644 index 00000000..02ebafe0 --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/deployment/bq_data_setup.sql @@ -0,0 +1,34 @@ +-- Create the table in BigQuery +CREATE TABLE IF NOT EXISTS your_project_id.your_dataset_id.products ( + Title STRING, + Description STRING, + Attributes STRING, + Brand STRING +); +-- Notes: +-- - Replace your_project_id with your Google Cloud Project ID. +-- - Replace your_dataset_id with the name of your BigQuery Dataset. + +-- Insert data into the table +INSERT INTO your_project_id.your_dataset_id.products + (Title, + Description, + Attributes, + Brand) +VALUES ('Kids\' Joggers', +'Comfortable and supportive running shoes for active kids. Breathable mesh upper keeps feet cool, while the durable outsole provides excellent traction.' + , +'Size: 10 Toddler, Color: Blue/Green', +'BSOAgentTestBrand'), + ('Light-Up Sneakers', +'Fun and stylish sneakers with light-up features that kids will love. Supportive and comfortable for all-day play.' + , +'Size: 13 Toddler, Color: Silver', +'BSOAgentTestBrand'), + ('School Shoes', +'Versatile and comfortable shoes perfect for everyday wear at school. Durable construction with a supportive design.' + , +'Size: 12 Preschool, Color: Black', +'BSOAgentTestBrand'); +-- Notes: +-- - Ensure the project and dataset IDs match the ones used in the CREATE TABLE statement. 
\ No newline at end of file
diff --git a/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/deployment/bq_populate_data.py b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/deployment/bq_populate_data.py
new file mode 100644
index 00000000..20cde8b4
--- /dev/null
+++ b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/deployment/bq_populate_data.py
@@ -0,0 +1,100 @@
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from google.cloud import bigquery
+
+from brand_search_optimization.shared_libraries import constants
+
+PROJECT = constants.PROJECT
+LOCATION = constants.LOCATION
+DATASET_ID = constants.DATASET_ID
+TABLE_ID = constants.TABLE_ID
+
+client = bigquery.Client(project=PROJECT)
+
+# Sample data to insert
+data_to_insert = [
+    {
+        "Title": "Kids' Joggers",
+        "Description": "Comfortable and supportive running shoes for active kids. Breathable mesh upper keeps feet cool, while the durable outsole provides excellent traction.",
+        "Attributes": "Size: 10 Toddler, Color: Blue/Green",
+        "Brand": "BSOAgentTestBrand",
+    },
+    {
+        "Title": "Light-Up Sneakers",
+        "Description": "Fun and stylish sneakers with light-up features that kids will love. Supportive and comfortable for all-day play.",
+        "Attributes": "Size: 13 Toddler, Color: Silver",
+        "Brand": "BSOAgentTestBrand",
+    },
+    {
+        "Title": "School Shoes",
+        "Description": "Versatile and comfortable shoes perfect for everyday wear at school.
Durable construction with a supportive design.",
+        "Attributes": "Size: 12 Preschool, Color: Black",
+        "Brand": "BSOAgentTestBrand",
+    },
+]
+
+
+def create_dataset_if_not_exists():
+    """Resets the dataset: deletes it (and its contents) if present, then recreates it."""
+    # Construct the fully qualified dataset ID.
+    dataset_id = f"{client.project}.{DATASET_ID}"
+    dataset = bigquery.Dataset(dataset_id)
+    dataset.location = "US"
+    client.delete_dataset(
+        dataset_id, delete_contents=True, not_found_ok=True
+    )  # Make an API request.
+    dataset = client.create_dataset(dataset)  # Make an API request.
+    print(f"Created dataset {client.project}.{dataset.dataset_id}")
+    return dataset
+
+
+def populate_bigquery_table():
+    """Populates a BigQuery table with the provided data."""
+    dataset_ref = create_dataset_if_not_exists()
+    if not dataset_ref:
+        return
+
+    # Define the schema based on the CREATE TABLE statement
+    schema = [
+        bigquery.SchemaField("Title", "STRING"),
+        bigquery.SchemaField("Description", "STRING"),
+        bigquery.SchemaField("Attributes", "STRING"),
+        bigquery.SchemaField("Brand", "STRING"),
+    ]
+    table_id = f"{PROJECT}.{DATASET_ID}.{TABLE_ID}"
+    table = bigquery.Table(table_id, schema=schema)
+    client.delete_table(table_id, not_found_ok=True)  # Make an API request.
+    print(f"Deleted table '{table_id}' (if it existed).")
+    table = client.create_table(table)  # Make an API request.
+    print(f"Created table {PROJECT}.{table.dataset_id}.{table.table_id}")
+
+    errors = client.insert_rows_json(table=table, json_rows=data_to_insert)
+
+    if not errors:
+        print(
+            f"Successfully inserted {len(data_to_insert)} rows into {PROJECT}.{DATASET_ID}.{TABLE_ID}"
+        )
+    else:
+        print("Errors occurred while inserting rows:")
+        for error in errors:
+            print(error)
+
+
+if __name__ == "__main__":
+    populate_bigquery_table()
+    print(
+        "\n--- Instructions on how to add permissions to the BQ table are in the customization.md file ---"
+    )
diff --git a/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/deployment/deploy.py b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/deployment/deploy.py
new file mode 100644
index 00000000..3146eb72
--- /dev/null
+++ b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/deployment/deploy.py
@@ -0,0 +1,112 @@
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+ +"""Deployment script for Brand Search Optimization agent.""" + +import vertexai +from absl import app, flags +from vertexai import agent_engines +from vertexai.preview.reasoning_engines import AdkApp + +from brand_search_optimization.agent import root_agent +from brand_search_optimization.shared_libraries import constants + +FLAGS = flags.FLAGS +flags.DEFINE_string("project_id", None, "GCP project ID.") +flags.DEFINE_string("location", None, "GCP location.") +flags.DEFINE_string("bucket", None, "GCP bucket.") +flags.DEFINE_string("resource_id", None, "ReasoningEngine resource ID.") +flags.DEFINE_bool("create", False, "Create a new agent.") +flags.DEFINE_bool("delete", False, "Delete an existing agent.") +flags.mark_bool_flags_as_mutual_exclusive(["create", "delete"]) + + +def create(env_vars: dict) -> None: + adk_app = AdkApp( + agent=root_agent, + enable_tracing=True, + ) + + extra_packages = ["./brand_search_optimization"] + + remote_agent = agent_engines.create( + adk_app, + requirements=[ + "google-adk>=1.0.0,<2.0.0", + "google-cloud-aiplatform[agent_engines]>=1.93.0", + "pydantic", + "requests", + "python-dotenv", + "google-genai", + "selenium", + "webdriver-manager", + "google-cloud-bigquery", + "absl-py", + "pillow", + ], + extra_packages=extra_packages, + env_vars=env_vars, + ) + print(f"Created remote agent: {remote_agent.resource_name}") + + +def delete(resource_id: str) -> None: + remote_agent = agent_engines.get(resource_id) + remote_agent.delete(force=True) + print(f"Deleted remote agent: {resource_id}") + + +def main(argv: list[str]) -> None: + project_id = FLAGS.project_id if FLAGS.project_id else constants.PROJECT + location = FLAGS.location if FLAGS.location else constants.LOCATION + bucket = FLAGS.bucket if FLAGS.bucket else constants.STAGING_BUCKET + env_vars = {} + + print(f"PROJECT: {project_id}") + print(f"LOCATION: {location}") + print(f"BUCKET: {bucket}") + + if not project_id: + print("Missing required environment variable: 
GOOGLE_CLOUD_PROJECT") + return + elif not location: + print("Missing required environment variable: GOOGLE_CLOUD_LOCATION") + return + elif not bucket: + print( + "Missing required environment variable: GOOGLE_CLOUD_STORAGE_BUCKET" + ) + return + + env_vars["DISABLE_WEB_DRIVER"] = "1" + + vertexai.init( + project=project_id, + location=location, + staging_bucket=f"gs://{bucket}", + ) + + if FLAGS.create: + create(env_vars) + elif FLAGS.delete: + if not FLAGS.resource_id: + print("resource_id is required for delete") + return + delete(FLAGS.resource_id) + else: + print("Unknown command") + + +if __name__ == "__main__": + app.run(main) diff --git a/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/deployment/eval.sh b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/deployment/eval.sh new file mode 100644 index 00000000..2126dc93 --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/deployment/eval.sh @@ -0,0 +1,45 @@ +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +set -x + +prepare(){ + touch __init__.py + export PYTHONPATH=:. 
+} + +remove_selenium(){ + rm -rf selenium +} + +run_eval(){ + adk eval \ + brand_search_optimization \ + eval/data/eval_data1.evalset.json \ + --config_file_path eval/data/test_config.json +} + +main(){ + echo " + You must be inside the brand-search-optimization dir, then run: + # sh deployment/eval.sh + " + prepare + remove_selenium + run_eval +} + +main + +exit 0 diff --git a/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/deployment/run.sh b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/deployment/run.sh new file mode 100644 index 00000000..9abd5e46 --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/deployment/run.sh @@ -0,0 +1,44 @@ +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License.
+ +set -x +set -e + +# Determine the directory where this script resides +SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd ) +# Assume the project root directory is one level up from the script's directory +ROOT_DIR=$(dirname "$SCRIPT_DIR") + +install_prereqs(){ + echo "--- Changing to root directory ($ROOT_DIR) to install prerequisites ---" + # Execute poetry install within a subshell, changing directory first + (cd "$ROOT_DIR" && poetry install) + echo "--- Prerequisites installation finished ---" +} + +populate_bq_data(){ + echo "--- Changing to root directory ($ROOT_DIR) to populate BigQuery data ---" + # Execute the python script from the root directory within a subshell + (cd "$ROOT_DIR" && python -m deployment.bq_populate_data) + echo "--- BigQuery population finished ---" +} + +main(){ + install_prereqs + populate_bq_data +} + +main + +exit 0 diff --git a/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/deployment/test.sh b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/deployment/test.sh new file mode 100644 index 00000000..cac786ce --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/deployment/test.sh @@ -0,0 +1,26 @@ +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +set -x + +run_unit_tests(){ + # Make sure you are inside brand-search-optimization directory + # And ENABLE_UNIT_TEST_MODE=1 in .env + export PYTHONPATH="$PYTHONPATH:."
+ pytest tests/ +} + +run_unit_tests + +exit 0 \ No newline at end of file diff --git a/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/deployment/test_deployment.py b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/deployment/test_deployment.py new file mode 100644 index 00000000..b55f2392 --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/deployment/test_deployment.py @@ -0,0 +1,119 @@ +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +"""Test deployment of the Brand Search Optimization agent to Agent Engine.""" + +import asyncio +import os + +import vertexai +from absl import app, flags +from dotenv import load_dotenv +from google.adk.sessions import VertexAiSessionService +from vertexai import agent_engines + +FLAGS = flags.FLAGS + +flags.DEFINE_string("project_id", None, "GCP project ID.") +flags.DEFINE_string("location", None, "GCP location.") +flags.DEFINE_string("bucket", None, "GCP bucket.") +flags.DEFINE_string( + "resource_id", + None, + "ReasoningEngine resource ID (returned after deploying the agent)", +) +flags.DEFINE_string("user_id", None, "User ID (can be any string).") +flags.mark_flag_as_required("resource_id") +flags.mark_flag_as_required("user_id") + + +def main(argv: list[str]) -> None: # pylint: disable=unused-argument + load_dotenv() + + # Flags take precedence; fall back to environment variables. + project_id = ( + FLAGS.project_id + if FLAGS.project_id + else os.getenv("GOOGLE_CLOUD_PROJECT") + ) + location = ( + FLAGS.location if FLAGS.location else os.getenv("GOOGLE_CLOUD_LOCATION") + ) + bucket = ( + FLAGS.bucket + if FLAGS.bucket + else os.getenv("GOOGLE_CLOUD_STORAGE_BUCKET") + ) + + if not project_id: + print("Missing required environment variable: GOOGLE_CLOUD_PROJECT") + return + elif not location: + print("Missing required environment variable: GOOGLE_CLOUD_LOCATION") + return + elif not bucket: + print( + "Missing required environment variable: GOOGLE_CLOUD_STORAGE_BUCKET" + ) + return + + vertexai.init( + project=project_id, + location=location, + staging_bucket=f"gs://{bucket}", + ) + + session_service = VertexAiSessionService(project_id, location) + session = asyncio.run( + session_service.create_session( + app_name=FLAGS.resource_id, user_id=FLAGS.user_id + ) + ) + + agent = agent_engines.get(FLAGS.resource_id) + print(f"Found agent with resource ID: {FLAGS.resource_id}") + + 
print(f"Created session for user ID: {FLAGS.user_id}") + print("Type 'quit' to exit.") + while True: + user_input = input("Input: ") + if user_input == "quit": + break + + for event in agent.stream_query( + user_id=FLAGS.user_id, session_id=session.id, message=user_input + ): + if "content" in event: + if "parts" in event["content"]: + parts = event["content"]["parts"] + for part in parts: + if "text" in part: + text_part = part["text"] + print(f"Response: {text_part}") + + asyncio.run( + session_service.delete_session( + app_name=FLAGS.resource_id, + user_id=FLAGS.user_id, + session_id=session.id, + ) + ) + print(f"Deleted session for user ID: {FLAGS.user_id}") + + +if __name__ == "__main__": + app.run(main) diff --git a/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/eval/__init__.py b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/eval/__init__.py new file mode 100644 index 00000000..0a2669d7 --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/eval/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
diff --git a/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/eval/data/eval_data1.evalset.json b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/eval/data/eval_data1.evalset.json new file mode 100644 index 00000000..df0a1ea8 --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/eval/data/eval_data1.evalset.json @@ -0,0 +1,121 @@ +[ + { + "name": "eval_data_set_google_shopping", + "data": [ + { + "query": "hello", + "expected_tool_use": [], + "expected_intermediate_agent_responses": [], + "reference": "Hi there! To get started, could you please provide the brand name you'd like to optimize for?" + }, + { + "query": "BSOAgentTestBrand", + "expected_tool_use": [ + { + "tool_name": "transfer_to_agent", + "tool_input": { + "agent_name": "keyword_finding_agent" + } + }, + { + "tool_name": "get_product_details_for_brand", + "tool_input": {} + } + ], + "expected_intermediate_agent_responses": [ + { + "author": "brand_search_optimization", + "text": "Okay, great. Now that I have the brand name, I'll start by finding some relevant keywords.\n\n" + } + ], + "reference": "| Title | Description | Attributes | Brand |\n|---|---|---|---|\n| Kids' Joggers | Comfortable and supportive running shoes for active kids. Breathable mesh upper keeps feet cool, while the durable outsole provides excellent traction. | Size: 10 Toddler, Color: Blue/Green | BSOAgentTestBrand |\n| Light-Up Sneakers | Fun and stylish sneakers with light-up features that kids will love. Supportive and comfortable for all-day play. | Size: 13 Toddler, Color: Silver | BSOAgentTestBrand |\n| School Shoes | Versatile and comfortable shoes perfect for everyday wear at school. Durable construction with a supportive design. 
| Size: 12 Preschool, Color: Black | BSOAgentTestBrand |\n\nHere are some keywords shoppers might use:\n\n* running shoes\n* active shoes\n* kids shoes\n* sneakers\n* light up sneakers\n* school shoes\n* toddler shoes\n* preschool shoes\n* joggers\n* kids joggers\n\nHere is the ranked list\n\n| Keywords |\n|---|---|\n| running shoes |\n| active shoes |\n| kids shoes |\n| sneakers |\n| light up sneakers |\n| school shoes |\n| toddler shoes |\n| preschool shoes |\n| joggers |\n| kids joggers |\n" + }, + { + "query": "can you search websites", + "expected_tool_use": [ + { + "tool_name": "transfer_to_agent", + "tool_input": { + "agent_name": "search_results_agent" + } + } + ], + "expected_intermediate_agent_responses": [ + { + "author": "keyword_finding_agent", + "text": "I am designed to find keywords for a given brand based on product details. To search websites for information, I would need to transfer you to a different agent. Would you like me to do that?\n" + } + ], + "reference": "Which website do you want to visit?\n" + }, + { + "query": "google shopping", + "expected_tool_use": [], + "expected_intermediate_agent_responses": [], + "reference": "What keyword are you looking for?\n" + }, + { + "query": "kids shoes", + "expected_tool_use": [ + { + "tool_name": "go_to_url", + "tool_input": { + "url": "https://www.google.com/search?q=kids shoes" + } + }, + { + "tool_name": "click_element_with_text", + "tool_input": { + "text": "Shopping" + } + }, + { + "tool_name": "get_page_source", + "tool_input": {} + } + ], + "expected_intermediate_agent_responses": [], + "reference": "Okay, I have analyzed the webpage and here are the top 3 products that matched your search query:\n\n* **Nike Kids\\' Flex Runner 3 Shoes**\n* **Kids adidas Racer TR23 Shoes Kids**\n* **Nike Kids\\' Court Borough Low Recraft Shoes**\n\nCould you please tell me if you want more information on any of these products or if you have another question?\n" + }, + { + "query": "this is good", + 
"expected_tool_use": [ + { + "tool_name": "transfer_to_agent", + "tool_input": { + "agent_name": "brand_search_optimization" + } + }, + { + "tool_name": "transfer_to_agent", + "tool_input": { + "agent_name": "comparison_root_agent" + } + }, + { + "tool_name": "transfer_to_agent", + "tool_input": { + "agent_name": "comparison_generator_agent" + } + }, + { + "tool_name": "transfer_to_agent", + "tool_input": { + "agent_name": "comparison_generator_agent" + } + } + ], + "expected_intermediate_agent_responses": [ + { + "author": "brand_search_optimization", + "text": "Great! Now that I have the search results for \"kids shoes,\" I will run a comparison of the titles.\n\n" + } + ], + "reference": "I can compare the titles gathered from the search results with the titles of the products for the brand \"BSOAgentTestBrand\".\nHere's a comparison report:\n\n| **BSOAgentTestBrand Titles** | **Search Result Titles** | **Analysis & Suggestions** |\n|---|---|---|\n| Kids' Joggers | Nike Kids' Flex Runner 3 Shoes | **Missing Keywords:** \"Nike\" \"Flex Runner 3\" ; Consider including competitor brands or related keywords like 'sneakers' or 'running shoes' in the title. |\n| Light-Up Sneakers | Kids adidas Racer TR23 Shoes Kids | **Missing Keywords:** \"Adidas\" \"Racer TR23\" \"Shoes\"; Including specific model names can improve targeting of users searching for those models.|\n| School Shoes | Nike Kids' Court Borough Low Recraft Shoes | **Missing Keywords:** \"Nike\" \"Court Borough\" \"Low\" \"Recraft\" ; Consider a title that more closely matches common search terms for the specific product type. 
|\n\n**Summary of Suggestions:**\n\n* **Incorporate Brand Names:** Add brand names (Nike, Adidas) to the titles.\n* **Include Model Names:** Use specific model names (Flex Runner 3, Racer TR23, Court Borough, Low, Recraft) in the titles.\n* **Add Keywords:** Consider adding keywords like \"sneakers,\" \"running shoes,\" \"light up,\" or \"school\" to titles where relevant to improve general search visibility.\n" + } + ], + "initial_state": { + "session": {} + } + } +] \ No newline at end of file diff --git a/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/eval/data/test_config.json b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/eval/data/test_config.json new file mode 100644 index 00000000..8fb2ff88 --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/eval/data/test_config.json @@ -0,0 +1,6 @@ +{ + "criteria": { + "tool_trajectory_avg_score": 0.2, + "response_match_score": 0.2 + } +} \ No newline at end of file diff --git a/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/eval/test_eval.py b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/eval/test_eval.py new file mode 100644 index 00000000..2e26ddf5 --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/eval/test_eval.py @@ -0,0 +1,38 @@ +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import pathlib + +import dotenv +import pytest +from google.adk.evaluation.agent_evaluator import AgentEvaluator + +pytest_plugins = ("pytest_asyncio",) + + +@pytest.fixture(scope="session", autouse=True) +def load_env(): + dotenv.load_dotenv() + + +@pytest.mark.asyncio +async def test_all(): + """Test the agent's basic ability on a few examples.""" + await AgentEvaluator.evaluate( + agent_module="brand_search_optimization", + eval_dataset_file_path_or_dir=str( + pathlib.Path(__file__).parent / "data" + ), + num_runs=1, + ) diff --git a/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/pyproject.toml b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/pyproject.toml new file mode 100644 index 00000000..8c3fb921 --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/pyproject.toml @@ -0,0 +1,54 @@ +[project] +name = "brand-search-optimization" +version = "0.1.0" +description = "Brand Search Optimization ADK agent designed to enhance product titles for retail brand search. 
It retrieves top keywords, performs searches, and analyzes top results to provide suggestions for enriching product titles." +authors = [{ name = "Nikhil Kulkarni", email = "nikhilkul@google.com" }] +license = {text = "Apache License 2.0"} +readme = "README.md" +requires-python = ">=3.11" + +dependencies = [ + "google-genai>=1.5.0,<2.0.0", + "selenium>=4.30.0,<5.0.0", + "webdriver-manager>=4.0.2,<5.0.0", + "google-cloud-bigquery>=3.31.0,<4.0.0", + "absl-py>=2.2.2,<3.0.0", + "google-cloud-aiplatform[agent-engines]>=1.93.0,<2.0.0", + "pillow>=11.1.0,<12.0.0", + "google-adk[a2a]>=1.0.0,<2.0.0", + "beautifulsoup4>=4.12.0,<5.0.0", + "lxml>=5.0.0,<6.0.0", + "google-search-results>=2.4.2,<3.0.0", + "uvicorn>=0.32.0,<1.0.0", +] + +[dependency-groups] +dev = [ + "google-adk[eval]>=1.0.0", + "pytest-asyncio>=0.26.0", + "pytest>=8.3.5", +] + +[tool.ruff] +# Point back to the master template in the repo root +extend = "../../../pyproject.toml" + +[tool.ruff.lint] +# Ignore the C901 function-complexity rule +ignore = ["C901"] + +[tool.ruff.lint.per-file-ignores] +"__init__.py" = ["F401"] + + +[tool.ruff.lint.isort] +# Update "brand_search_optimization" to match your actual folder name if it differs +known-first-party = ["brand_search_optimization", "app"] + +[tool.hatch.build.targets.wheel] +# Ensures Hatchling bundles the correct source directories +packages = ["brand_search_optimization", "app"] + +[build-system] +requires = ["hatchling"] +build-backend = "hatchling.build" \ No newline at end of file diff --git a/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/run_a2a.py b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/run_a2a.py new file mode 100644 index 00000000..c47edea1 --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/run_a2a.py @@ -0,0 +1,174 @@ +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +""" +A2A Entry Point - Exposes the Brand Search Optimization Agent via Agent-to-Agent Protocol. + +This file wraps the existing agent with the A2A adapter, automatically creating: +- Agent Card at .well-known/agent-card.json (for agent discovery) +- REST API endpoints (for Copilot Studio and other A2A clients) +- Session management (for multi-turn conversations) + +Usage: + python run_a2a.py + +This will start a server on http://localhost:8080 that exposes the agent +via the Agent-to-Agent protocol, making it discoverable by Copilot Studio +and other A2A-compatible clients. +""" + +import os +import logging +import httpx +import uvicorn +from dotenv import load_dotenv +from starlette.middleware.cors import CORSMiddleware +from starlette.middleware.base import BaseHTTPMiddleware +from starlette.requests import Request +from starlette.applications import Starlette + +from a2a.types import AgentCapabilities +from a2a.server.apps import A2AStarletteApplication +from a2a.server.request_handlers import DefaultRequestHandler +from a2a.server.tasks import InMemoryTaskStore, InMemoryPushNotificationConfigStore, BasePushNotificationSender + +from google.adk.a2a.executor.a2a_agent_executor import A2aAgentExecutor +from google.adk.a2a.utils.agent_card_builder import AgentCardBuilder +from google.adk.runners import Runner +from google.adk.artifacts import InMemoryArtifactService +from google.adk.sessions import InMemorySessionService +from google.adk.memory import InMemoryMemoryService +from google.adk.auth.credential_service.in_memory_credential_service import 
InMemoryCredentialService + +from brand_search_optimization.agent import root_agent + +# Load .env file +load_dotenv() + +# Enable verbose logging +logging.basicConfig(level=logging.DEBUG) +logger = logging.getLogger("a2a_server") + + +class RequestLoggingMiddleware(BaseHTTPMiddleware): + """Log all incoming requests with details for debugging.""" + async def dispatch(self, request: Request, call_next): + body = b"" + if request.method in ("POST", "PUT", "PATCH"): + body = await request.body() + logger.info( + f"[REQ] {request.method} {request.url.path} " + f"from {request.client.host if request.client else 'unknown'} " + f"Content-Type: {request.headers.get('content-type', 'N/A')}" + ) + if body: + # Truncate large bodies for readability + body_str = body.decode("utf-8", errors="replace")[:2000] + logger.info(f"[BODY] Request body: {body_str}") + response = await call_next(request) + logger.info(f"[RSP] Response: {response.status_code} for {request.method} {request.url.path}") + return response + +# A2A agent card URL configuration +# When using Dev Tunnel or a public deployment, set these so the agent card +# advertises the correct reachable URL to Copilot Studio. +a2a_host = os.getenv("A2A_HOST", "localhost") +a2a_port = int(os.getenv("A2A_PORT", "8080")) +a2a_protocol = os.getenv("A2A_PROTOCOL", "http") +rpc_url = f"{a2a_protocol}://{a2a_host}:{a2a_port}/" + +# --- Build A2A components with SSE streaming + push notifications enabled --- + +# 1. Runner factory — creates ADK runner with in-memory services +async def create_runner() -> Runner: + return Runner( + app_name=root_agent.name or "adk_agent", + agent=root_agent, + artifact_service=InMemoryArtifactService(), + session_service=InMemorySessionService(), + memory_service=InMemoryMemoryService(), + credential_service=InMemoryCredentialService(), + ) + +# 2. 
Task store & push notification config store (in-memory) +task_store = InMemoryTaskStore() +push_config_store = InMemoryPushNotificationConfigStore() + +# 3. Agent executor (bridges ADK agent ↔ A2A protocol) +agent_executor = A2aAgentExecutor(runner=create_runner) + +# 3b. Push notification sender — POSTs task updates to registered webhook URLs +httpx_client = httpx.AsyncClient(timeout=30.0) +push_sender = BasePushNotificationSender( + httpx_client=httpx_client, + config_store=push_config_store, +) + +# 4. Request handler — handles message/send, message/stream, push config RPCs +request_handler = DefaultRequestHandler( + agent_executor=agent_executor, + task_store=task_store, + push_config_store=push_config_store, + push_sender=push_sender, +) + +# 5. Agent card with capabilities — SSE streaming + push notifications enabled +agent_card_builder = AgentCardBuilder( + agent=root_agent, + rpc_url=rpc_url, + capabilities=AgentCapabilities( + streaming=True, # Enable SSE via message/stream + push_notifications=True, # Enable push notifications + state_transition_history=True, # Expose task state change history + ), +) + +# 6. 
Build Starlette app with A2A routes +app = Starlette() + +async def setup_a2a(): + """Build agent card and register A2A routes on startup.""" + agent_card = await agent_card_builder.build() + logger.info(f"[OK] Agent card built: capabilities={agent_card.capabilities}") + a2a_app = A2AStarletteApplication( + agent_card=agent_card, + http_handler=request_handler, + ) + a2a_app.add_routes_to_app(app) + +app.add_event_handler("startup", setup_a2a) + +# Add CORS middleware so Copilot Studio's browser frontend can fetch the agent card +app.add_middleware( + CORSMiddleware, + allow_origins=["*"], + allow_methods=["*"], + allow_headers=["*"], +) + +# Add request logging middleware for debugging +app.add_middleware(RequestLoggingMiddleware) + +if __name__ == "__main__": + # Start the A2A server + # - Serves agent card at /.well-known/agent-card.json + # - Exposes REST endpoints for A2A protocol + # - Handles session management automatically + port = int(os.getenv("PORT", "8080")) + print(f"\n[*] Starting A2A Server on http://localhost:{port}") + print(f"[i] Agent Card available at: http://localhost:{port}/.well-known/agent-card.json") + print(f"[i] Agent Card URL advertised: {a2a_protocol}://{a2a_host}:{a2a_port}") + print(f"[>] Use this URL in Copilot Studio to connect to this agent\n") + + uvicorn.run(app, host="0.0.0.0", port=port) diff --git a/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/tests/unit/test_tools.py b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/tests/unit/test_tools.py new file mode 100644 index 00000000..a769abf4 --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/adk-agent/tests/unit/test_tools.py @@ -0,0 +1,65 @@ +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""Unit tests for tools""" + +from unittest.mock import MagicMock, patch + +from google.adk.tools import ToolContext + +from brand_search_optimization.shared_libraries import constants +from brand_search_optimization.tools import bq_connector + + +class TestBrandSearchOptimization: + @patch("brand_search_optimization.tools.bq_connector.client") + def test_get_product_details_for_brand_success(self, mock_client): + # Mock ToolContext + mock_tool_context = MagicMock(spec=ToolContext) + mock_tool_context.user_content.parts = [MagicMock(text="cymbal")] + + # Mock BigQuery results + mock_row1 = MagicMock( + title="cymbal Air Max", + description="Comfortable running shoes", + attribute="Size: 10, Color: Blue", + brand="cymbal", + ) + mock_row2 = MagicMock( + title="cymbal Sportswear T-Shirt", + description="Cotton blend, short sleeve", + attribute="Size: L, Color: Black", + brand="cymbal", + ) + mock_row3 = MagicMock( + title="neuravibe Pro Training Shorts", + description="Moisture-wicking fabric", + attribute="Size: M, Color: Gray", + brand="neuravibe", + ) + mock_results = [mock_row1, mock_row2, mock_row3] + + # Mock QueryJob and its result + mock_query_job = MagicMock() + mock_query_job.result.return_value = mock_results + mock_client.query.return_value = mock_query_job + + # Mock constants + with patch.object(constants, "PROJECT", "test_project"): + with patch.object(constants, "TABLE_ID", "test_table"): + # Call the function + markdown_output = bq_connector.get_product_details_for_brand( + mock_tool_context + ) + assert "neuravibe Pro" not in 
markdown_output diff --git a/samples/python/m365-agents-sdk-a2a-patterns/docs/A2A_PATTERNS.md b/samples/python/m365-agents-sdk-a2a-patterns/docs/A2A_PATTERNS.md new file mode 100644 index 00000000..fa790806 --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/docs/A2A_PATTERNS.md @@ -0,0 +1,321 @@ +# A2A Transport Patterns + +This document explains the four Agent-to-Agent (A2A) transport patterns implemented in this reference, with protocol-level details and sequence diagrams. + +--- + +## Overview + +| Pattern | Transport | Latency | Use Case | +|---------|-----------|---------|----------| +| **Sync (Ping)** | HTTP POST → JSON response | Blocks until complete | Simple queries, testing | +| **SSE Streaming** | HTTP POST → `text/event-stream` | Real-time chunks | UX with progress updates | +| **Push Notification** | HTTP POST → webhook callback | Async, fire-and-forget | Background tasks, long jobs | +| **Webhook Status** | GET → accumulated events | On-demand polling | Monitoring, audit trails | + +--- + +## Pattern 1: Synchronous (message/send) + +**How it works**: Client sends a JSON-RPC request, server blocks until the full response is ready, then returns it. 
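A minimal client for this pattern can be sketched as follows. This is illustrative only, not part of the sample: `build_send_request` is a hypothetical helper, and the `httpx` dependency and the `http://localhost:8080/` endpoint are assumptions for a locally running A2A server.

```python
import uuid


def build_send_request(text: str) -> dict:
    # Hypothetical helper: assembles the JSON-RPC "message/send" envelope
    # in the shape shown in the request example below.
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "parts": [{"text": text}],
                "messageId": str(uuid.uuid4()),
            }
        },
    }


if __name__ == "__main__":
    # Requires `pip install httpx` and a running A2A server (assumed port 8080).
    import httpx

    payload = build_send_request("Analyze Nike brand keywords")
    # The POST blocks until the task reaches a terminal state.
    reply = httpx.post("http://localhost:8080/", json=payload, timeout=120).json()
    print(reply["result"]["status"]["state"])
```

The single blocking `httpx.post` call is the whole exchange: one request envelope in, one completed task out.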
+ +``` +Client ADK Agent + │ │ + │─── POST / ────────────────────▶│ + │ {"jsonrpc":"2.0", │ + │ "method":"message/send", │ + │ "params":{"message":{...}}}│ + │ │── Agent processes ──▶ + │ │ │ + │◀── 200 OK ─────────────────────│◀────────────────────│ + │ {"result":{"id":"task-1", │ + │ "status":{"state":"completed"}, + │ "artifacts":[{...}]}} │ +``` + +**JSON-RPC Request**: +```json +{ + "jsonrpc": "2.0", + "id": "req-001", + "method": "message/send", + "params": { + "message": { + "role": "user", + "parts": [{"text": "Analyze Nike brand keywords"}], + "messageId": "msg-001" + } + } +} +``` + +**JSON-RPC Response**: +```json +{ + "jsonrpc": "2.0", + "id": "req-001", + "result": { + "id": "task-abc123", + "status": { + "state": "completed", + "message": { + "role": "agent", + "parts": [{"text": "Here is the brand analysis..."}] + } + }, + "artifacts": [ + { + "parts": [{"text": "Detailed analysis content..."}] + } + ] + } +} +``` + +**When to use**: Simple request-response patterns, testing, or when the client can afford to wait. + +--- + +## Pattern 2: SSE Streaming (message/stream) + +**How it works**: Client sends a request; server responds with a `text/event-stream` that delivers incremental status updates as the agent progresses through states. 
+ +``` +Client ADK Agent + │ │ + │─── POST / ────────────────────▶│ + │ {"method":"message/stream"} │ + │ │ + │◀── SSE: submitted ────────────│ + │◀── SSE: working ──────────────│ + │◀── SSE: working (artifact) ───│ + │◀── SSE: completed ────────────│ + │◀── [stream closed] ───────────│ +``` + +**SSE Event Format**: +``` +data: {"jsonrpc":"2.0","id":"req-001","result":{"id":"task-xyz","status":{"state":"submitted"}}} + +data: {"jsonrpc":"2.0","id":"req-001","result":{"id":"task-xyz","status":{"state":"working","message":{"role":"agent","parts":[{"text":"Searching..."}]}}}} + +data: {"jsonrpc":"2.0","id":"req-001","result":{"id":"task-xyz","status":{"state":"completed"},"artifacts":[...]}} +``` + +**Key differences from sync**: +- Method is `message/stream` instead of `message/send` +- Response is `Content-Type: text/event-stream` +- Multiple events arrive incrementally +- Client sees state transitions in real time + +**When to use**: UIs that need progress indicators, long-running analyses, or when you want to show partial results. + +--- + +## Pattern 3: Push Notification (webhook callback) + +**How it works**: Client registers a webhook URL alongside the message. Server processes asynchronously and POSTs the result to the webhook when done. 
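The registration payload can be built programmatically. A hedged sketch — `build_push_config_request` is a hypothetical helper, and the webhook URL is whatever your receiver actually listens on:

```python
def build_push_config_request(task_id: str, webhook_url: str) -> dict:
    # Registers a webhook for an already-created task. Note the `tasks/`
    # method prefix and the `taskId` field name used by the A2A library.
    return {
        "jsonrpc": "2.0",
        "id": f"push-config-{task_id}",
        "method": "tasks/pushNotificationConfig/set",
        "params": {
            "taskId": task_id,
            "pushNotificationConfig": {"url": webhook_url},
        },
    }
```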
+ +``` +Client ADK Agent Webhook Server + │ │ │ + │─── POST / (message/send) ──────▶│ │ + │ + pushNotificationConfig │ │ + │ {url: "http://host/webhook"}│ │ + │ │ │ + │◀── 200 OK (task submitted) ────│ │ + │ │ │ + │ │── Process async ──▶ │ + │ │ │ + │ │── POST /webhook ────▶│ + │ │ {task result} │ + │ │ │ + │─── GET /webhook/status ────────────────────────────▶│ + │◀── {notifications received} ───────────────────────│ +``` + +**Registration** (inline with message): +```json +{ + "jsonrpc": "2.0", + "id": "req-push", + "method": "message/send", + "params": { + "message": { + "role": "user", + "parts": [{"text": "Analyze Adidas brand"}], + "messageId": "msg-push" + }, + "pushNotificationConfig": { + "url": "http://localhost:3978/a2a/webhook", + "authentication": null + } + } +} +``` + +**Alternatively, register after task creation**: +```json +{ + "jsonrpc": "2.0", + "id": "req-push-config", + "method": "tasks/pushNotificationConfig/set", + "params": { + "taskId": "task-abc123", + "pushNotificationConfig": { + "url": "http://localhost:3978/a2a/webhook" + } + } +} +``` + +> **Note**: The A2A library uses the `tasks/` prefix for the method name (not just `pushNotificationConfig/set`), and the field is `taskId` (not `id`). + +**Webhook Payload** (what the server POSTs to your callback): +```json +{ + "jsonrpc": "2.0", + "method": "tasks/pushNotification/send", + "params": { + "taskId": "task-abc123", + "status": { + "state": "completed", + "message": { + "role": "agent", + "parts": [{"text": "Analysis complete..."}] + } + } + } +} +``` + +**When to use**: Long-running tasks, batch processing, or when the client shouldn't hold an HTTP connection open. + +--- + +## Pattern 4: Webhook Status (polling) + +**How it works**: After registering push notifications, the client can poll the webhook receiver to see all accumulated notifications. 
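The receiver's job is small: append every payload POSTed to the webhook URL and serve the accumulated list back on demand. A minimal in-memory sketch, assuming the webhook payload shape shown under Pattern 3 (the class and method names are illustrative):

```python
from datetime import datetime, timezone

class NotificationStore:
    """In-memory log of push notifications, as a webhook receiver might keep."""

    def __init__(self) -> None:
        self._items: list[dict] = []

    def record(self, payload: dict) -> None:
        # Called for each POST the agent makes to the webhook URL.
        params = payload.get("params", {})
        status = params.get("status", {})
        self._items.append({
            "taskId": params.get("taskId"),
            "state": status.get("state"),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "has_message": "message" in status,
        })

    def snapshot(self) -> dict:
        # The response body served when the client polls the receiver.
        return {"total": len(self._items), "notifications": list(self._items)}

store = NotificationStore()
store.record({"params": {"taskId": "task-abc", "status": {"state": "working"}}})
store.record({"params": {"taskId": "task-abc",
                         "status": {"state": "completed", "message": {"role": "agent"}}}})
print(store.snapshot()["total"])  # 2
```

In this sample the receiver lives on the client agent's `/a2a/webhook` route; any HTTP framework can drive the same store.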
+ +``` +Client Webhook Server + │ │ + │─── GET /a2a/webhook ──────────▶│ + │ │ + │◀── 200 OK ─────────────────────│ + │ {"total": 3, │ + │ "notifications": [...]} │ +``` + +**Response**: +```json +{ + "total": 2, + "notifications": [ + { + "taskId": "task-abc", + "state": "working", + "timestamp": "2025-01-15T10:30:00Z" + }, + { + "taskId": "task-abc", + "state": "completed", + "timestamp": "2025-01-15T10:30:07Z", + "has_message": true + } + ] +} +``` + +**When to use**: Monitoring dashboards, audit logs, or debugging push notification delivery. + +--- + +## Agent Card Discovery + +Before using any pattern, clients should discover the agent's capabilities: + +``` +GET /.well-known/agent-card.json +``` + +**Response**: +```json +{ + "name": "Brand Search Optimization Agent", + "description": "Analyzes brand keywords, search rankings, and competitive positioning", + "url": "http://localhost:8080", + "version": "1.0.0", + "capabilities": { + "streaming": true, + "pushNotifications": true, + "stateTransitionHistory": true + }, + "skills": [ + { + "id": "brand-search-optimization", + "name": "Brand Search Optimization", + "description": "Analyzes brand search performance..." + } + ] +} +``` + +The `capabilities` object tells the client which patterns are available. A well-behaved client should check these before attempting streaming or push notifications. 
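That pre-flight check is only a few lines: `message/send` is part of the core protocol, while the other patterns are gated on the card's flags. A sketch (the helper name is illustrative):

```python
def supported_patterns(card: dict) -> list[str]:
    """Map an agent card's capabilities object to the usable A2A patterns."""
    caps = card.get("capabilities", {})
    patterns = ["sync"]  # message/send is always available
    if caps.get("streaming"):
        patterns.append("stream")  # message/stream (SSE)
    if caps.get("pushNotifications"):
        patterns.append("push")    # webhook callbacks
    return patterns

card = {"capabilities": {"streaming": True, "pushNotifications": True}}
print(supported_patterns(card))  # ['sync', 'stream', 'push']
```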
+ +--- + +## State Machine + +All patterns share the same task state machine: + +``` +submitted → working → completed + ↘ failed + ↘ canceled + ↘ input-required +``` + +- **submitted**: Task received and queued +- **working**: Agent actively processing (may appear multiple times with SSE) +- **completed**: Final response ready with artifacts +- **failed**: Processing error occurred +- **canceled**: Task was canceled +- **input-required**: Agent needs more information from the user (multi-turn) + +--- + +## Implementation Notes + +### CLI Test Client (`cli_test.py`) + +The CLI client implements all 4 patterns in ~250 lines of Python. Key implementation details: + +- **httpx** for HTTP client (async-capable, streaming-capable) +- **SSE parsing**: Manual line-by-line parsing of `data:` prefixed events +- **Push notifications**: Registers inline `pushNotificationConfig` with the message +- **Webhook server**: Runs as part of the A2A client agent on `/a2a/webhook` + +### A2A Client Library (`brand_intelligence_advisor/tools/a2a_client.py`) + +The reusable A2A client wraps all patterns into clean async methods: + +```python +from brand_intelligence_advisor.tools.a2a_client import A2AClient + +client = A2AClient("http://localhost:8080") + +# Discover +card = await client.discover() + +# Sync (ping) +result = await client.send_message("Analyze Nike") + +# Stream (SSE) +async for event in client.stream_message("Analyze Nike"): + print(event.text) + +# Push (webhook) +result = await client.send_with_push("Analyze Nike", "http://localhost:3978/a2a/webhook") +``` diff --git a/samples/python/m365-agents-sdk-a2a-patterns/docs/ARCHITECTURE.md b/samples/python/m365-agents-sdk-a2a-patterns/docs/ARCHITECTURE.md new file mode 100644 index 00000000..dbbc2bcf --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/docs/ARCHITECTURE.md @@ -0,0 +1,250 @@ +# Architecture Documentation + +## System Overview + +The Brand Search Optimization Agent is a multi-agent system built with 
Google ADK that analyzes retail product SEO and provides optimization recommendations. + +## Agent Architecture + +### Root Agent Orchestration + +``` +Root Agent (brand_search_optimization) +├── BigQuery Connector Tool +│ └── Retrieves product data from public dataset +│ +├── Keyword Finding Agent +│ └── Extracts high-value keywords from product titles +│ +├── Search Results Agent +│ ├── SerpAPI Connector Tool (primary) +│ └── Web Scraping via Selenium (fallback) +│ +└── Comparison Root Agent + ├── Generator Agent + │ └── Analyzes keyword usage patterns + └── Critic Agent + └── Provides actionable SEO recommendations +``` + +## Multi-Agent Design Pattern + +**Pattern:** Router Agent + +The root agent acts as an intelligent router, directing the conversation through specialized sub-agents based on workflow state: + +1. **Category Selection** → BigQuery Tool +2. **Keyword Extraction** → Keyword Finding Agent +3. **Competitor Research** → Search Results Agent +4. **SEO Analysis** → Comparison Root Agent + +## Data Flow + +``` +┌─────────────┐ +│ User │ +│ "Nike │ +│ socks" │ +└──────┬──────┘ + │ + ▼ +┌──────────────────────────┐ +│ Root Agent │ +│ (Orchestration) │ +└──────┬───────────────────┘ + │ + ▼ +┌──────────────────────────┐ +│ BigQuery Tool │ +│ Query: brand="Nike" │ +│ category="Socks" │ +└──────┬───────────────────┘ + │ + ▼ Products (13 items) +┌──────────────────────────┐ +│ Keyword Finding Agent │ +│ Extract: "moisture │ +│ wicking", "cushioned" │ +└──────┬───────────────────┘ + │ + ▼ Keywords +┌──────────────────────────┐ +│ Search Results Agent │ +│ ├─ Try: SerpAPI │ +│ └─ Fallback: Selenium │ +└──────┬───────────────────┘ + │ + ▼ Competitor data +┌──────────────────────────┐ +│ Comparison Agent │ +│ ├─ Generator: Patterns │ +│ └─ Critic: SEO Tips │ +└──────┬───────────────────┘ + │ + ▼ +┌─────────────┐ +│ User │ +│ SEO Report │ +└─────────────┘ +``` + +## A2A Integration + +### Protocol Implementation + +The agent exposes the A2A protocol via the 
simplified `to_a2a()` utility:
+
+```python
+# adk-agent/run_a2a.py
+from google.adk.a2a.utils.agent_to_a2a import to_a2a
+
+root_agent = agent_config.build_root_agent()
+app = to_a2a(root_agent)
+```
+
+### Endpoints
+
+| Endpoint | Purpose | Method |
+|----------|---------|--------|
+| `/.well-known/agent-card.json` | Agent metadata | GET |
+| `/` | JSON-RPC messaging (`message/send`, `message/stream`) | POST |
+| `/health` | Health check | GET |
+
+### Agent Card Structure
+
+```json
+{
+  "name": "Brand Search Optimization Agent",
+  "description": "Analyzes product SEO and provides optimization recommendations",
+  "url": "http://localhost:8080",
+  "version": "1.0.0",
+  "capabilities": {
+    "streaming": true,
+    "pushNotifications": true,
+    "stateTransitionHistory": true
+  }
+}
+```
+
+## Technology Stack
+
+### Core Framework
+- **Google ADK** 1.23.0 - Multi-agent orchestration
+- **Gemini 2.0 Flash** - LLM (Forever Free tier)
+- **Uvicorn** - ASGI server for A2A protocol
+
+### Data Sources
+- **BigQuery** - Public dataset (thelook_ecommerce)
+- **SerpAPI** - Production competitor data
+- **Selenium** - Web scraping fallback
+
+### Dependencies
+```toml
+google-generativeai = "^0.8.3"
+google-adk = {extras = ["a2a"], version = "^1.23.0"}
+google-cloud-bigquery = "^3.27.0"
+uvicorn = "^0.32.1"
+google-search-results = "^2.4.2"
+selenium = "^4.27.1"
+```
+
+## State Management
+
+### Conversation State
+The agent maintains state across turns using ADK's built-in session management:
+
+- **User context**: Brand, category, keywords
+- **Workflow stage**: Category selection → Keywords → Competitors → Report
+- **Previous responses**: Enables "continue" command
+
+### Stateless A2A
+Each A2A request includes full conversation history in the payload, making the server stateless for horizontal scaling. 
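One way to picture that statelessness: the client folds earlier turns into each outgoing request, so any server replica can answer without consulting a session store. A sketch of such a request builder; folding history into extra text parts is an assumption for illustration, not necessarily this sample's exact wire format:

```python
def build_send_request(history: list[dict], req_id: str, message_id: str) -> dict:
    """Build a message/send request that carries the whole conversation.

    `history` is a list of {"role": ..., "text": ...} turns; the last entry
    is the new user message, and earlier turns are prefixed with their role.
    """
    parts = [{"text": f'{turn["role"]}: {turn["text"]}'} for turn in history[:-1]]
    parts.append({"text": history[-1]["text"]})
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "message/send",
        "params": {
            "message": {"role": "user", "parts": parts, "messageId": message_id},
        },
    }

req = build_send_request(
    [{"role": "user", "text": "Analyze Nike"},
     {"role": "agent", "text": "Which category?"},
     {"role": "user", "text": "Socks"}],
    "req-002", "msg-002",
)
print(len(req["params"]["message"]["parts"]))  # 3
```

Because every request is self-contained, a load balancer can route each one to any replica.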
+ +## Performance Characteristics + +### Timing Breakdown +- Category selection: 3-5s +- Keyword extraction: 5-8s +- Competitor search: 10-15s +- SEO analysis: 7-10s + +**Total workflow:** 25-35s + +### Resource Usage +- **Memory:** ~200MB per request +- **CPU:** Minimal (LLM calls are external) +- **Network:** 2-5MB per full workflow + +### Scaling Considerations +- **Stateless design** → Easy horizontal scaling +- **BigQuery caching** → Reduces query costs +- **SerpAPI rate limits** → 100 free searches/month +- **Free tier limits** → 1500 Gemini requests/day + +## Security Architecture + +### Secrets Management +- API keys in `.env` (never committed) +- Environment variables in production +- `.gitignore` protects sensitive files + +### Network Security +- Dev Tunnel for local testing only +- HTTPS enforced in production +- CORS configured for Copilot Studio + +All credentials are loaded from `.env` files (gitignored). See env templates for required variables. + +## Deployment Patterns + +### Local Development +``` +User → test_demo.py / cli_test.py → Client Agent → A2A → ADK Agent → Tools +``` + +### M365 Integration +``` +User → Teams / WebChat → M365 Agents SDK → A2A → ADK Agent → Tools +``` + +### Production +``` +User → Teams → Azure Bot Service → Client Agent → HTTPS → Cloud Run → ADK Agent +``` + +## A2A Client Agent Architecture + +The client agent uses Semantic Kernel for LLM orchestration: + +``` +brand_intelligence_advisor/ +├── agent.py # M365 SDK AgentApplication, message handlers +├── orchestrator.py # SK ChatCompletionAgent + BrandToolsPlugin +├── prompt.py # System prompt (advisor persona) +├── server.py # aiohttp server (M365 + webhook endpoints) +└── tools/ + ├── a2a_client.py # A2A protocol client (ping, stream, push) + └── brand_advisor.py # Domain knowledge, query parsing, formatting +``` + +### SK Tool Flow +1. User message → `agent.py` → `orchestrator.process_message()` +2. SK reasons about intent → calls `@kernel_function` tools +3. 
`analyze_brand()` → `A2AClient.send_message()` / `.stream_message()` / `.send_with_push()` +4. SK synthesizes raw data into strategic response + +## Cost Optimization + +### Free Tier Strategy +- **Gemini:** 1500 requests/day (Forever Free) +- **BigQuery:** Public dataset (no cost) +- **SerpAPI:** 100 searches/month free + +**Result:** 1000+ brand audits/day at $0 cost + +## References + +- [A2A Protocol Specification](https://google.github.io/a2a/#/) +- [Google ADK Documentation](https://google.github.io/adk-docs/) +- [M365 Agents SDK](https://pypi.org/project/microsoft-agents-hosting-core/) +- [Semantic Kernel](https://pypi.org/project/semantic-kernel/) +- [BigQuery Public Datasets](https://cloud.google.com/bigquery/public-data) +- [SerpAPI Documentation](https://serpapi.com/docs) diff --git a/samples/python/m365-agents-sdk-a2a-patterns/env.example b/samples/python/m365-agents-sdk-a2a-patterns/env.example new file mode 100644 index 00000000..537b3fb4 --- /dev/null +++ b/samples/python/m365-agents-sdk-a2a-patterns/env.example @@ -0,0 +1,29 @@ +# Choose Model Backend: 0 -> ML Dev, 1 -> Vertex +GOOGLE_GENAI_USE_VERTEXAI=0 + +# ML Dev backend config +# Visit https://aistudio.google.com to get this key +GOOGLE_API_KEY= + +# Vertex backend config +# Only a project ID is required - we use Google's public BigQuery dataset! +GOOGLE_CLOUD_PROJECT=YOUR_PROJECT_ID_HERE +GOOGLE_CLOUD_LOCATION=us-central1 +# Using gemini-2.0-flash for faster responses with zero-billing (Google's "Forever Free" tier) +MODEL="gemini-2.0-flash" + +# No dataset setup needed! 
Uses public dataset: bigquery-public-data.thelook_ecommerce +# Test with brands like: Nike, Adidas, Levi's, Calvin Klein, Columbia, Puma + +# IMPORTANT: Setting this flag to 1 will disable web driver +# Set to 0 to enable real Google Shopping scraping with Firefox +# Install Firefox first: winget install Mozilla.Firefox +DISABLE_WEB_DRIVER=0 + +# SerpAPI: Production-quality competitor data (no bot detection) +# Get free API key (100 searches/month) at: https://serpapi.com/ +# Leave empty to use web scraping fallback (may hit CAPTCHAs) +SERPAPI_KEY= + +# Staging bucket name for ADK agent deployment to Vertex AI Agent Engine (Do not include "gs://" for your bucket.) +STAGING_BUCKET=YOUR_VALUE_HERE