Project Title: EchoVoice: Customer Personalization Orchestrator
Challenge Solved: Compliant, on-brand personalization and A/B/n experimentation in a regulated domain.
EchoVoice is a multi-agent AI personalization platform designed for regulated industries. It delivers safe, on-brand, traceable customer messaging through a coordinated set of specialized agents working together inside a transparent and auditable orchestration pipeline.
This repository provides a prototype scaffold for local development — including a LangGraph-based orchestrator, agent suite, mock retrieval (text-target) data, and a frontend stub for auditability.
EchoVoice is built to demonstrate how modern enterprises can combine retrieval, guardrails, multi-agent orchestration, audit trails, and compliance automation to deliver personalization safely at scale.
“AI personalization is powerful — but unsafe when uncontrolled.” AI models hallucinate, violate regulation, drift off-brand, or generate misleading claims. This prototype shows how to control AI-powered personalization through enforcement architecture.
EchoVoice solves four major industry problems:
- Regulatory risk — customers must never receive false, unapproved, non-compliant messaging.
- Brand drift & inconsistency — large models generate tone-unsafe or off-brand copy.
- Lack of traceability — compliance teams require citations, evidence, logs, and overrides.
- Experimentation bottlenecks — brands need fast A/B/n testing but with safety guarantees.
EchoVoice addresses all four using a structured, multi-agent LangGraph design.
EchoVoice implements a 4-phase closed-loop orchestration pipeline, with agents collaborating to generate, inspect, correct, and approve personalized messages:
- Segmentation → Pick best segment for the user’s goal.
- Retrieval → Pull grounded facts from product knowledge.
- Generation + Compliance → Produce message variants and enforce safety.
- Experimentation + Feedback → Score variants, pick winner, close the loop.
Each phase is implemented as an independent LangGraph, all connected in a master orchestrator.
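The four-phase flow can be sketched in plain Python. The function names and payload shapes below are illustrative stand-ins for the actual LangGraph subgraphs in the repo, not its real API:

```python
# Plain-Python sketch of the four-phase loop; each function stands in for
# one LangGraph subgraph. All names and payload shapes are illustrative.

def run_segmentation(goal: str) -> dict:
    # Phase 1: pick the single best segment for the campaign goal.
    return {"segment": "lapsed_buyers", "reason": f"RFM match for goal '{goal}'"}

def run_retrieval(segment: dict) -> dict:
    # Phase 2: pull grounded, citable product facts for that segment.
    return {"context": "Product X ships free over $50.", "citations": ["kb/doc1"]}

def run_generation_and_compliance(segment: dict, facts: dict) -> list:
    # Phase 3: draft A/B/n variants, keep only policy-compliant ones.
    draft = f"For {segment['segment']}: {facts['context']}"
    return [draft]

def run_experiment(variants: list) -> str:
    # Phase 4: score variants and return the winner.
    return max(variants, key=len)

def orchestrate(goal: str) -> str:
    seg = run_segmentation(goal)
    facts = run_retrieval(seg)
    variants = run_generation_and_compliance(seg, facts)
    return run_experiment(variants)
```

In the real system each of these steps is an independent graph with its own internal loops; this sketch only shows how the master orchestrator passes state between phases.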
This repository demonstrates EchoVoice — a compliance-first, multi-agent personalization orchestrator. It illustrates how retrieval-augmented generation (RAG) workflows, model orchestration, and audit trails can be combined to produce safe, on‑brand customer messaging.
The prototype uses Azure OpenAI Service (example model: gpt-4.1-mini) together with Azure AI Search for indexing and retrieval. The repo includes sample data and mocked services so you can run the prototype locally and inspect retrieval sources, model outputs, and the associated audit metadata.
- Chat (multi-turn) and Q&A (single turn) interfaces
- Inline citations and model-thought metadata rendering
- Built-in UI controls for experimental parameters
- Azure AI Search indexing and retrieval
- Multimodal support (via optional features)
- Optional speech input/output
- Optional Microsoft Entra authentication
- Application Insights tracing
- End-to-end demonstrator of a compliance-first personalization system
The complete four-phase LangGraph architecture maps the conceptual design into four distinct, interconnected graphs that handle conditional logic, RAG, compliance, and feedback.
The summary below walks through the end-to-end workflow, showing how the four phases link together to form a closed-loop system:
[📺 Goal Driven Segmentation](https://slidesgpt.com/presentation/oluXpOrd0wwdBKIhoxvu)
- Goal: Determine the single, most relevant segment for the current campaign goal.
- Starting Point: The graph begins at the Goal Router node, which receives the campaign objective.
- Key Logic: Conditional Routing based on the goal (e.g., RFM for churn, Intent for real-time). Only one specialized agent runs.
- Output: The Priority & Output node provides the final, definitive `prioritized_segment` and its `segment_description` (the explainable reason).
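The goal-based routing in Phase 1 can be sketched as a simple dispatch table. The goal names, agents, and segment labels here are hypothetical examples, not the repo's actual configuration:

```python
# Hypothetical sketch of Phase 1's conditional routing: the campaign goal
# selects exactly one specialized segmentation agent.

SEGMENTATION_AGENTS = {
    "churn_prevention": lambda: (
        "at_risk_high_value", "RFM: high spend, no purchase in 90 days"),
    "real_time_offer": lambda: (
        "browsing_intent", "Intent: active session on product pages"),
}

def goal_router(goal: str) -> dict:
    agent = SEGMENTATION_AGENTS.get(goal)
    if agent is None:
        raise ValueError(f"No segmentation agent registered for goal '{goal}'")
    segment, description = agent()  # only one specialized agent runs
    return {"prioritized_segment": segment, "segment_description": description}
```

The explicit mapping keeps routing auditable: a compliance reviewer can see exactly which agent handled which goal.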
- Goal: Retrieve accurate, citable product information relevant to the segment's needs.
- Key Logic: Corrective RAG Loop.
- The Contextual Query Generator translates the segment description into search terms.
- The Relevance Grader checks the retrieved documents.
- Conditional Loop: If documents are irrelevant, the flow routes to Self-Correction (Rewrite Query) and loops back for a retry.
- Output: The Citation Formatter provides the final, clean `content_context` and a list of `citation_sources`.
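The corrective RAG loop can be sketched as a bounded retry around retrieval and grading. The `retrieve`, `grade`, and `rewrite` callables are injected stand-ins for the real agents:

```python
# Minimal sketch of the corrective RAG loop: grade retrieved documents and
# rewrite the query on a miss, with a bounded number of retries.

def corrective_rag(query, retrieve, grade, rewrite, max_retries=2):
    for _ in range(max_retries + 1):
        docs = retrieve(query)
        relevant = [d for d in docs if grade(query, d)]
        if relevant:
            # Citation formatting: join content, keep source ids.
            return {
                "content_context": " ".join(d["text"] for d in relevant),
                "citation_sources": [d["source"] for d in relevant],
            }
        query = rewrite(query)  # self-correction: retry with a new query
    return {"content_context": "", "citation_sources": []}
```

Bounding the retries matters in production: an unbounded self-correction loop can stall the whole pipeline on an unanswerable query.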
- Goal: Create personalized message variants and ensure 100% adherence to brand safety policy.
- Key Logic: Mandatory Safety Loop.
- The AI Message Generator creates A/B/n variants using the segment and the citable content.
- The Safety & Compliance Agent checks the variants against the policy rule engine.
- Conditional Loop: If the message is NOT compliant, the flow routes to the Automated Rewrite Node and loops back to the Compliance Agent for a re-check.
- Output: An approved list of safe and personalized message variants.
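A toy version of the mandatory safety loop, assuming a banned-term check as a stand-in for the real policy rule engine (the rewrite step is equally simplified):

```python
# Toy sketch of the mandatory safety loop: every variant must pass the
# compliance check; non-compliant ones are rewritten and re-checked.

BANNED_TERMS = ("guaranteed", "risk-free")

def is_compliant(message: str) -> bool:
    return not any(term in message.lower() for term in BANNED_TERMS)

def auto_rewrite(message: str) -> str:
    # Replace each banned claim with a hedged phrase.
    for term in BANNED_TERMS:
        message = message.lower().replace(term, "may help deliver")
    return message

def approve_variants(variants, max_rewrites: int = 3):
    approved = []
    for v in variants:
        for _ in range(max_rewrites + 1):
            if is_compliant(v):
                approved.append(v)
                break
            v = auto_rewrite(v)  # loop back to the compliance check
    return approved
```

Variants that still fail after the rewrite budget is exhausted are dropped rather than shipped, which is the safe default for a regulated domain.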
- Goal: Simulate performance to select the winning message and feed performance data back for continuous improvement.
- Key Logic: Simulation and Dual Exit.
- The A/B/n Experiment Simulator predicts the CTR and Conversion Lift for each variant.
- The Winning Variant Selector chooses the best-performing message.
- Output & Exit: The Feedback Processor splits the workflow into two paths:
- Deployment Queue: Sends the winning, approved message to the external system for live use.
- Feedback Loop: Sends the structured performance data and segment details back to Phase 1 or Phase 3 for model retraining and optimization.
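Phase 4's dual exit can be sketched as one function that both selects a winner and emits a feedback record. The scoring heuristic below is a deterministic stand-in for a real CTR/conversion predictor:

```python
# Illustrative Phase 4 sketch: score each variant, pick the winner, then
# split the output into a deployment payload and a feedback record.

def simulate_ctr(variant: str) -> float:
    # Toy heuristic (shorter messages score higher); a real simulator
    # would predict CTR and conversion lift from historical data.
    return 1.0 / (1 + len(variant))

def run_experiment(variants, segment: str) -> dict:
    scored = {v: simulate_ctr(v) for v in variants}
    winner = max(scored, key=scored.get)
    return {
        "deployment_queue": {"message": winner, "segment": segment},
        "feedback": {"segment": segment, "scores": scored},  # to Phase 1/3
    }
```

Keeping the deployment payload and the feedback record as separate outputs is what closes the loop: one goes to the external channel, the other back into segmentation and generation.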
This closed-loop system represents a powerful, enterprise-grade AI solution and maps the LangGraph diagrams into an operational pipeline for safe, auditable personalization.
IMPORTANT: In order to deploy and run this example, you'll need:
- Azure account. If you're new to Azure, get an Azure account for free and you'll get some free Azure credits to get started. See guide to deploying with the free trial.
- Azure account permissions:
  - Your Azure account must have `Microsoft.Authorization/roleAssignments/write` permissions, such as Role Based Access Control Administrator, User Access Administrator, or Owner. If you don't have subscription-level permissions, you must be granted RBAC for an existing resource group and deploy to that existing group.
  - Your Azure account also needs `Microsoft.Resources/deployments/write` permissions on the subscription level.
Pricing varies per region and usage, so it isn't possible to predict exact costs for your usage. However, you can try the Azure pricing calculator for the resources below.
- Azure Container Apps: Default host for app deployment as of 10/28/2024. See more details in the ACA deployment guide. Consumption plan with 1 CPU core, 2 GB RAM, minimum of 0 replicas. Pricing with Pay-as-You-Go. Pricing
- Azure Container Registry: Basic tier. Pricing
- Azure App Service: Only provisioned if you deploy to Azure App Service following the App Service deployment guide. Basic Tier with 1 CPU core, 1.75 GB RAM. Pricing per hour. Pricing
- Azure OpenAI: Standard tier, GPT and Ada models. Pricing per 1K tokens used, and at least 1K tokens are used per question. Pricing
- Azure AI Document Intelligence: S0 (Standard) tier using pre-built layout. Pricing per document page, sample documents have 261 pages total. Pricing
- Azure AI Search: Basic tier, 1 replica, free level of semantic search. Pricing per hour. Pricing
- Azure Blob Storage: Standard tier with ZRS (Zone-redundant storage). Pricing per storage and read operations. Pricing
- Azure Cosmos DB: Only provisioned if you enabled chat history with Cosmos DB. Serverless tier. Pricing per request unit and storage. Pricing
- Azure AI Vision: Only provisioned if you enabled multimodal approach. Pricing per 1K transactions. Pricing
- Azure AI Content Understanding: Only provisioned if you enabled media description. Pricing per 1K images. Pricing
- Azure Monitor: Pay-as-you-go tier. Costs based on data ingested. Pricing
To reduce costs, you can switch to free SKUs for various services, but those SKUs have limitations. See this guide on deploying with minimal costs for more details.
To avoid unnecessary costs, remember to take down your app if it's no longer in use by running `azd down`.
You have a few options for setting up this project. The easiest way to get started is GitHub Codespaces, since it will set up all the tools for you, but you can also set it up locally if desired.
You can run this repo virtually by using GitHub Codespaces, which will open a web-based VS Code in your browser:
Once the codespace opens (this may take several minutes), open a terminal window.
A related option is VS Code Dev Containers, which will open the project in your local VS Code using the Dev Containers extension:
- Start Docker Desktop (install it if not already installed)
- In the VS Code window that opens, once the project files show up (this may take several minutes), open a terminal window.
- Install the required tools:
  - Azure Developer CLI
  - Python 3.10, 3.11, 3.12, 3.13, or 3.14
    - Important: Python and the pip package manager must be in the path in Windows for the setup scripts to work.
    - Important: Ensure you can run `python --version` from console. On Ubuntu, you might need to run `sudo apt install python-is-python3` to link `python` to `python3`.
  - Node.js 20+
  - Git
  - Powershell 7+ (pwsh) - For Windows users only.
    - Important: Ensure you can run `pwsh.exe` from a PowerShell terminal. If this fails, you likely need to upgrade PowerShell.
- Create a new folder and switch to it in the terminal.
- Run this command to download the project code:

  `azd init -t azure-search-openai-demo`

  Note that this command will initialize a git repository, so you do not need to clone this repository.
The steps below will provision Azure resources and deploy the application code to Azure Container Apps. To deploy to Azure App Service instead, follow the app service deployment guide.
- Login to your Azure account:

  `azd auth login`

  For GitHub Codespaces users, if the previous command fails, try:

  `azd auth login --use-device-code`
- Create a new azd environment:

  `azd env new`

  Enter a name that will be used for the resource group. This will create a new folder in the `.azure` folder, and set it as the active environment for any calls to `azd` going forward.
- (Optional) This is the point where you can customize the deployment by setting environment variables, in order to use existing resources, enable optional features (such as auth or vision), deploy low-cost options, or deploy with the Azure free trial.
- Run `azd up` - This will provision Azure resources and deploy this sample to those resources, including building the search index based on the files found in the `./data` folder.
  - Important: Beware that the resources created by this command will incur immediate costs, primarily from the AI Search resource. These resources may accrue costs even if you interrupt the command before it is fully executed. You can run `azd down` or delete the resources manually to avoid unnecessary spending.
  - You will be prompted to select two locations, one for the majority of resources and one for the OpenAI resource, which is currently a short list. That location list is based on the OpenAI model availability table and may become outdated as availability changes.
- After the application has been successfully deployed, you will see a URL printed to the console. Click that URL to interact with the application in your browser.
NOTE: It may take 5-10 minutes after you see 'SUCCESS' for the application to be fully deployed. If you see a "Python Developer" welcome screen or an error page, then wait a bit and refresh the page.
If you've only changed the backend/frontend code in the app folder, then you don't need to re-provision the Azure resources. You can just run:

`azd deploy`

If you've changed the infrastructure files (infra folder or azure.yaml), then you'll need to re-provision the Azure resources. You can do that by running:

`azd up`

You can only run a development server locally after having successfully run the `azd up` command. If you haven't yet, follow the deploying steps above.
- Run `azd auth login` if you have not logged in recently.
- Start the server:
  - Windows: `./app/start.ps1`
  - Linux/Mac: `./app/start.sh`
  - VS Code: Run the "VS Code Task: Start App" task.
It's also possible to enable hot reloading or the VS Code debugger. See more tips in the local development guide.
- In Azure: navigate to the Azure WebApp deployed by azd. The URL is printed out when azd completes (as "Endpoint"), or you can find it in the Azure portal.
- Running locally: navigate to 127.0.0.1:8000
Once in the web app:
- Try the different features of the customer personalization orchestrator, such as multi-turn chat and single-turn Q&A
- Explore citations and sources
- Click on "settings" to try different options, tweak prompts, etc.
To clean up all the resources created by this sample:
- Run `azd down`
- When asked if you are sure you want to continue, enter `y`
- When asked if you want to permanently delete the resources, enter `y`
The resource group and all the resources will be deleted.
You can find extensive documentation in the docs folder:
- Deploying:
- Local development
- Customizing the app
- App architecture
- HTTP Protocol
- Data ingestion
- Evaluation
- Safety evaluation
- Monitoring with Application Insights
- Productionizing
- Alternative retrieval chat samples



