This repository contains a production-ready, reusable template for building powerful, data-driven AI agents. It's designed for rapid development and deployment of customized bots for any client, using a Retrieval-Augmented Generation (RAG) architecture with a Knowledge Graph and Vector Store.
- Dynamic RAG Architecture: Combines a Neo4j Knowledge Graph for precise, factual queries (like rules and fees) and a Supabase Vector Store for general, contextual information retrieval.
- Configuration-Driven: Onboard new clients by changing configuration files (`.env`, `settings.py`) and adding data, not by rewriting the core logic.
- Pluggable Persona: Easily define and modify the bot's name, personality, and core instructions in a simple text file (`agent/persona.prompt`).
- Modular Custom Tools: Extend the bot's capabilities by adding custom Python functions for business logic (e.g., booking appointments, capturing leads, escalating to support).
- Automated Data Ingestion: A smart script processes client-provided Markdown files, automatically chunks the data, and populates both the graph and vector databases.
- Production-Ready API: Built with FastAPI, including API key security, asynchronous processing, and concurrency limiting to handle real-world traffic.
- Powered by Google Gemini: Leverages Google's `gemini-2.5-flash` model for agent reasoning and `embedding-001` for high-quality vector embeddings.
- Backend: Python, FastAPI
- AI Orchestration: LangChain
- LLM & Embeddings: Google Gemini
- Databases:
- Neo4j (Graph Database)
- Supabase (PostgreSQL with pgvector for Vector Storage)
- Deployment: Uvicorn, Procfile for Heroku/Render
The project is organized into modules with a clear separation of concerns, making it easy to maintain and scale.
```
/zappies-ai-bot-template/
|
├── 📂 agent/ # Core AI agent logic and persona
| ├── 📄 agent_factory.py # Builds the agent executor
| └── 📄 persona.prompt # <-- EDIT THIS: Define bot's personality
|
├── 📂 api/ # FastAPI server and endpoints
| └── 📄 server.py # API logic, security, and chat endpoint
|
├── 📂 config/ # Centralized application settings
| └── 📄 settings.py # Loads and manages configuration
|
├── 📂 data/ # <-- ADD FILES HERE: Client's knowledge base
| └── 📄 example.md # Add your client's markdown files here
|
├── 📂 ingestion/ # Data processing and loading scripts
| └── 📄 ingest.py # Script to populate the databases
|
├── 📂 tools/ # Custom capabilities for the bot
| ├── 📄 action_schemas.py # <-- EDIT THIS: Pydantic models for tool inputs
| └── 📄 custom_tools.py # <-- EDIT THIS: Python functions for business logic
|
├── 📄 .env.example # Template for environment variables
├── 📄 .gitignore
├── 📄 main.py # Main entry point to run the API
├── 📄 Procfile # For deployment to services like Heroku
├── 📄 README.md
└── 📄 requirements.txt # Python dependencies
```
Follow these steps to get your first bot running.
- Python 3.9+
- Access to Google AI Studio, Supabase, and Neo4j (AuraDB is a great cloud option).
```bash
git clone <your-repository-url>
cd zappies-ai-bot-template
```
```bash
# For Windows
python -m venv venv
venv\Scripts\activate

# For macOS/Linux
python3 -m venv venv
source venv/bin/activate
```
```bash
pip install -r requirements.txt
```
- Copy the example environment file:

  ```bash
  cp .env.example .env
  ```

- Open the `.env` file and fill in the credentials for your services:
  - `API_SECRET_KEY`: A strong, random key you create to protect your API.
  - `GOOGLE_API_KEY`: Your API key from Google AI Studio.
  - `SUPABASE_URL` & `SUPABASE_SERVICE_KEY`: From your Supabase project's API settings.
  - `NEO4J_URI`, `NEO4J_USERNAME`, `NEO4J_PASSWORD`: From your Neo4j database instance.
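For reference, a filled-in `.env` usually ends up looking like the sketch below. All values are placeholders; treat the variable names in your `.env.example` as the source of truth.

```env
API_SECRET_KEY=replace-with-a-long-random-string
GOOGLE_API_KEY=your-google-ai-studio-key
SUPABASE_URL=https://your-project-id.supabase.co
SUPABASE_SERVICE_KEY=your-supabase-service-role-key
NEO4J_URI=neo4j+s://your-instance-id.databases.neo4j.io
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=your-neo4j-password
```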
This is the standard workflow for configuring the template for a new client.
- Delete the `example.md` file inside the `/data/` directory.
- Add all of your client's knowledge base documents as Markdown (`.md`) files inside the `/data/` directory.
This script will read your Markdown files, process them, and load them into your Supabase and Neo4j databases.
- From the root of the project, run:
  ```bash
  python -m ingestion.ingest
  ```
- The script will track file changes, so you only need to run it again when you add, update, or remove knowledge files.
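Conceptually, ingestion follows a standard LangChain load-split-embed-store pattern. The sketch below is a simplified approximation for orientation only, not the template's actual `ingest.py`; the Supabase `table_name`/`query_name` values are assumptions, and the Neo4j graph-loading half is omitted.

```python
# Simplified approximation of the vector-store half of ingestion (see ingestion/ingest.py for the real logic).
import os

from langchain_community.document_loaders import DirectoryLoader, TextLoader
from langchain_community.vectorstores import SupabaseVectorStore
from langchain_google_genai import GoogleGenerativeAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter
from supabase import create_client

# Load every Markdown file in the client's knowledge base.
docs = DirectoryLoader("data", glob="**/*.md", loader_cls=TextLoader).load()

# Split documents into overlapping chunks suitable for embedding.
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# Embed the chunks with Gemini embeddings and write them to Supabase/pgvector.
embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")
supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_SERVICE_KEY"])
SupabaseVectorStore.from_documents(
    chunks,
    embeddings,
    client=supabase,
    table_name="documents",        # assumed table name
    query_name="match_documents",  # assumed similarity-search function
)
```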
- Open `agent/persona.prompt`.
- Edit the bot's name, personality description, and any special rules to match your client's brand voice.
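As a rough illustration (the wording below is invented for this example, not the prompt shipped with the template), a persona file might read:

```text
You are "Zara", the friendly virtual assistant for Acme Fitness Studio.
Answer in a warm, concise tone and always refer to the studio by name.
Only state prices, schedules, and policies that come from the knowledge base.
If you cannot answer, say so and offer to connect the user with a human.
```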
- Define Inputs: Open `tools/action_schemas.py`. Create or modify the Pydantic classes to define the arguments for your bot's custom actions (e.g., collecting a name, email, and reason for contact).
- Implement Logic: Open `tools/custom_tools.py`. Write the Python functions that perform the actions. This is where you would integrate with a client's CRM, calendar API, or other external systems. The template includes examples for lead capture and escalation; a minimal sketch follows this list.
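To make the split concrete, here is a minimal, hypothetical lead-capture tool. The class and function names are illustrative and the CRM call is stubbed out, so treat it as a pattern rather than the template's exact code.

```python
# Hypothetical example of pairing a Pydantic input schema with a custom LangChain tool.
from langchain_core.tools import tool
from pydantic import BaseModel, Field


class LeadCaptureInput(BaseModel):
    """Arguments the agent must collect before capturing a lead."""
    name: str = Field(description="The prospect's full name")
    email: str = Field(description="The prospect's email address")
    reason: str = Field(description="Why the prospect is reaching out")


@tool("capture_lead", args_schema=LeadCaptureInput)
def capture_lead(name: str, email: str, reason: str) -> str:
    """Record a sales lead so a human can follow up."""
    # In a real deployment, this is where you would call the client's CRM API.
    print(f"New lead: {name} <{email}> - {reason}")
    return "Lead captured successfully."
```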
- Start the application from the root directory:

  ```bash
  python main.py
  ```

- The API server will start, typically on `http://127.0.0.1:8000`.
Interact with your running bot by sending POST requests to the `/chat` endpoint.

- Endpoint: `POST /chat`
- Headers:
  - `Content-Type: application/json`
  - `x-api-key: YOUR_API_SECRET_KEY` (the one you set in your `.env` file)
- Request Body:

  ```json
  {
    "conversation_id": "a_unique_id_for_the_user_session",
    "query": "What are your operating hours?"
  }
  ```

- Success Response (200 OK):

  ```json
  {
    "response": "The bot's generated answer will be here."
  }
  ```
The included `Procfile` is configured for easy deployment on platforms like Heroku or Render:

```
web: uvicorn api.server:app --host 0.0.0.0 --port $PORT
```
This project is licensed under the MIT License. See the `LICENSE` file for details.