149 changes: 149 additions & 0 deletions examples/petpassport/README.md
@@ -0,0 +1,149 @@
# Pet Passport: Pet-friendly route planner

This directory contains the **Pet Passport** application, a demo of an AI agent that uses the Model Context Protocol (MCP) to combine data analysis with real-world location services.

## Demo Overview

Pet Passport helps users plan a perfect day out with their dog based on breed popularity in New York City. The agent uses a "Macro-to-Micro" reasoning chain:
1. **Strategic Discovery (BigQuery):** Identifies the NYC Zip Code with the highest population for a specific breed.
2. **Local Execution (Maps):** Uses that Zip Code as a location bias to find "pet friendly cafes" and "dog parks".
3. **Itinerary Generation:** Combines the data to create a "Pet Passport" itinerary.
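
The three steps above can be sketched as a plain Python pipeline. Function names and data here are hypothetical stand-ins — the real agent delegates each step to the BigQuery and Maps MCP toolsets:

```python
# Hypothetical sketch of the macro-to-micro chain; the real agent
# calls MCP tools rather than local functions.

def find_top_zip_for_breed(breed: str) -> str:
    """Macro step: stand-in for the BigQuery popularity query."""
    mock_counts = {
        ("Labrador Retriever", "10025"): 412,
        ("Labrador Retriever", "11215"): 388,
    }
    # Pick the zip code with the highest license count for this breed.
    return max((z for (b, z) in mock_counts if b == breed),
               key=lambda z: mock_counts[(breed, z)])

def find_places_near(zip_code: str, query: str) -> list[str]:
    """Micro step: stand-in for a Maps place search biased to the zip."""
    return [f"{query} near {zip_code} #1", f"{query} near {zip_code} #2"]

def build_itinerary(breed: str) -> dict:
    """Combine both steps into a Pet Passport itinerary."""
    zip_code = find_top_zip_for_breed(breed)
    return {
        "breed": breed,
        "zip": zip_code,
        "cafes": find_places_near(zip_code, "pet friendly cafe"),
        "parks": find_places_near(zip_code, "dog park"),
    }

itinerary = build_itinerary("Labrador Retriever")
print(itinerary["zip"])  # → 10025
```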

The agent is built with the `google-adk` framework and powered by the `gemini-2.5-pro` model.

## Dataset

This demo uses the [NYC Dog Licensing Dataset](https://data.cityofnewyork.us/Health/NYC-Dog-Licensing-Dataset/nu7n-tubp) from NYC Open Data. It contains records of licensed dogs in New York City.
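
The "strategic discovery" query the agent runs against this dataset boils down to a group-and-count per zip code. The same aggregation can be tried locally on toy rows with the dataset's column names (the real data lives in BigQuery):

```python
from collections import Counter

# Toy rows using the BreedName / ZipCode columns the agent queries.
rows = [
    {"BreedName": "Labrador Retriever", "ZipCode": "10025"},
    {"BreedName": "Labrador Retriever", "ZipCode": "10025"},
    {"BreedName": "Pug", "ZipCode": "11215"},
]

# Equivalent of: SELECT ZipCode, COUNT(*) ... GROUP BY ZipCode
#                ORDER BY count DESC LIMIT 1
counts = Counter(r["ZipCode"] for r in rows
                 if r["BreedName"] == "Labrador Retriever")
top_zip, top_count = counts.most_common(1)[0]
print(top_zip, top_count)  # → 10025 2
```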

## Project Structure

```text
petpassport/
├── petpassport/
│   ├── __init__.py
│   ├── agent.py          # Agent definition using ADK
│   ├── tools.py          # MCP Tool configuration (BigQuery & Maps)
│   ├── main.py           # FastAPI application exposing the agent
│   ├── static/           # Custom web UI served at /ui/
│   ├── Dockerfile        # Container image for Cloud Run
│   └── requirements.txt  # Python dependencies
├── setup/
│   ├── setup_env.sh      # Enables APIs, creates the Maps key and .env
│   └── setup_bigquery.sh # Provisions the BigQuery dataset
├── pyproject.toml        # Project dependencies
└── README.md             # This documentation
```

## Prerequisites

* **Python 3.13+**
* **Google Cloud Project** with access to BigQuery and Maps MCP endpoints.
* **Environment Variables** (Store these in a `.env` file in the project root):
* `MAPS_API_KEY`: Your Google Maps API key.
* `GOOGLE_CLOUD_PROJECT`: Your Google Cloud project ID.
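
A minimal `.env` might look like this (both values are placeholders; the setup script below generates the real file):

```text
MAPS_API_KEY=AIza...your-restricted-maps-key
GOOGLE_CLOUD_PROJECT=my-gcp-project-id
```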

## Deployment Guide

Follow these steps to set up and run the demo.

### 1. Authenticate with Google Cloud

Set your active Google Cloud project and authenticate. This is required for the ADK to access BigQuery.

```bash
gcloud config set project [YOUR-PROJECT-ID]
gcloud auth application-default login --project [YOUR-PROJECT-ID]
```

*Note: If you encounter errors about a different project during authentication, you can bypass it by disabling the quota project and setting it manually:*
```bash
gcloud auth application-default login --disable-quota-project
gcloud auth application-default set-quota-project [YOUR-PROJECT-ID]
```

### 2. Configure Environment

Run the environment setup script. This script will:
* Enable necessary Google Cloud APIs (Maps, BigQuery, remote MCP).
* Create a restricted Google Maps Platform API Key.
* Create a `.env` file with required environment variables.

```bash
chmod +x setup/setup_env.sh
./setup/setup_env.sh
```

### 3. Provision BigQuery

Run the setup script. This script automates the following:
* Creates a Cloud Storage bucket.
* Uploads the CSV data files.
* Creates the `nyc_dogs` BigQuery dataset.
* Loads the data into BigQuery table `licenses`.

```bash
chmod +x setup/setup_bigquery.sh
./setup/setup_bigquery.sh
```

## Installation

1. Navigate to the project directory:
```bash
cd examples/petpassport
```

2. Create a virtual environment:
```bash
python3 -m venv .venv
```

3. Activate the virtual environment:
```bash
source .venv/bin/activate
```

4. Install dependencies:
```bash
pip install -r petpassport/requirements.txt
```

## Running the Application Locally

To run the application locally on your machine:

1. **Install Uvicorn** (if not already installed):
```bash
pip install uvicorn
```

2. **Start the FastAPI server:**
```bash
uvicorn petpassport.main:app --reload
```

3. **Open the UI:**
Navigate to `http://127.0.0.1:8000/ui/` in your browser to interact with the Pet Passport interface.

**Sample prompt:** "I have a Labrador Retriever. Where should we go in NYC?"

## Deploying to Cloud Run

To deploy the Pet Passport agent to Google Cloud Run:

1. **Ensure you are in the project directory**:
```bash
cd examples/petpassport
```

2. **Deploy to Cloud Run**:
Build and deploy with a single `gcloud` command. Replace `[YOUR-REGION]` with your desired region (e.g., `us-west1`).

```bash
gcloud run deploy petpassport \
  --source petpassport \
  --region [YOUR-REGION] \
  --allow-unauthenticated \
  --set-env-vars "MAPS_API_KEY=$MAPS_API_KEY,GOOGLE_CLOUD_PROJECT=$GOOGLE_CLOUD_PROJECT" \
  --labels dev-tutorial=google-mcp
```

*Note: Shell variables are not forwarded to the service automatically, so the command passes `MAPS_API_KEY` and `GOOGLE_CLOUD_PROJECT` explicitly via `--set-env-vars`. Make sure both are set in your current shell (for example by sourcing the generated `.env` file) before deploying.*

3. **Permissions**:
Ensure the Cloud Run service account has the following IAM roles:
* `BigQuery Data Viewer` and `BigQuery Job User` (to query BigQuery).
* `Storage Object Admin` on the bucket `pet-passport-data-$PROJECT_ID` (to upload PDFs).
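
The query roles can be granted with `gcloud`; the sketch below assumes the service runs as the Compute Engine default service account (adjust `SA` if you configured a different identity):

```bash
PROJECT_ID=$(gcloud config get-value project)
PROJECT_NUMBER=$(gcloud projects describe "$PROJECT_ID" --format='value(projectNumber)')
SA="${PROJECT_NUMBER}-compute@developer.gserviceaccount.com"

gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:$SA" \
  --role="roles/bigquery.dataViewer"

gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:$SA" \
  --role="roles/bigquery.jobUser"
```
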
23 changes: 23 additions & 0 deletions examples/petpassport/petpassport/Dockerfile
@@ -0,0 +1,23 @@
FROM python:3.13-slim

WORKDIR /app

# Install system dependencies (required for some Python packages)
RUN apt-get update && apt-get install -y \
build-essential \
&& rm -rf /var/lib/apt/lists/*

# Install Python dependencies
# The adk deploy command looks for requirements.txt in the agent directory
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Expose port (Cloud Run defaults to 8080)
ENV PORT=8080
EXPOSE 8080

# Run using uvicorn pointing to main.py in the current directory
CMD ["sh", "-c", "uvicorn main:app --host 0.0.0.0 --port $PORT"]
2 changes: 2 additions & 0 deletions examples/petpassport/petpassport/__init__.py
@@ -0,0 +1,2 @@
from . import agent
from . import tools
54 changes: 54 additions & 0 deletions examples/petpassport/petpassport/agent.py
@@ -0,0 +1,54 @@
import os
import dotenv
from . import tools
from google.adk.agents import LlmAgent

dotenv.load_dotenv()

PROJECT_ID = os.getenv('GOOGLE_CLOUD_PROJECT', 'project_not_set')

maps_toolset = tools.get_maps_mcp_toolset()
bigquery_toolset = tools.get_bigquery_mcp_toolset()

root_agent = LlmAgent(
model='gemini-2.5-pro',
name='root_agent',
instruction=f"""
You are the Pet Passport Agent. Your goal is to help users find a fun walking route for their dog in NYC.

When the user provides a breed and a postal code (and optional preferences like "a cafe"), follow this flow:
1. **Strategic Discovery:** Use BigQuery to find the most popular neighborhood for that breed in NYC.
2. **Personalization:** Consider the user's postal code to suggest a walking path that is closer to them or in a relevant area, balancing breed popularity hubs with proximity.
3. **Local Execution:** Use Maps to build a walking route with specific places (parks, cafes) based on user preferences.

**STATE AWARENESS:** If the user asks for a *new* or *different* walking path than one suggested before, you MUST suggest a different set of locations or a different route.

Run all BigQuery query jobs using the project ID: {PROJECT_ID}.

Here is a concrete example of how to query the dog license data using the project ID variable and the correct schema:
```sql
SELECT ZipCode, COUNT(*) AS count
FROM `{PROJECT_ID}.nyc_dogs.licenses`
WHERE BreedName = 'Labrador Retriever'
GROUP BY ZipCode
ORDER BY count DESC
LIMIT 1;
```

Use the BigQuery toolset to query the dog license data.
Use the Maps toolset to find places and calculate routes.

**CRITICAL RULE FOR PLACES:** `search_places` returns AI-generated place data summaries along with `place_id`, latitude/longitude coordinates, and map links for each place. You must carefully associate each described place to its provided `place_id` or `lat_lng`. You MUST include the Google Maps links in your final itinerary response, and add relevant details about the places (e.g., rating, food type).

**CRITICAL ROUTING RULE:** To avoid hallucinating, you MUST provide the `origin` and `destination` using the exact `place_id` string OR `lat_lng` object returned by `search_places`. Do NOT guess or hallucinate an `address` or `place_id` if you do not know the exact name. You MUST use the exact `place_id` and names returned by `search_places`.

**NO DIRECTIONS LINKS:** You must NOT include a Google Maps directions link (e.g., `https://www.google.com/maps/dir/...`) in your final response. Only provide links to individual places using the `place_id` or direct links.

**IMAGE UPLOAD RULE:** If the user's message indicates they uploaded an image (e.g., contains `[Pet Photo Uploaded: /tmp/... ]`), you MUST use that image path as the `image_path` argument when calling `generate_pet_passport_photo` to generate a new image of the dog in the suggested location! DO NOT just use the uploaded image path as the final result, but use it as a reference to generate a new one! If no image was uploaded, DO NOT generate or suggest any images!

**REASONING RULE:** When you query BigQuery to find a popular area for the breed, you MUST explain in your final response *why* you chose that area. Include the exact count of that breed found in that neighborhood from the license table, and if it's not the highest in NYC, mention its rank.

After generating the itinerary, you MUST call the `save_pet_passport` tool to save this path to the user's profile. Pass `demo-user` as the `user_id`, the breed, the identified popular ZipCode or neighborhood as the location, a clean markdown version of the itinerary (not the full conversational response) as `route_details`, and the list of image paths (either the uploaded one or empty). The summary should include the popular breed count fact, the list of places with links and details (like rating, description from maps), and a short description of the walk.
""",
tools=[maps_toolset, bigquery_toolset, tools.generate_pet_passport_photo, tools.save_pet_passport]
)
96 changes: 96 additions & 0 deletions examples/petpassport/petpassport/main.py
@@ -0,0 +1,96 @@
from google.adk.cli.fast_api import get_fast_api_app
from fastapi.staticfiles import StaticFiles
from fastapi import FastAPI, Request, HTTPException, File, UploadFile
import os
import json
import shutil
from google.cloud import storage

# Create the standard ADK app in headless mode (web=False)
# auto_create_session=True is CRITICAL for custom UIs that generate random session IDs.
app = get_fast_api_app(agents_dir=".", web=False, auto_create_session=True)

# Mount custom static files
script_dir = os.path.dirname(os.path.abspath(__file__))
static_dir = os.path.join(script_dir, "static")

# Ensure the static directory exists
os.makedirs(static_dir, exist_ok=True)

from fastapi.responses import FileResponse

@app.get("/ui/")
async def get_ui_index():
index_path = os.path.join(static_dir, "index.html")
if os.path.exists(index_path):
return FileResponse(index_path)
raise HTTPException(status_code=404, detail="index.html not found")

app.mount("/ui", StaticFiles(directory=static_dir), name="ui")
app.mount("/tmp", StaticFiles(directory="/tmp"), name="tmp")

PROJECT_ID = os.getenv('GOOGLE_CLOUD_PROJECT', 'project_not_set')
BUCKET_NAME = f"pet-passport-data-{PROJECT_ID}"
print(f"BUCKET_NAME determined as: {BUCKET_NAME}")

import time

@app.post("/api/upload")
async def upload_file(file: UploadFile = File(...)):
try:
filename = f"upload_{int(time.time())}_{file.filename}"
        file_path = f"/tmp/{filename}"
with open(file_path, "wb") as buffer:
shutil.copyfileobj(file.file, buffer)

return {"file_path": file_path}
except Exception as e:
raise HTTPException(status_code=500, detail=str(e))

def get_storage_client():
return storage.Client()

@app.get("/api/paths")
async def get_paths(user_id: str):
try:
client = get_storage_client()
bucket = client.bucket(BUCKET_NAME)
blob = bucket.blob(f"user-{user_id}.json")

if not blob.exists():
return []

content = blob.download_as_text()
return json.loads(content)
except Exception as e:
import traceback
traceback.print_exc()
raise HTTPException(status_code=500, detail=str(e))

@app.post("/api/paths")
async def update_path(user_id: str, path_data: dict):
try:
client = get_storage_client()
bucket = client.bucket(BUCKET_NAME)
blob = bucket.blob(f"user-{user_id}.json")

paths = []
if blob.exists():
content = blob.download_as_text()
paths = json.loads(content)

path_id = path_data.get('id')
updated = False
for i, p in enumerate(paths):
if p.get('id') == path_id:
paths[i].update(path_data)
updated = True
break

if not updated:
paths.append(path_data)

blob.upload_from_string(json.dumps(paths))
return {"status": "success"}
except Exception as e:
raise HTTPException(status_code=500, detail=str(e))
5 changes: 5 additions & 0 deletions examples/petpassport/petpassport/requirements.txt
@@ -0,0 +1,5 @@
google-adk==1.28.0
python-dotenv
google-genai
pillow
reportlab