We spend 4.5 billion hours annually waiting in airports. Currently, the experience is fragmented:
- Static Maps: Blueprints that don't know who you are or what you like.
- Disconnected Data: Flight apps tracking planes, Yelp tracking food, but nothing connecting the two.
- Cognitive Load: Trying to find a quiet spot with an outlet and good coffee in a new terminal is a research project.
Airports shouldn't be passive "waiting rooms." They should be responsive environments.
LayoverOS is not a chatbot. It is an Agentic Coordinator that sits between the traveler and the airport's infrastructure.
It transforms a static location ("I am at SFO") into a dynamic set of actionable opportunities.
- "I have 2 hours" → Agent finds a lounge.
- "My flight is delayed" → Agent suggests a sleeping pod near the new gate.
- "I'm hungry" → Agent filters for open restaurants in your terminal, using Vector Search to match your preferences (e.g., "healthy," "fast").
Most AI bots answer questions. LayoverOS takes action. Through our Supervisor Node architecture, the system detects intent not just to chat, but to transact.
- Example: A user asking "Can I buy a pass?" triggers the Bursar Node, which bypasses the LLM's text output and directly commands the Frontend to render a secure Payment Modal. This bridge between Natural Language Intent and React Component Rendering is the core innovation.
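A minimal sketch of that intent-to-action bridge. The `route_intent` helper and its keyword trigger list are illustrative assumptions; the real Bursar Node's detection logic is more involved:

```python
import re
from dataclasses import dataclass

@dataclass
class AgentAction:
    """Structured action sent to the frontend instead of chat text."""
    kind: str      # "render_component" bypasses the LLM; "chat" goes to it
    payload: dict

# Hypothetical trigger list; the production node uses richer intent detection.
PURCHASE_RE = re.compile(r"\b(buy|purchase|pay for|get a pass)\b", re.IGNORECASE)

def route_intent(user_message: str) -> AgentAction:
    """If the message signals a transaction, skip the LLM entirely and
    command the frontend to render the secure Payment Modal."""
    if PURCHASE_RE.search(user_message):
        return AgentAction(kind="render_component",
                           payload={"component": "PaymentModal", "secure": True})
    return AgentAction(kind="chat", payload={"text": user_message})
```

The key design choice: payment flows never pass through generated text, so a hallucinated price or link can never reach the checkout UI.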
We built a sophisticated multi-agent system orchestrated by LangChain/LangGraph, using MongoDB Atlas as the central nervous system.
We utilize MongoDB beyond simple storage. It acts as the shared memory for our agent fleet.
Vector Search (Semantic Retrieval):
- Data: We ingested 160+ real amenity data points from SFO, JFK, and DEN.
- Embeddings: Used Voyage AI (`voyage-3-large`) to generate high-fidelity 1024-dimensional vectors.
- Index: Configured an Atlas Vector Search index using `cosine` similarity with accurate Metadata Filtering.
- Why MongoDB? Unlike Pinecone, we needed to store the metadata (Opening Hours, Terminal ID) right next to the vectors. This allowed us to perform Hybrid Search: finding "Coffee" (Vector) that is also "Open Now" (Boolean Filter) in a single query.
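The hybrid query can be sketched as an Atlas `$vectorSearch` aggregation stage with a metadata pre-filter. Field and index names here (`embedding`, `terminal_id`, `open_now`, `amenities_vector_index`) are illustrative assumptions, not the exact schema:

```python
def build_hybrid_search_pipeline(query_vector: list[float],
                                 terminal_id: str,
                                 limit: int = 5) -> list[dict]:
    """Semantic match on the 1024-dim embedding, pre-filtered by the
    metadata stored alongside the vector in the same document."""
    return [
        {
            "$vectorSearch": {
                "index": "amenities_vector_index",  # assumed index name
                "path": "embedding",
                "queryVector": query_vector,
                "numCandidates": 100,   # candidates scanned before top-k
                "limit": limit,
                "filter": {
                    "terminal_id": terminal_id,  # "in your terminal"
                    "open_now": True,            # "open right now"
                },
            }
        },
        # Return only what the Scout Node needs, plus the similarity score.
        {"$project": {"name": 1, "terminal_id": 1,
                      "score": {"$meta": "vectorSearchScore"}}},
    ]

# Usage: collection.aggregate(build_hybrid_search_pipeline(vec, "Terminal 2"))
```

Because the filter runs inside the `$vectorSearch` stage, "coffee that is open now" is one round trip rather than a vector query followed by client-side filtering.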
Graph Checkpointing (Long-Term Memory):
- Using `MongoDBSaver` with LangGraph, we persist the `AgentState` frame-by-frame.
- Impact: This enables true Offline Resilience. A user can lose cell service in an elevator, reload the page 5 minutes later, and the Agent acts as if no time passed, remembering the exact context.
We moved beyond simple "prompt engineering" to Flow Engineering. The `agent_graph.py` defines a state machine with strict typing:

```python
import operator
from typing import Annotated, List, TypedDict

class AgentState(TypedDict):
    messages: Annotated[List[str], operator.add]
    user_location: str   # e.g., "Terminal 2"
    airport_code: str    # e.g., "SFO", "JFK", "DEN"
    flight_number: str   # e.g., "UA400" (Persisted)
    next_step: str
```

- 🤖 Supervisor Node: The dispatcher. It uses regex pattern matching (`[A-Z]{2}\d{3}`) and keyword density analysis to route traffic. If it detects a user changing airports ("I just landed at JFK"), it updates the global `airport_code` state instantly.
- 🔍 Scout Node: The researcher. Steps:
- Concierge Check: If the query is vague ("I'm hungry"), it stops and asks clarifying questions ("Which terminal?") to save compute.
- Vector Lookup: Queries MongoDB.
- Synthesis: Uses Llama 3 to turn raw JSON results into a friendly recommendation.
- ✈️ Flight Node: Tracks real-time status. It has "Sticky Memory": once you mention `UA400`, it remembers it for the rest of the session until you clear it.
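A simplified sketch of the Supervisor's routing pass, using the flight-number regex above. The routing is reduced to two branches for illustration; the real dispatcher also weighs keyword density:

```python
import re

FLIGHT_RE = re.compile(r"\b[A-Z]{2}\d{3}\b")       # e.g., "UA400"
AIRPORT_RE = re.compile(r"\b(SFO|JFK|DEN)\b")      # supported airports

def supervise(message: str, state: dict) -> dict:
    """One dispatch pass: update shared state from the message, then
    choose the next node in the graph."""
    text = message.upper()
    if (m := AIRPORT_RE.search(text)):
        state["airport_code"] = m.group(0)       # "I just landed at JFK"
    if (m := FLIGHT_RE.search(text)):
        state["flight_number"] = m.group(0)      # sticky flight memory
        state["next_step"] = "flight_node"       # status questions -> Flight
    else:
        state["next_step"] = "scout_node"        # amenity queries -> Scout
    return state
```

Because the airport and flight checks mutate shared state before routing, every downstream node sees the updated context without re-parsing the message.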
- Aesthetics: "Glassmorphism" Design System using Tailwind CSS and Framer Motion.
- Interactive Blueprint: Built a custom Scalable Vector Graphic (SVG) map engine. It is not an image; it is a DOM structure. This allows us to programmatically highlight "Gate F12" or "Starbucks" on the map in response to AI events.
- Hybrid Rendering: The chat is fully streaming, but critical actions (Payments, Maps) are rendered as Client Components triggered by specific tokens (`[PAYMENT_REQUIRED]`) hidden in the AI's response stream.
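Conceptually, the token-triggered rendering works like this sketch. It is simplified to scan the fully joined stream; a production parser buffers partial tokens across chunk boundaries, and the token-to-component mapping is an assumption beyond the one token named above:

```python
def parse_stream(chunks: list[str]) -> tuple[str, list[str]]:
    """Join streamed chunks, strip hidden control tokens from the visible
    text, and collect the Client Components they trigger."""
    tokens = {"[PAYMENT_REQUIRED]": "PaymentModal"}  # extend as needed
    text = "".join(chunks)
    actions = []
    for token, component in tokens.items():
        if token in text:
            actions.append(component)
            text = text.replace(token, "")  # never show the token to users
    return text.strip(), actions
```

Note that joining first means a token split across two chunks (as in the usage below) is still detected.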
Challenge: LLMs love to invent gates that don't exist.
Solution: We implemented Strict RAG (Retrieval Augmented Generation). The Scout Node is forbidden from answering from its training data. It can ONLY synthesize answers from the found_items list returned by MongoDB. If MongoDB returns empty, the Agent says "I don't know," rather than lying.
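The Strict RAG guardrail can be sketched as a pure function: an empty result set short-circuits to an honest refusal before the LLM is ever called, and otherwise the synthesis prompt is built only from MongoDB's `found_items` (prompt wording and field names are illustrative):

```python
def synthesize_from_results(found_items: list[dict]) -> str:
    """Build the Scout Node's synthesis prompt strictly from retrieved
    documents. No results means no LLM call and no invented gates."""
    if not found_items:
        return "I don't know."
    context = "\n".join(
        f"- {item['name']} (Terminal {item['terminal']})" for item in found_items
    )
    # In production this prompt is sent to Llama 3 for friendly phrasing.
    return "Answer using ONLY the amenities below; never invent places:\n" + context
```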
Challenge: During stress testing, our primary model (Llama 3.3 70B) began rejecting requests due to provider overload (412 Precondition Failed).
Solution: We built a Self-Healing Fallback Mechanism:
- The system wraps every LLM call in a `try/except` block.
- If the LLM fails (Network/Auth/RateLimit), the system bypasses the brain and returns the raw structured data from MongoDB directly to the user.
- Result: The user always gets their answer (e.g., list of coffee shops), even if the "personality" of the bot is temporarily offline.
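The fallback wrapper can be sketched as follows; `llm_call` is injected here so the failure path is easy to exercise, and the fallback message wording is illustrative:

```python
def ask_with_fallback(query: str, found_items: list[dict], llm_call) -> str:
    """Try the LLM first; on any failure (network, auth, rate limit),
    degrade gracefully to the raw MongoDB results."""
    try:
        return llm_call(query, found_items)
    except Exception:
        # Brain offline: the user still gets their answer, minus personality.
        names = ", ".join(item["name"] for item in found_items)
        return f"Here is what I found: {names}" if names else "I don't know."

def flaky_llm(query, items):
    raise RuntimeError("412 Precondition Failed")  # simulated provider overload

reply = ask_with_fallback("coffee", [{"name": "Blue Bottle"}], flaky_llm)
```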
- Python 3.10+
- Node.js 18+
- MongoDB Atlas Cluster (M0 or higher)
```shell
cd backend
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
# Create .env file with:
# MONGO_URI=...
# FIREWORKS_API_KEY=...
# VOYAGE_API_KEY=...
python3 api.py
# Server runs on http://localhost:8000
```

```shell
cd frontend
npm install
npm run dev
# Dashboard available at http://localhost:3000
```

Built with ❤️ for the MongoDB AI Hackathon