A search-and-rescue robotics platform that scouts collapsed structures, communicates with survivors, and produces a photorealistic 3D walkthrough for incident commanders — before any human enters the building.
Built at LA Hacks 2026 by Srini, Krish, Aadi, and Varun.
After earthquakes, explosions, and structural collapses, first responders face environments too unstable for immediate human entry. RIIS sends a sub-$200 ground rover in first. The rover navigates the space autonomously, identifies survivors with onboard pose detection, and engages them in their native language using real-time voice synthesis. Captured imagery is reconstructed into a photoreal 3D Gaussian Splat scene, which the incident commander walks through in VR before risking entry.
```text
PiCar-X rover ──► WebSocket stream ──► Operator dashboard
      │                                        ▲
      ├──► YOLO detection ─────────────────────┤
      │                                        │
      ├──► Fetch.ai agents ────────────────────┤
      │       Scout → Triage                   │
      │       (ASI-1 SBAR)                     │
      │       Comms + Mapping ─────────────────┘
      │
      └──► Frame archive ──► MASt3R + InstantSplat
                                      │
                               scene_quest.ply
                                      │
                          Meta Quest 2 walkthrough
```
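The WebSocket contract between rover and dashboard is specified in rover/docs/PROTOCOL.md. As a rough sketch only (the field names here are assumptions, not the canonical schema), each stream message bundles a JPEG frame with a telemetry snapshot:

```python
import base64
import json
import time

# Illustrative envelope only; the canonical schema lives in rover/docs/PROTOCOL.md.
def make_stream_message(jpeg_bytes: bytes, battery_pct: float, fps: float) -> str:
    """Bundle one JPEG frame and current telemetry into a JSON text frame."""
    return json.dumps({
        "type": "frame",
        "timestamp": time.time(),
        "frame_jpeg_b64": base64.b64encode(jpeg_bytes).decode("ascii"),
        "telemetry": {"battery_pct": battery_pct, "fps": fps},
    })
```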
Five independently runnable components:
| Component | What it does | Key tech |
|---|---|---|
| rover/ | Embedded rover: navigation, perception, streaming | Python, picamera2, websockets |
| dashboard/ | Operator mission control UI | Vanilla HTML/CSS/JS |
| agents/ | Multi-agent triage system | Fetch.ai uAgents, ASI-1 Mini |
| reconstruction/ | 3D Gaussian Splat pipeline | MASt3R, InstantSplat, PyTorch |
| overlay/ | Standalone pose inference overlay | YOLOv8n-pose, FastAPI |
| Sponsor | Integration | Where |
|---|---|---|
| Fetch.ai uAgents | Four-agent chain: Scout detects, Triage assesses, Comms voices, Mapping reconstructs | agents/riis_agents.py |
| ASI-1 Mini | LLM generates SBAR triage report from detection event | agents/riis_agents.py:138 |
| ElevenLabs | Multilingual survivor contact audio (Spanish demo, EN/ES/ZH supported) | rover/rover/audio.py, overlay/pose_server.py |
| Gemma 2 | Offline triage field extraction for dashboard data | dashboard/gemma_triage.py |
```bash
cd rover
pip install -e .
python -m rover   # synthetic frames + mock hardware off-Pi
```

The rover auto-detects the Pi camera (picamera2) and PiCar-X hardware. Both fall back to software mocks if unavailable, so the full streaming pipeline works on any machine.
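The auto-detect amounts to a try-import with a mock fallback; a minimal sketch of the pattern (the MockCar class here is illustrative, the real implementation lives in rover/rover/hardware.py):

```python
# Sketch of the auto-detect pattern used for the drive hardware.
class MockCar:
    """Software stand-in used when PiCar-X hardware is absent."""
    def forward(self, speed: int) -> None:
        print(f"[mock] forward at speed {speed}")

def make_car():
    try:
        from picarx import Picarx  # SunFounder driver, present only on the Pi
        return Picarx()
    except ImportError:
        return MockCar()
```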
To archive frames for reconstruction, set in rover/config/default.yaml:

```yaml
perception:
  frame_archive_dir: "/data/riis/frames"
  archive_every_n: 3   # saves ~5 fps at 15 fps capture
```

Or override at runtime:

```bash
RIIS__PERCEPTION__FRAME_ARCHIVE_DIR=/data/riis/frames python -m rover
```
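The RIIS__ variables map onto nested YAML keys by splitting on double underscores; a minimal sketch of how such an override scheme can work (the actual logic is in rover/rover/config.py and may differ in detail):

```python
import os

def apply_env_overrides(config: dict, prefix: str = "RIIS__") -> dict:
    """Map RIIS__PERCEPTION__FRAME_ARCHIVE_DIR=... onto config["perception"]["frame_archive_dir"]."""
    for key, value in os.environ.items():
        if not key.startswith(prefix):
            continue
        *parents, leaf = key[len(prefix):].lower().split("__")
        node = config
        for part in parents:
            node = node.setdefault(part, {})
        node[leaf] = value
    return config
```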
```bash
cd dashboard
# Open index.html in Chrome — no build step, no server required
# Click Start (or press Space)
```

The dashboard auto-probes ws://rover.local:8765/stream for a live rover. If unreachable, it plays the archived assets/rover_pi_cam.mp4 instead. The Agent Flow panel similarly tries ws://localhost:8768/agents, and falls back to built-in choreography if the bridge is not running.
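To check the live path without opening a browser, a small probe client can mimic the dashboard's behavior (a sketch, not part of the repo; assumes the websockets package):

```python
import asyncio
import websockets  # pip install websockets

async def probe(url: str = "ws://rover.local:8765/stream") -> None:
    """Connect to the rover stream and print the first message, as the dashboard's probe does."""
    try:
        async with websockets.connect(url, open_timeout=3) as ws:
            print("live:", (await ws.recv())[:120])
    except (OSError, asyncio.TimeoutError):
        print("rover unreachable; the dashboard would fall back to recorded video")

asyncio.run(probe())
```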
```bash
cd agents
pip install uagents openai colorama
python bridge.py                      # agent bureau + WebSocket bridge on :8768
ASI_API_KEY=sk-... python bridge.py   # with live ASI-1 triage generation
```

The bridge waits for the dashboard client to connect before firing the agent chain, keeping the animation synchronized with the operator clicking Start.
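For orientation, the message-passing shape looks roughly like this minimal two-agent sketch (models, seeds, and handler names are illustrative; the real four-agent chain is in agents/riis_agents.py):

```python
from uagents import Agent, Bureau, Context, Model

class DetectionEvent(Model):
    confidence: float
    location: str

scout = Agent(name="scout", seed="scout-demo-seed")
triage = Agent(name="triage", seed="triage-demo-seed")

@scout.on_interval(period=10.0)
async def report(ctx: Context):
    # In RIIS this fires on a real YOLO detection, not a timer.
    await ctx.send(triage.address, DetectionEvent(confidence=0.91, location="sector B"))

@triage.on_message(model=DetectionEvent)
async def assess(ctx: Context, sender: str, msg: DetectionEvent):
    ctx.logger.info(f"Survivor candidate at {msg.location} ({msg.confidence:.0%}); drafting SBAR")

bureau = Bureau()
bureau.add(scout)
bureau.add(triage)

if __name__ == "__main__":
    bureau.run()
```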
```bash
cd overlay
pip install ultralytics fastapi uvicorn opencv-python requests
python pose_server.py --rover-stream http://192.168.1.100:8000/stream.mjpg --port 8080
# Then open http://localhost:8080
```

The overlay falls back to the webcam automatically if the rover stream is unreachable.
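The fallback is a capture-open check; a sketch of the pattern (the real version lives in overlay/pose_server.py):

```python
import cv2

def open_capture(stream_url: str) -> cv2.VideoCapture:
    """Try the rover MJPEG stream first; fall back to the local webcam (device 0)."""
    cap = cv2.VideoCapture(stream_url)
    if cap.isOpened():
        return cap
    cap.release()
    print("rover stream unreachable, falling back to webcam")
    return cv2.VideoCapture(0)
```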
```bash
# One-time setup (run in Colab or on a Linux GPU box)
python reconstruction/setup_colab.py --work-dir /content/work

# Run the pipeline against archived rover frames
python reconstruction/run_pipeline.py \
    --frames /data/riis/frames \
    --output /data/riis/output \
    --instantsplat /content/work/instantsplat \
    --iters 7000 \
    --target-gaussians 400000
```

Outputs scene_quest.ply (≤400k Gaussians, Quest 2 ready), scene_full.ply (full desktop quality), and scene_flythrough.mp4.
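The Quest export hits its budget via importance-weighted pruning; a hedged sketch of the idea (scoring by opacity times mean scale is an assumption here; run_pipeline.py's actual weighting may differ):

```python
import numpy as np

def prune_gaussians(opacity: np.ndarray, scale: np.ndarray, target: int = 400_000) -> np.ndarray:
    """Return indices of the `target` highest-importance Gaussians.

    Importance = opacity * mean axis scale (an illustrative score;
    the pipeline's real weighting may differ).
    """
    importance = opacity * scale.mean(axis=1)
    if importance.size <= target:
        return np.arange(importance.size)
    return np.argpartition(importance, -target)[-target:]
```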
Quest 2 deployment:

- Download `scene_quest.ply` from the output directory
- Unity 2022.3+: install the aras-p/UnityGaussianSplatting plugin
- Drag `scene_quest.ply` into Unity → assign a `GaussianSplatRenderer` component → build for Android targeting Quest 2
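Before building, it can be worth sanity-checking the export's splat count (3DGS PLY files store one Gaussian per vertex element; this sketch uses the plyfile package, which is not among the listed dependencies):

```python
from plyfile import PlyData  # pip install plyfile

ply = PlyData.read("scene_quest.ply")
n = ply["vertex"].count
print(f"{n:,} Gaussians ({'within' if n <= 400_000 else 'over'} the Quest 2 budget)")
```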
| Component | Requirement |
|---|---|
| Rover | Raspberry Pi 4 + SunFounder PiCar-X, or any Linux/Mac for mock mode |
| Dashboard | Chrome / Firefox (WebSocket + WebGL canvas) |
| Agents | Python 3.10+, uagents, optional ASI_API_KEY |
| Overlay | Python 3.10+, CUDA optional (CPU inference ~3 fps) |
| Reconstruction | CUDA GPU with ≥8 GiB VRAM (Google Colab L4/A100 recommended) |
```text
RIIS/
├── rover/ # Raspberry Pi package (python -m rover)
│ ├── rover/
│ │ ├── signals.py # Thread-safe EventBus (pub/sub)
│ │ ├── hardware.py # PiCar-X HAL + mock
│ │ ├── navigation.py # Reactive obstacle-avoidance loop
│ │ ├── telemetry.py # Battery, WiFi, FPS metrics
│ │ ├── audio.py # ElevenLabs audio playback
│ │ ├── streaming.py # WebSocket frame + telemetry server
│ │ ├── perception.py # Camera capture + frame archiving
│ │ ├── config.py # YAML config + env overrides
│ │ └── runtime.py # Top-level wiring + entry point
│ ├── config/default.yaml
│ ├── docs/PROTOCOL.md # WebSocket message contract
│ └── tests/
│
├── dashboard/ # Operator UI (open index.html)
│ ├── index.html
│ ├── app.js # Main RAF loop + phase transitions
│ ├── style.css
│ ├── config.js # Source selection + rover URL
│ ├── agents/
│ │ └── AgentFlowPanel.js # Live bridge + fallback choreography
│ ├── sources/
│ │ ├── VideoSourceManager.js
│ │ ├── LiveStreamSource.js
│ │ └── RecordedVideoSource.js
│ └── data/ # Pre-baked JSON (YOLO, transcript, triage, pipeline)
│
├── agents/ # Fetch.ai uAgents
│ ├── riis_agents.py # All four agents + message models
│ └── bridge.py # Bureau orchestrator + WebSocket relay
│
├── overlay/ # Standalone YOLO inference server
│ ├── pose_server.py # FastAPI + PoseWorker + Recorder
│ └── index.html # Overlay UI
│
└── reconstruction/ # 3D Gaussian Splat pipeline
├── setup_colab.py # One-time GPU environment setup
├── run_pipeline.py # MASt3R → InstantSplat → Quest 2 PLY
    └── requirements.txt
```
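signals.py's EventBus is the glue between these modules; a minimal sketch of a thread-safe pub/sub of that shape (method names here are assumptions, not the module's exact API):

```python
import threading
from collections import defaultdict
from typing import Any, Callable

class EventBus:
    """Thread-safe pub/sub: modules publish and subscribe on topic strings."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._subs: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        with self._lock:
            self._subs[topic].append(handler)

    def publish(self, topic: str, payload: Any) -> None:
        with self._lock:
            handlers = list(self._subs[topic])
        for handler in handlers:  # invoke outside the lock to avoid deadlock
            handler(payload)
```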
Phase 1–3 (rover searching):

- Rover streams JPEG frames over WebSocket to the dashboard
- Dashboard's `VideoSourceManager` receives frames → draws to canvas
- Pre-baked YOLO detections (`prebake_yolo.py`) are replayed in sync → bounding box + skeleton overlay
- Fetch.ai Scout agent fires a `DetectionEvent` → Triage calls ASI-1 Mini → SBAR report
- Agent Flow panel animates the message chain live (or via fallback choreography)
- Triage → Comms: ElevenLabs audio plays at the rover

Phase 4 (reconstruction):

- Dashboard animates the 3D reconstruction pipeline progress
- Off-screen: `run_pipeline.py` runs MASt3R pose estimation + InstantSplat 3DGS training on GPU
- Importance-weighted pruning keeps ≤400k Gaussians within the Quest 2 budget

Phase 5 (VR ready):

- Dashboard shows "Scene ready — awaiting incident commander"
- Incident commander loads `scene_quest.ply` in Unity on Quest 2 for a VR walkthrough
MIT