Gemini 3 Global Hackathon Entry
RadarSense transforms raw LiDAR point cloud data into meaningful behavioral insights using Gemini 3's advanced reasoning capabilities. Unlike traditional motion detection systems that simply trigger on movement, RadarSense understands what is happening and why.
"Gemini 3 is not used to detect events, but to reason over events that emerge from LiDAR-based spatial interactions."
Traditional algorithms can detect motion. Gemini 3 understands intent.
```
┌─────────────────────────────────────────────────────────────┐
│                     LiDAR Point Cloud                       │
│                  (55Hz, ~300 points/frame)                  │
└─────────────────────────────────────────────────────────────┘
                               ↓
┌─────────────────────────────────────────────────────────────┐
│             Low-Level Processing (C++ Backend)              │
│       • Tracking  • Motion Analysis  • Zone Detection       │
└─────────────────────────────────────────────────────────────┘
                               ↓
┌─────────────────────────────────────────────────────────────┐
│                   Event Abstraction Layer                   │
│      ENTER_ZONE | EXIT_ZONE | DWELL | PAUSE | CONVERGE      │
└─────────────────────────────────────────────────────────────┘
                               ↓
┌─────────────────────────────────────────────────────────────┐
│                ★ GEMINI 3 REASONING ENGINE ★                │
│                                                             │
│        • Temporal Pattern Analysis                          │
│        • Multi-Target Relationship Inference                │
│        • Intent Recognition                                 │
│        • Anomaly Detection                                  │
└─────────────────────────────────────────────────────────────┘
                               ↓
┌─────────────────────────────────────────────────────────────┐
│                     Behavioral Insights                     │
│      "Person A was pacing while waiting for Person B,       │
│    then they left together - likely companions meeting"     │
└─────────────────────────────────────────────────────────────┘
```
| Traditional System | RadarSense + Gemini 3 |
|---|---|
| "Motion detected" | "Hesitant approach - person appears uncertain" |
| "2 people in zone" | "Meeting scenario - Person A waited, Person B arrived, they left together as companions" |
| "Movement stopped" | "Surveillance behavior - repeated entry with prolonged observation" |
RadarSense converts raw sensor data into discrete event tokens:
Event Sequence (45 seconds):

```
[ENTER_ZONE t=0] → [DWELL t=5, 12s] → [PAUSE t=17] →
[ENTER_ZONE t=20, target=B] → [CONVERGE t=25] → [SYNC_MOVE t=28] → [EXIT_ZONE t=32]
```
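A minimal sketch of this tokenization step (illustrative only; the real `eventTokenizer.ts` may use different names and fields):

```typescript
// Hypothetical event token types mirroring the sequence shown above.
type EventType =
  | "ENTER_ZONE" | "EXIT_ZONE" | "DWELL" | "PAUSE" | "CONVERGE" | "SYNC_MOVE";

interface EventToken {
  type: EventType;
  t: number;            // seconds since sequence start
  target?: string;      // tracked target id, e.g. "A" or "B"
  durationSec?: number; // duration for DWELL events
}

// Serialize a token sequence into the compact text form fed to the model.
function serializeEvents(events: EventToken[]): string {
  return events
    .map((e) => {
      const parts = [`${e.type} t=${e.t}`];
      if (e.durationSec !== undefined) parts.push(`${e.durationSec}s`);
      if (e.target) parts.push(`target=${e.target}`);
      return `[${parts.join(", ")}]`;
    })
    .join(" → ");
}
```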
Gemini 3 analyzes this sequence and produces:
```json
{
  "behaviorPattern": "meeting",
  "socialRelation": "companions",
  "intentInference": "Pre-arranged meeting - Person A arrived early and waited",
  "narrativeSummary": "Person A paced anxiously while waiting, Person B arrived and they left together",
  "anomalyScore": 0.1,
  "confidence": 0.87
}
```

- Behavior Pattern Recognition: confident_entry, hesitant_approach, exploring, loitering, meeting, following, coordinated, handoff
- Social Relationship Inference: strangers, acquaintances, companions
- Intent Recognition: What are they trying to do?
- Anomaly Detection: Is this behavior normal or suspicious?
- Narrative Generation: Human-readable scene descriptions
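These capabilities map naturally onto a typed insight object. A hypothetical TypeScript shape matching the JSON example above (field names follow the example; the parsing logic is an assumption, not the project's actual code):

```typescript
// Illustrative shape of the structured insight returned by the reasoning step.
interface BehavioralInsight {
  behaviorPattern:
    | "confident_entry" | "hesitant_approach" | "exploring" | "loitering"
    | "meeting" | "following" | "coordinated" | "handoff";
  socialRelation: "strangers" | "acquaintances" | "companions";
  intentInference: string;
  narrativeSummary: string;
  anomalyScore: number; // 0 = normal, 1 = highly anomalous
  confidence: number;   // model self-reported confidence, 0..1
}

// Defensive parse of the model's JSON reply.
function parseInsight(raw: string): BehavioralInsight | null {
  try {
    const obj = JSON.parse(raw);
    if (typeof obj.behaviorPattern === "string" && typeof obj.confidence === "number") {
      return obj as BehavioralInsight;
    }
  } catch { /* malformed JSON falls through to null */ }
  return null;
}
```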
The web frontend provides:

- 3D point cloud rendering (Three.js)
- Interactive detection zone editor
- Live event timeline
- AI insights panel with confidence scores
Supported AI providers:

- Google Gemini 3 (Official API)
- OpenAI-compatible APIs (Custom endpoints)
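A sketch of how a multi-provider service can build its requests. The REST paths shown (`:generateContent` for Gemini, `/chat/completions` for OpenAI-compatible servers) are the standard public endpoints, but this code is illustrative, not the project's actual `aiService.ts`:

```typescript
// Illustrative multi-provider request builder.
type Provider =
  | { kind: "gemini"; apiKey: string; model: string }
  | { kind: "openai-compatible"; baseUrl: string; apiKey: string; model: string };

function buildRequest(p: Provider, prompt: string): { url: string; body: object } {
  if (p.kind === "gemini") {
    // Gemini REST API: POST models/{model}:generateContent
    return {
      url: `https://generativelanguage.googleapis.com/v1beta/models/${p.model}:generateContent?key=${p.apiKey}`,
      body: { contents: [{ parts: [{ text: prompt }] }] },
    };
  }
  // OpenAI-compatible servers expose a chat completions endpoint.
  return {
    url: `${p.baseUrl}/chat/completions`,
    body: { model: p.model, messages: [{ role: "user", content: prompt }] },
  };
}
```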
Test the AI reasoning without real hardware:
- Walking scenario
- Loitering behavior
- Meeting scenario (two people)
- Suspicious activity
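The meeting scenario, for instance, could be scripted as a fixed token sequence (a hypothetical generator mirroring the event sequence shown earlier; the real `useMockData.ts` may work differently):

```typescript
// Hypothetical generator for the "Meeting" simulation scenario: Person A
// arrives, dwells, pauses; Person B enters; they converge and leave together.
interface SimEvent { type: string; t: number; target: string }

function meetingScenario(): SimEvent[] {
  return [
    { type: "ENTER_ZONE", t: 0, target: "A" },
    { type: "DWELL", t: 5, target: "A" },
    { type: "PAUSE", t: 17, target: "A" },
    { type: "ENTER_ZONE", t: 20, target: "B" },
    { type: "CONVERGE", t: 25, target: "A+B" },
    { type: "SYNC_MOVE", t: 28, target: "A+B" },
    { type: "EXIT_ZONE", t: 32, target: "A+B" },
  ];
}
```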
Prerequisites:

- Node.js 18+
- (Optional) Unitree L2 LiDAR for real data
```bash
# Clone the repository
git clone https://github.com/your-repo/radarsense.git
cd radarsense/frontend

# Install dependencies
npm install

# Start development server
npm run dev
```

To configure the AI provider:

- Open the app in your browser
- Click the settings icon in the AI panel
- Enter your Gemini API key
- Select model (gemini-3-flash-preview recommended)
- Test connection
To run a simulated scenario:

- Enable "Simulation Mode" toggle
- Select a scenario (e.g., "Meeting")
- Enable "AI Analysis"
- Watch Gemini 3 reason about the simulated behavior
```
frontend/
├── src/
│   ├── components/
│   │   ├── PointCloudViewer.tsx    # 3D visualization
│   │   ├── GeminiInsights.tsx      # AI insights panel
│   │   └── GeminiSettings.tsx      # AI configuration
│   ├── services/
│   │   ├── aiService.ts            # Multi-provider AI service
│   │   └── eventTokenizer.ts       # Event abstraction layer
│   ├── hooks/
│   │   ├── useGeminiInsights.ts    # AI integration hook
│   │   └── useMockData.ts          # Simulation scenarios
│   └── types/
│       └── eventTokens.ts          # Event & insight types

backend/                            # C++ LiDAR server (optional)
├── src/
│   ├── lidar_server.cpp            # Hardware interface
│   ├── websocket_handler.cpp       # Real-time streaming
│   └── detection_region.cpp        # Zone detection
```
RadarSense uses carefully crafted prompts to maximize Gemini 3's reasoning capabilities:
```
REASONING TASKS (Think step by step):

1. TEMPORAL PATTERN ANALYSIS:
   - What is the rhythm of the events?
   - Are there pauses that suggest decision-making?

2. MULTI-TARGET RELATIONSHIP:
   - Did they enter together or separately?
   - Are they moving in sync (companions) or independently?

3. INTENT INFERENCE:
   - What might this person/group be trying to accomplish?
   - What would a security professional notice?

4. WHAT TRADITIONAL ALGORITHMS MISS:
   - A simple motion detector sees "movement" - what do YOU see?
```
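A sketch of how such a prompt might be assembled in code (the actual template in the project may differ; the function name is hypothetical):

```typescript
// Illustrative prompt assembly for the reasoning step. Takes the serialized
// event log and the sequence duration, returns the full instruction text.
function buildReasoningPrompt(eventLog: string, durationSec: number): string {
  return [
    `You are analyzing LiDAR-derived behavioral events from a monitored zone.`,
    `Event sequence (${durationSec} seconds): ${eventLog}`,
    ``,
    `REASONING TASKS (think step by step):`,
    `1. TEMPORAL PATTERN ANALYSIS: What is the rhythm of the events?`,
    `   Are there pauses that suggest decision-making?`,
    `2. MULTI-TARGET RELATIONSHIP: Did they enter together or separately?`,
    `   Are they moving in sync (companions) or independently?`,
    `3. INTENT INFERENCE: What might this person/group be trying to accomplish?`,
    `4. WHAT TRADITIONAL ALGORITHMS MISS: A motion detector sees "movement" - what do YOU see?`,
    ``,
    `Respond with JSON: behaviorPattern, socialRelation, intentInference,`,
    `narrativeSummary, anomalyScore (0-1), confidence (0-1).`,
  ].join("\n");
}
```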
Two people meeting: Person A arrives first and paces while waiting. Person B enters from another direction. They converge and leave together.
Gemini 3 Output:
"Pre-arranged meeting between companions. Person A exhibited waiting behavior (pacing pattern) for approximately 15 seconds before Person B arrived. Synchronized departure suggests familiarity."
Suspicious activity: repeated entry and exit with prolonged observation periods.
Gemini 3 Output:
"Surveillance behavior detected. Subject entered zone 3 times within 2 minutes, each time pausing to observe before exiting. Anomaly score: 0.78"
```bash
# Build for production
npm run build

# Output in dist/ folder
# Deploy to any static hosting (Vercel, Netlify, AWS S3, etc.)
```

Gemini 3's multimodal reasoning capabilities are a natural fit for this application:
- Temporal Understanding: Analyzes event sequences over time, not just snapshots
- Relationship Inference: Understands social dynamics between multiple targets
- Intent Recognition: Goes beyond "what" to understand "why"
- Natural Language Output: Generates human-readable insights
License: MIT
Built with:

- Google Gemini 3 API
- Unitree Robotics (L2 LiDAR)
- Three.js
- React