
RadarSense: LiDAR-driven Event Reasoning System

Gemini 3 Global Hackathon Entry

RadarSense transforms raw LiDAR point cloud data into meaningful behavioral insights using Gemini 3's advanced reasoning capabilities. Unlike traditional motion detection systems that simply trigger on movement, RadarSense understands what is happening and why.

The Vision

"Gemini 3 is not used to detect events, but to reason over events that emerge from LiDAR-based spatial interactions."

Traditional algorithms can detect motion. Gemini 3 understands intent.

How It Works

Three-Layer Architecture

┌─────────────────────────────────────────────────────────────┐
│                    LiDAR Point Cloud                         │
│                    (55Hz, ~300 points/frame)                 │
└─────────────────────────────────────────────────────────────┘
                              ↓
┌─────────────────────────────────────────────────────────────┐
│              Low-Level Processing (C++ Backend)              │
│         • Tracking • Motion Analysis • Zone Detection        │
└─────────────────────────────────────────────────────────────┘
                              ↓
┌─────────────────────────────────────────────────────────────┐
│                Event Abstraction Layer                       │
│    ENTER_ZONE | EXIT_ZONE | DWELL | PAUSE | CONVERGE        │
└─────────────────────────────────────────────────────────────┘
                              ↓
┌─────────────────────────────────────────────────────────────┐
│              ★ GEMINI 3 REASONING ENGINE ★                   │
│                                                              │
│  • Temporal Pattern Analysis                                 │
│  • Multi-Target Relationship Inference                       │
│  • Intent Recognition                                        │
│  • Anomaly Detection                                         │
└─────────────────────────────────────────────────────────────┘
                              ↓
┌─────────────────────────────────────────────────────────────┐
│                   Behavioral Insights                        │
│  "Person A was pacing while waiting for Person B,           │
│   then they left together - likely companions meeting"       │
└─────────────────────────────────────────────────────────────┘

What Makes This Special

Beyond Simple Detection

| Traditional System   | RadarSense + Gemini 3                                                                     |
| -------------------- | ----------------------------------------------------------------------------------------- |
| "Motion detected"    | "Hesitant approach - person appears uncertain"                                             |
| "2 people in zone"   | "Meeting scenario - Person A waited, Person B arrived, they left together as companions"   |
| "Movement stopped"   | "Surveillance behavior - repeated entry with prolonged observation"                        |

Event Tokens → Semantic Understanding

RadarSense converts raw sensor data into discrete event tokens:

Event Sequence (45 seconds):
[ENTER_ZONE t=0] → [DWELL t=5, 12s] → [PAUSE t=17] →
[ENTER_ZONE t=20, target=B] → [CONVERGE t=25] → [SYNC_MOVE t=28] → [EXIT_ZONE t=32]

Gemini 3 analyzes this sequence and produces:

{
  "behaviorPattern": "meeting",
  "socialRelation": "companions",
  "intentInference": "Pre-arranged meeting - Person A arrived early and waited",
  "narrativeSummary": "Person A paced anxiously while waiting, Person B arrived and they left together",
  "anomalyScore": 0.1,
  "confidence": 0.87
}
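The token stream and insight payload above can be modeled with TypeScript types along these lines. This is a sketch: the names `EventToken` and `BehaviorInsight` are illustrative and not necessarily what `src/types/eventTokens.ts` actually declares.

```typescript
// Illustrative types for the event stream and the Gemini 3 insight payload.
// Names are assumptions; the repo's real types live in src/types/eventTokens.ts.

type EventType =
  | "ENTER_ZONE" | "EXIT_ZONE" | "DWELL"
  | "PAUSE" | "CONVERGE" | "SYNC_MOVE";

interface EventToken {
  type: EventType;
  t: number;            // seconds since sequence start
  durationS?: number;   // e.g. DWELL length
  target?: string;      // e.g. "B" for the second person
}

interface BehaviorInsight {
  behaviorPattern: string;
  socialRelation: string;
  intentInference: string;
  narrativeSummary: string;
  anomalyScore: number; // 0..1
  confidence: number;   // 0..1
}

// The 45-second sequence from above, encoded as tokens:
const sequence: EventToken[] = [
  { type: "ENTER_ZONE", t: 0 },
  { type: "DWELL", t: 5, durationS: 12 },
  { type: "PAUSE", t: 17 },
  { type: "ENTER_ZONE", t: 20, target: "B" },
  { type: "CONVERGE", t: 25 },
  { type: "SYNC_MOVE", t: 28 },
  { type: "EXIT_ZONE", t: 32 },
];

// Compact string form suitable for embedding in a prompt:
const serialized = sequence
  .map(e =>
    `[${e.type} t=${e.t}` +
    `${e.durationS ? `, ${e.durationS}s` : ""}` +
    `${e.target ? `, target=${e.target}` : ""}]`)
  .join(" → ");
```

Serializing tokens to a compact string keeps the prompt small while preserving timing, which is what the temporal reasoning depends on.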

Features

AI-Powered Analysis

  • Behavior Pattern Recognition: confident_entry, hesitant_approach, exploring, loitering, meeting, following, coordinated, handoff
  • Social Relationship Inference: strangers, acquaintances, companions
  • Intent Recognition: What are they trying to do?
  • Anomaly Detection: Is this behavior normal or suspicious?
  • Narrative Generation: Human-readable scene descriptions
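The label sets above can be pinned down as string unions so the UI can validate what the model returns before rendering it. A sketch with assumed names:

```typescript
// String unions matching the label sets listed above (illustrative names).
const BEHAVIOR_PATTERNS = [
  "confident_entry", "hesitant_approach", "exploring", "loitering",
  "meeting", "following", "coordinated", "handoff",
] as const;
type BehaviorPattern = typeof BEHAVIOR_PATTERNS[number];

const SOCIAL_RELATIONS = ["strangers", "acquaintances", "companions"] as const;
type SocialRelation = typeof SOCIAL_RELATIONS[number];

// Runtime guard: LLM output arrives as untyped JSON, so check it explicitly.
function isBehaviorPattern(x: string): x is BehaviorPattern {
  return (BEHAVIOR_PATTERNS as readonly string[]).includes(x);
}
```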

Real-Time Visualization

  • 3D point cloud rendering (Three.js)
  • Interactive detection zone editor
  • Live event timeline
  • AI insights panel with confidence scores

Multi-Provider AI Support

  • Google Gemini 3 (Official API)
  • OpenAI-compatible APIs (Custom endpoints)
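A provider-agnostic service might be shaped like the following. This is a sketch under assumptions: the interface names, config fields, and placeholder `analyze` bodies are illustrative, not what `src/services/aiService.ts` actually does, and the real implementation would make the network call to the chosen API.

```typescript
// Sketch of a multi-provider AI service (names and fields are assumptions).

interface AIProviderConfig {
  provider: "gemini" | "openai-compatible";
  apiKey: string;
  model: string;
  baseUrl?: string; // only needed for custom OpenAI-compatible endpoints
}

interface AIProvider {
  name: string;
  analyze(prompt: string): Promise<string>;
}

function createProvider(cfg: AIProviderConfig): AIProvider {
  if (cfg.provider === "gemini") {
    return {
      name: "gemini",
      // Placeholder: a real implementation would call the Gemini API here.
      analyze: async (prompt) => `gemini(${cfg.model}): ${prompt.length} chars`,
    };
  }
  return {
    name: "openai-compatible",
    // Placeholder: a real implementation would POST to cfg.baseUrl here.
    analyze: async (prompt) => `openai(${cfg.baseUrl ?? "default"}): ${prompt.length} chars`,
  };
}
```

Keeping the provider behind one interface is what lets the settings panel switch between Gemini and custom endpoints without touching the reasoning pipeline.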

Simulation Mode

Test the AI reasoning without real hardware:

  • Walking scenario
  • Loitering behavior
  • Meeting scenario (two people)
  • Suspicious activity
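Each scenario above amounts to a canned event sequence fed to the same reasoning pipeline as real LiDAR data. A minimal sketch (the repo keeps its actual scenarios in `src/hooks/useMockData.ts`; these sequences are illustrative):

```typescript
// Canned event sequences for simulation mode (illustrative content).

type Scenario = "walking" | "loitering" | "meeting" | "suspicious";

function mockEvents(scenario: Scenario): string[] {
  switch (scenario) {
    case "walking":
      return ["ENTER_ZONE", "EXIT_ZONE"];
    case "loitering":
      return ["ENTER_ZONE", "DWELL", "PAUSE", "DWELL"];
    case "meeting":
      return ["ENTER_ZONE", "DWELL", "ENTER_ZONE", "CONVERGE", "SYNC_MOVE", "EXIT_ZONE"];
    case "suspicious":
      return ["ENTER_ZONE", "PAUSE", "EXIT_ZONE", "ENTER_ZONE", "PAUSE", "EXIT_ZONE"];
  }
}
```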

Quick Start

Prerequisites

  • Node.js 18+
  • (Optional) Unitree L2 LiDAR for real data

Installation

# Clone the repository
git clone https://github.com/your-repo/radarsense.git
cd radarsense/frontend

# Install dependencies
npm install

# Start development server
npm run dev

Configure AI

  1. Open the app in your browser
  2. Click the settings icon in the AI panel
  3. Enter your Gemini API key
  4. Select model (gemini-3-flash-preview recommended)
  5. Test connection
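Steps 3 and 4 boil down to a small settings object that the "Test connection" step validates. A sketch, with field names assumed rather than taken from `GeminiSettings.tsx`:

```typescript
// Minimal shape for the AI settings panel (field names are assumptions).
interface AISettings {
  apiKey: string;
  model: string; // e.g. "gemini-3-flash-preview"
}

// Returns a list of problems; empty array means the settings look usable.
function validateSettings(s: AISettings): string[] {
  const errors: string[] = [];
  if (!s.apiKey.trim()) errors.push("API key is required");
  if (!s.model.trim()) errors.push("Model must be selected");
  return errors;
}
```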

Try Simulation Mode

  1. Enable "Simulation Mode" toggle
  2. Select a scenario (e.g., "Meeting")
  3. Enable "AI Analysis"
  4. Watch Gemini 3 reason about the simulated behavior

Technical Architecture

frontend/
├── src/
│   ├── components/
│   │   ├── PointCloudViewer.tsx    # 3D visualization
│   │   ├── GeminiInsights.tsx      # AI insights panel
│   │   └── GeminiSettings.tsx      # AI configuration
│   ├── services/
│   │   ├── aiService.ts            # Multi-provider AI service
│   │   └── eventTokenizer.ts       # Event abstraction layer
│   ├── hooks/
│   │   ├── useGeminiInsights.ts    # AI integration hook
│   │   └── useMockData.ts          # Simulation scenarios
│   └── types/
│       └── eventTokens.ts          # Event & insight types

backend/                            # C++ LiDAR server (optional)
├── src/
│   ├── lidar_server.cpp            # Hardware interface
│   ├── websocket_handler.cpp       # Real-time streaming
│   └── detection_region.cpp        # Zone detection

The Prompt Engineering

RadarSense uses carefully crafted prompts to maximize Gemini 3's reasoning capabilities:

REASONING TASKS (Think step by step):

1. TEMPORAL PATTERN ANALYSIS:
   - What is the rhythm of the events?
   - Are there pauses that suggest decision-making?

2. MULTI-TARGET RELATIONSHIP:
   - Did they enter together or separately?
   - Are they moving in sync (companions) or independently?

3. INTENT INFERENCE:
   - What might this person/group be trying to accomplish?
   - What would a security professional notice?

4. WHAT TRADITIONAL ALGORITHMS MISS:
   - A simple motion detector sees "movement" - what do YOU see?
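The reasoning tasks above get assembled around a serialized event sequence before being sent to the model. A sketch of that assembly (the project's exact prompt text and function names may differ):

```typescript
// Build the reasoning prompt around a serialized event sequence
// (illustrative; the project's actual prompt lives in its AI service code).

const REASONING_TASKS = `REASONING TASKS (Think step by step):

1. TEMPORAL PATTERN ANALYSIS:
   - What is the rhythm of the events?
   - Are there pauses that suggest decision-making?

2. MULTI-TARGET RELATIONSHIP:
   - Did they enter together or separately?
   - Are they moving in sync (companions) or independently?

3. INTENT INFERENCE:
   - What might this person/group be trying to accomplish?
   - What would a security professional notice?

4. WHAT TRADITIONAL ALGORITHMS MISS:
   - A simple motion detector sees "movement" - what do YOU see?`;

function buildPrompt(eventSequence: string): string {
  return (
    `Event sequence:\n${eventSequence}\n\n` +
    REASONING_TASKS +
    `\n\nRespond with JSON fields: behaviorPattern, socialRelation, ` +
    `intentInference, narrativeSummary, anomalyScore, confidence.`
  );
}
```

Pinning the output to named JSON fields is what keeps the model's response parseable into the insight object shown earlier.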

Demo Scenarios

Meeting Scenario

Two people meeting: Person A arrives first and paces while waiting. Person B enters from another direction. They converge and leave together.

Gemini 3 Output:

"Pre-arranged meeting between companions. Person A exhibited waiting behavior (pacing pattern) for approximately 15 seconds before Person B arrived. Synchronized departure suggests familiarity."

Surveillance Scenario

Repeated entry and exit with prolonged observation periods.

Gemini 3 Output:

"Surveillance behavior detected. Subject entered zone 3 times within 2 minutes, each time pausing to observe before exiting. Anomaly score: 0.78"

Deployment

# Build for production
npm run build

# Output in dist/ folder
# Deploy to any static hosting (Vercel, Netlify, AWS S3, etc.)

Why Gemini 3?

Gemini 3's multimodal reasoning capabilities are perfect for this application:

  1. Temporal Understanding: Analyzes event sequences over time, not just snapshots
  2. Relationship Inference: Understands social dynamics between multiple targets
  3. Intent Recognition: Goes beyond "what" to understand "why"
  4. Natural Language Output: Generates human-readable insights

License

MIT

Acknowledgments

  • Google Gemini 3 API
  • Unitree Robotics (L2 LiDAR)
  • Three.js
  • React
