
WHY-RERO/EYECORE_V2


Driver Analysis System



Multi-modal, real-time driver safety and risk analysis system.

DeepFace emotion analysis · MediaPipe eye tracking · NMEA GPS · OBD-II/CAN vehicle data · OpenStreetMap speed limits

Simulation mode for development (PC — real AI + virtual sensors) · Vehicle mode for production (all hardware real)




Features

| Layer | Technology | Details |
|---|---|---|
| Emotion Analysis | DeepFace (RetinaFace backend) | 7 emotion classes, EMA smoothing, CLAHE preprocessing |
| Eye Tracking | MediaPipe Face Mesh | EAR calculation, microsleep (1 s), drowsiness (2.5 s), head pose estimation |
| Risk Scoring | EMA + decay + sustained boost | Speed-dependent dynamic emotion/eye weights |
| Speed Management | OpenStreetMap Overpass API | Real road speed limits, 120 s cache, Turkish speed tags |
| Hazard Zones | OSM Overpass | 300 m radius; speed camera, accident point, and junction alerts |
| Voice Alerts | pyttsx3 TTS | Priority queue, cooldowns, Turkish speech synthesis |
| Video Recording | OpenCV VideoWriter | Ring buffer: 10 s pre-event + 10 s post-event → MP4 |
| Driver Profile | JSON-based learning profile | Session history, personal risk bonus, anger multiplier |
| Web Dashboard | Flask + SocketIO + Chart.js | Real-time four-chart dashboard, REST API |
| Vehicle Data | C++ DLL → OBD-II / CAN bus | RPM, throttle, brake, gear, speed |
| UI Options | OpenCV / PyQt5 / Web | Three independent render modes |

Architecture

┌─────────────────────────────────────────────────────────────────┐
│                         CAMERA THREAD                           │
│  VideoCapture → ring buffer → _latest_frame (lock protected)    │
└───────────────────────────┬─────────────────────────────────────┘
                            │  frame.copy()
                            ▼
┌─────────────────────────────────────────────────────────────────┐
│              AI ANALYSIS THREAD (ThreadPoolExecutor)            │
│                                                                 │
│  ┌─────────────────────────┐   ┌──────────────────────────┐    │
│  │ EmotionDetector          │   │ EyeTracker               │    │
│  │  CLAHE preprocess        │   │  MediaPipe Face Mesh     │    │
│  │  DeepFace.analyze()      │   │  EAR (6-landmark)        │    │
│  │  Face ratio check        │   │  Head pose (yaw/pitch)   │    │
│  │  EMA smoothing α=0.55    │   │  Drowsy / Microsleep     │    │
│  └────────────┬────────────┘   └────────────┬─────────────┘    │
│               └──────────────┬──────────────┘                   │
└──────────────────────────────┼──────────────────────────────────┘
                               │  FrameResult + EyeState
                               ▼
┌─────────────────────────────────────────────────────────────────┐
│                         MAIN LOOP (main thread)                 │
│                                                                 │
│  GPSSimulator/GPSMonitor ──┐                                    │
│  VehicleSim/VehicleBridge ─┤                                    │
│                            ▼                                    │
│                    ┌───────────────┐                            │
│                    │  RiskScorer   │  EMA α=0.25                │
│                    │  decay=0.06   │  sustained boost           │
│                    │  dyn weights  │  speed-based ew/gw         │
│                    └──────┬────────┘                            │
│                           │  RiskSnapshot                       │
│                           ▼                                     │
│              ┌────────────────────────┐                         │
│              │  DriverProfileManager  │  personal bonus         │
│              └────────────┬───────────┘                         │
│                           │  boosted_score                      │
│                           ▼                                     │
│   ┌────────────┐  ┌───────────────┐  ┌─────────────────────┐   │
│   │SpeedAdvisor│  │HazardZoneMon. │  │  VideoRecorder      │   │
│   │OSM Overpass│  │OSM 300m radius│  │  ring buf → MP4     │   │
│   └─────┬──────┘  └───────┬───────┘  └─────────────────────┘   │
│         └────────┬─────────┘                                    │
│                  ▼                                              │
│         ┌────────────────┐                                      │
│         │  VoiceAlert    │  priority queue TTS                  │
│         └────────────────┘                                      │
│                  ▼                                              │
│         ┌────────────────┐   ┌───────────┐   ┌──────────────┐  │
│         │  push_update() │   │ qt_dash   │   │ draw_overlay │  │
│         │  Flask/Socket  │   │ PyQt5     │   │ OpenCV frame │  │
│         └────────────────┘   └───────────┘   └──────────────┘  │
└─────────────────────────────────────────────────────────────────┘

Simulation vs Vehicle Mode

| Component | simulation/ | vehicle/ |
|---|---|---|
| Camera | Real webcam | Real webcam / dashcam |
| Emotion (DeepFace) | Real | Real |
| Eye (MediaPipe) | Real | Real |
| GPS | Python simulator (Istanbul Kadikoy route) | NMEA serial port (pynmea2) |
| Vehicle data | Python physics engine | C++ DLL → OBD-II / CAN |
| Voice alerts | Printed to console | pyttsx3 TTS |
| Default UI | OpenCV window | PyQt5 dashboard |

Directory Structure

driver-analysis/
├── shared/                     # Imported by both modes
│   ├── models.py               # FrameResult, EyeState, GPSFix, VehicleData,
│   │                           # RiskSnapshot, AdvisoryEvent, LiveSnapshot
│   ├── risk_scorer.py          # EMA + decay + sustained boost risk engine
│   ├── driver_profile.py       # Learning driver profile (JSON ↔ DriverStats)
│   ├── speed_enforcer.py       # OSM speed limit + 4-action advisor
│   ├── speed_limit_api.py      # Overpass API client (cached)
│   ├── hazard_zones.py         # 300m hazard point detector
│   ├── voice_alert.py          # Priority queue TTS (pyttsx3 / console)
│   ├── video_recorder.py       # Ring buffer event recording → MP4
│   └── overlay.py              # OpenCV frame annotation
│
├── simulation/                 # PC development & test mode
│   ├── main.py                 # Simulation entry point
│   └── sensors/
│       ├── gps_sim.py          # Istanbul Kadikoy waypoint simulator
│       └── vehicle_sim.py      # Python physics engine (speed/RPM/gear)
│
├── vehicle/                    # Vehicle production mode
│   ├── main.py                 # Vehicle entry point
│   ├── sensors/
│   │   ├── emotion_detector.py # DeepFace + CLAHE + EMA
│   │   ├── eye_tracker.py      # MediaPipe EAR + head pose
│   │   ├── gps_monitor.py      # NMEA serial port reader
│   │   └── vehicle_bridge.py   # C++ DLL Python wrapper
│   └── bridge/
│       ├── vehicle_bridge.cpp  # OBD-II / CAN reader (C++)
│       ├── vehicle_bridge.h
│       └── derleme.bat         # Windows build script
│
├── ui/
│   ├── web_server.py           # Flask + SocketIO dashboard + REST API
│   └── qt_dashboard.py         # PyQt5 in-vehicle display
│
├── profiles/                   # Driver JSON profiles (runtime)
├── recordings/                 # Incident videos (runtime)
│
├── requirements.txt
├── requirements_sim.txt
├── requirements_vehicle.txt
└── LICENSE

Installation

Simulation Mode (PC development & testing)

```bash
git clone https://github.com/WHY-RERO/EYECORE_V2.git
cd EYECORE_V2
pip install -r requirements_sim.txt
```

MediaPipe warning: mediapipe >= 0.10.14 removes the legacy solutions API,
so requirements_sim.txt pins mediapipe==0.10.9. Do not change this pin.

TensorFlow / DeepFace: if your CPU lacks AVX support (older x86 or ARM CPUs), install the CPU build of PyTorch instead:

```bash
pip install torch torchvision --index-url https://download.pytorch.org/whl/cpu
```

DeepFace automatically falls back to the PyTorch backend when TensorFlow is unavailable.

Vehicle Mode

```bash
pip install -r requirements_vehicle.txt
```

Compile the C++ OBD-II bridge before running (see C++ Bridge).


Usage

Simulation

```bash
# Default — OpenCV window, camera 0
python -m simulation.main

# PyQt5 in-vehicle display
python -m simulation.main --ui qt

# Web dashboard only (http://localhost:5000)
python -m simulation.main --ui web

# Custom driver + camera
python -m simulation.main --driver-id john --name "John Doe" --camera 1

# Higher analysis frequency (not recommended on low-end CPUs)
python -m simulation.main --interval 0.2

# Custom web port
python -m simulation.main --web-port 8080

# View driver profile report
python -m simulation.main --profile john

# List all saved drivers
python -m simulation.main --list
```

Vehicle Mode

```bash
python -m vehicle.main --driver-id john --name "John Doe"

# Specify GPS + OBD ports
python -m vehicle.main --driver-id john --gps-port COM4 --obd-port COM5

# PyQt5 UI (vehicle mode default)
python -m vehicle.main --driver-id john --ui qt

# OpenCV window with overlay
python -m vehicle.main --driver-id john --ui cv
```

CLI Arguments

| Argument | Default (simulation / vehicle) | Description |
|---|---|---|
| --driver-id | sim_user / driver1 | Profile key |
| --name | "" | Display name |
| --camera | 0 | OpenCV camera index |
| --interval | 0.35 | AI analysis period (seconds) |
| --web-port | 5000 | Dashboard HTTP port |
| --ui | cv / qt | Render mode: cv, qt, or web |
| --gps-port | COM3 (vehicle only) | NMEA serial port |
| --obd-port | auto (vehicle only) | OBD-II serial port |
| --profile | — | Print a driver's profile report |
| --list | — | List saved drivers |

Algorithms

Risk Scoring

The system applies the following steps each analysis cycle:

1. Emotion Score

Weighted average across all emotion classes:

```
emotion_score = Σ (emotion_pct[i] / total) × WEIGHT[i]
```

| Emotion | Weight |
|---|---|
| angry | 1.00 |
| fear | 0.90 |
| disgust | 0.85 |
| sad | 0.65 |
| surprise | 0.45 |
| neutral | 0.05 |
| happy | 0.00 |
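As a sketch (function and dictionary names here are illustrative, not the project's), the weighted average can be written as:

```python
EMOTION_WEIGHTS = {
    "angry": 1.00, "fear": 0.90, "disgust": 0.85,
    "sad": 0.65, "surprise": 0.45, "neutral": 0.05, "happy": 0.00,
}

def emotion_score(emotion_pct: dict) -> float:
    """Weighted average of DeepFace emotion percentages, normalized to 0-1."""
    total = sum(emotion_pct.values())
    if total <= 0:
        return 0.0
    return sum(pct / total * EMOTION_WEIGHTS.get(emo, 0.0)
               for emo, pct in emotion_pct.items())
```

A frame that is 100% "angry" scores 1.0; a pure "happy" frame scores 0.0.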

2. Dynamic Weights (Speed-Dependent)

```
speed < 30 km/h  →  emotion 75%, eye 25%
30–70 km/h       →  linear interpolation
70–100 km/h      →  linear interpolation
speed > 100 km/h →  emotion 30%, eye 70%
```

At higher speeds eye tracking becomes the dominant safety signal.
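Assuming a single linear ramp between the published endpoints (the real implementation may use two segments with a knee at 70 km/h, whose intermediate weights are not stated), the weights can be sketched as:

```python
def dynamic_weights(speed_kmh: float) -> tuple:
    """Return (emotion_weight, eye_weight) as a function of speed.

    Sketch: one linear ramp from (30 km/h, 0.75) to (100 km/h, 0.30).
    """
    if speed_kmh <= 30:
        ew = 0.75
    elif speed_kmh >= 100:
        ew = 0.30
    else:
        ew = 0.75 + (speed_kmh - 30) / (100 - 30) * (0.30 - 0.75)
    return ew, 1.0 - ew
```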

3. EMA + Decay + Sustained Boost

```
raw     = clamp(ew × emotion_score + gw × eye_score, 0, 1)
ema     = 0.25 × raw + 0.75 × ema             # smoothing
decayed = max(ema, decayed - 0.06 × dt)       # slow descent
sustained_boost = min(0.25, (sust_sec / 15.0) × 0.25)
final   = clamp(decayed + sustained_boost, 0, 1)
```
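The update step above, written as one function (the state dict and names are illustrative):

```python
def risk_update(state: dict, emotion_score: float, eye_score: float,
                ew: float, gw: float, dt: float, sust_sec: float) -> float:
    """One risk-scorer cycle: EMA smoothing, slow decay floor, sustained boost."""
    clamp = lambda v: max(0.0, min(1.0, v))
    raw = clamp(ew * emotion_score + gw * eye_score)
    state["ema"] = 0.25 * raw + 0.75 * state.get("ema", 0.0)
    # decayed may only fall by 0.06/s, but never drops below the current EMA
    state["decayed"] = max(state["ema"], state.get("decayed", 0.0) - 0.06 * dt)
    boost = min(0.25, (sust_sec / 15.0) * 0.25)
    return clamp(state["decayed"] + boost)
```

Starting cold, a fully risky frame only lifts the score to 0.25; the score climbs over subsequent cycles, and sustained high risk adds up to +0.25.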

4. Risk Levels

| Level | Threshold | Action |
|---|---|---|
| LOW | < 0.35 | — |
| MEDIUM | 0.35 – 0.58 | Advisory (if speeding) |
| HIGH | 0.58 – 0.75 | Warning |
| CRITICAL | ≥ 0.75 | Critical alert + video recording |
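The threshold mapping can be sketched as follows (function name hypothetical; boundaries are assumed inclusive at the lower edge of each band):

```python
def risk_level(score: float) -> str:
    """Map a 0-1 risk score onto the discrete levels above."""
    if score >= 0.75:
        return "CRITICAL"
    if score >= 0.58:
        return "HIGH"
    if score >= 0.35:
        return "MEDIUM"
    return "LOW"
```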

Emotion Detection Pipeline

Raw frame
    │
    ▼
CLAHE (LAB color space, clipLimit=2.5)
    │   Low-light and glare correction
    ▼
DeepFace.analyze(enforce_detection=False)
    │
    ├── face_ratio = (w×h) / (img_w×img_h)
    │   face_ratio < 0.65 → face valid
    │
    ├── max_score >= 5 → process
    │
    └── EMA smoothing:
        ema[emo] = 0.55 × raw[emo] + 0.45 × ema[emo]

If no face is detected, the last valid result fades out over 2 seconds (fade = 1 - age/2.0).
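A minimal sketch of that fade rule (helper name hypothetical):

```python
def faded_score(last_score: float, age_sec: float, ttl: float = 2.0) -> float:
    """Fade the last valid emotion score linearly to zero over `ttl` seconds."""
    fade = max(0.0, 1.0 - age_sec / ttl)
    return last_score * fade
```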

Eye Tracking (EAR)

Eye Aspect Ratio computed from 6 MediaPipe landmarks:

EAR = (‖p2−p6‖ + ‖p3−p5‖) / (2 × ‖p1−p4‖)
| State | Condition |
|---|---|
| Closed | EAR < 0.20 |
| Microsleep | 1.0 s ≤ closed_duration < 2.5 s |
| Drowsy | closed_duration ≥ 2.5 s |
| Distracted | \|yaw\| > 25° or \|pitch\| > 20° |

Eye score formula:

eye_score = ear_score × 0.50 + drowsy_score × 0.35 + distracted × 0.15
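A sketch of both formulas, assuming the landmarks are already extracted as (x, y) pairs ordered p1..p6 (the MediaPipe index selection is omitted; names are hypothetical):

```python
import math

def ear(landmarks) -> float:
    """Eye Aspect Ratio from six (x, y) landmarks ordered p1..p6."""
    p1, p2, p3, p4, p5, p6 = landmarks
    # two vertical distances over twice the horizontal distance
    return (math.dist(p2, p6) + math.dist(p3, p5)) / (2.0 * math.dist(p1, p4))

def eye_score(ear_score: float, drowsy_score: float, distracted: float) -> float:
    """Combine the three eye signals with the published weights."""
    return ear_score * 0.50 + drowsy_score * 0.35 + distracted * 0.15
```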

Driver Profile Learning

Saved to JSON every 300 frames. Analysis over a sliding window of 30 frames:

```
anger_ratio > 0.30   → reduce speed limit, anger_multiplier += 0.05
fatigue_ratio > 0.30 → suggest break, −20 km/h
avg_risk > personal_stress_threshold → −10 km/h

personal_bonus = min(0.20, anger_ratio × 0.5)
final_risk = combined_score + personal_bonus
```
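The personal-bonus step can be sketched as follows, with the sliding window passed in as a list of emotion labels (function name hypothetical):

```python
def personal_bonus(recent_emotions: list, cap: float = 0.20) -> float:
    """Personal risk bonus from the anger ratio over the sliding window."""
    if not recent_emotions:
        return 0.0
    anger_ratio = recent_emotions.count("angry") / len(recent_emotions)
    return min(cap, anger_ratio * 0.5)
```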

Speed Limit Detection

Retrieved from the OSM Overpass API via maxspeed tag:

  1. Coordinates cached at ±0.001° precision (120s TTL)
  2. If maxspeed is missing, road-type defaults are used (motorway→120, residential→50, etc.)
  3. Turkish country tags supported: tr:urban, tr:rural, tr:motorway
  4. Automatic mph → km/h conversion
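The caching behavior in steps 1–2 can be sketched as follows; `fetch` stands in for the real Overpass request, and all names are hypothetical:

```python
import time

_CACHE: dict = {}
TTL = 120.0  # seconds

def cache_key(lat: float, lon: float) -> tuple:
    """Quantize coordinates to ±0.001° so nearby fixes share one cache entry."""
    return (round(lat, 3), round(lon, 3))

def speed_limit(lat: float, lon: float, fetch) -> int:
    """Return the cached limit, calling fetch(lat, lon) only on miss or expiry."""
    key = cache_key(lat, lon)
    hit = _CACHE.get(key)
    if hit and time.monotonic() - hit[1] < TTL:
        return hit[0]
    limit = fetch(lat, lon)
    _CACHE[key] = (limit, time.monotonic())
    return limit
```

Two GPS fixes a few metres apart quantize to the same key, so only the first one hits the network.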

API & Dashboard

Web dashboard runs at http://localhost:5000.

REST Endpoints

| Endpoint | Method | Description |
|---|---|---|
| / | GET | Live HTML dashboard |
| /api/status | GET | Current LiveSnapshot (JSON) |
| /api/history | GET | Last 300 snapshots (JSON array) |
| /api/health | GET | {"status": "ok", "timestamp": ...} |

SocketIO Events

| Event | Direction | Payload |
|---|---|---|
| update | server → client | LiveSnapshot (dataclass → dict) |

LiveSnapshot Schema

```json
{
  "timestamp": 1712345678.123,
  "driver_id": "john",
  "driver_name": "John Doe",
  "risk_score": 0.621,
  "risk_level": "HIGH",
  "emotion": "angry",
  "emotion_score": 0.784,
  "eye_score": 0.312,
  "speed_kmh": 87.4,
  "road_limit": 70,
  "sustained_sec": 12.3,
  "action": "warning",
  "message": "WARNING: High risk! Reduce speed to 70 km/h",
  "road_name": "E-5",
  "drowsy": false,
  "microsleep": false,
  "distracted": true,
  "lat": 40.9945,
  "lon": 29.0430,
  "obd_rpm": 3200,
  "obd_throttle": 68.5,
  "obd_gear": 4,
  "obd_brake": 0.0,
  "mode": "vehicle"
}
```
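A client polling /api/status could consume this schema as follows (sketch only; function name hypothetical):

```python
import json

def summarize(snapshot_json: str) -> str:
    """One-line summary of a LiveSnapshot payload, e.g. from GET /api/status."""
    s = json.loads(snapshot_json)
    return (f"{s['driver_name']}: risk {s['risk_score']:.2f} "
            f"({s['risk_level']}) at {s['speed_kmh']:.0f} km/h")
```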

Hardware Setup

GPS (Vehicle Mode)

Any GPS module using the NMEA 0183 protocol is supported.

GPS module TX → USB-UART adapter → COM port → pynmea2

Supported NMEA sentences: $GPRMC (position + speed), $GPGGA (altitude + fix quality)

```bash
# Test port
python -c "import serial; s=serial.Serial('COM3',9600); print(s.readline())"
```
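For illustration of the $GPRMC field layout only, here is a hand-rolled parse (the project itself uses pynmea2; checksum validation is omitted):

```python
def parse_gprmc(sentence: str):
    """Minimal $GPRMC parse: returns (lat, lon, speed_kmh) or None."""
    f = sentence.split(",")
    if not f[0].endswith("RMC") or f[2] != "A":  # 'A' = valid fix
        return None

    def deg(val: str, hemi: str, width: int) -> float:
        # NMEA packs degrees and minutes together: ddmm.mmmm / dddmm.mmmm
        d = float(val[:width]) + float(val[width:]) / 60.0
        return -d if hemi in ("S", "W") else d

    lat = deg(f[3], f[4], 2)
    lon = deg(f[5], f[6], 3)
    speed_kmh = float(f[7]) * 1.852  # knots → km/h
    return lat, lon, speed_kmh
```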

OBD-II (Vehicle Mode)

Works with any ELM327-based USB or Bluetooth OBD adapter. Queries are handled by the C++ DLL via vehicle_bridge.cpp.

OBD adapter ← → ELM327 ← → ECU (CAN/K-Line)
      ↕
COM port (virtual USB serial)
      ↕
vehicle_bridge.dll (C++)
      ↕
VehicleBridge (Python ctypes wrapper)

C++ Bridge

Build

Windows:

```bat
vehicle\bridge\derleme.bat
```

derleme.bat contents:

```bash
gcc -shared -fPIC -o vehicle_bridge.dll vehicle/bridge/vehicle_bridge.cpp -lm
```

Linux:

```bash
gcc -shared -fPIC -o vehicle_bridge.so vehicle/bridge/vehicle_bridge.cpp -lm
```

The DLL is output to vehicle/bridge/. Vehicle mode will not start without it.

DLL Interface

vehicle_bridge.cpp exports the following functions:

```c
int  vb_connect(const char* port);
void vb_disconnect();
int  vb_read(VehicleState* out);   // speed_kmh, rpm, throttle, brake, gear
```

The Python side binds via ctypes. On failure, vehicle mode exits with a fatal error.
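A sketch of what that ctypes binding could look like, assuming four float fields plus an int gear (the actual struct layout is defined in vehicle_bridge.h, so treat these field types as assumptions):

```python
import ctypes

class VehicleState(ctypes.Structure):
    """Python mirror of the C struct; field types are assumed, not published."""
    _fields_ = [
        ("speed_kmh", ctypes.c_float),
        ("rpm",       ctypes.c_float),
        ("throttle",  ctypes.c_float),
        ("brake",     ctypes.c_float),
        ("gear",      ctypes.c_int),
    ]

def load_bridge(path: str = "vehicle/bridge/vehicle_bridge.dll"):
    """Bind the exported functions; call only where the compiled DLL exists."""
    lib = ctypes.CDLL(path)
    lib.vb_connect.argtypes = [ctypes.c_char_p]
    lib.vb_connect.restype = ctypes.c_int
    lib.vb_read.argtypes = [ctypes.POINTER(VehicleState)]
    lib.vb_read.restype = ctypes.c_int
    return lib
```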


Configuration

Emotion Sensitivity

vehicle/sensors/emotion_detector.py

```python
_EMOTION_EMA_ALPHA = 0.55   # Higher → faster emotion response (0.2–0.8)
_STALE_TTL        = 2.0     # Seconds to hold last valid result when no face
_MAX_FACE_RATIO   = 0.65    # Max face/frame area ratio
```

The max_score >= 5 threshold: if DeepFace's highest emotion score is below this, the face is considered invalid.

shared/risk_scorer.py

```python
EMA_ALPHA  = 0.25   # Risk smoothing (higher → more reactive but noisier)
DECAY_RATE = 0.06   # Score drop rate per second after emotion resolves

LEVEL_MEDIUM   = 0.35
LEVEL_HIGH     = 0.58
LEVEL_CRITICAL = 0.75
```

Voice Alert Cooldowns

shared/voice_alert.py

```python
COOLDOWNS = {
    "MEDIUM": 10.0, "HIGH": 8.0, "CRITICAL": 4.0,
    "microsleep": 5.0, "drowsy": 10.0, "distracted": 8.0,
    "angry": 12.0, "stressed": 15.0, "fatigue": 15.0,
}
```
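The cooldown gate can be sketched as follows (class name hypothetical; the real module also orders alerts by priority):

```python
import time

class AlertGate:
    """Suppress repeat alerts of the same kind within their cooldown window."""

    def __init__(self, cooldowns: dict, clock=time.monotonic):
        self.cooldowns = cooldowns
        self.last: dict = {}
        self.clock = clock  # injectable for testing

    def allow(self, kind: str) -> bool:
        now = self.clock()
        cd = self.cooldowns.get(kind, 0.0)
        if now - self.last.get(kind, float("-inf")) < cd:
            return False  # still cooling down
        self.last[kind] = now
        return True
```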

Analysis Frequency

The --interval argument controls how often the AI thread is triggered:

| Value | Approx. FPS | CPU Load |
|---|---|---|
| 0.5 | ~2 fps | Low |
| 0.35 | ~3 fps | Moderate (default) |
| 0.2 | ~5 fps | High |
| 0.1 | ~10 fps | Very high (GPU recommended) |

The camera thread always reads at 30 fps; --interval only controls how often frames are handed to the AI thread, so the UI never freezes regardless of this value.

Video Recording

shared/video_recorder.py

```python
PRE_EVENT_SEC  = 10   # Ring buffer duration before trigger
POST_EVENT_SEC = 10   # Duration recorded after trigger
FPS            = 15
```

Triggered on HIGH or CRITICAL risk. Files are saved to recordings/ as incident_YYYY-MM-DD_HH-MM-SS_LEVEL.mp4.
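The pre/post buffering logic can be sketched without OpenCV (class name hypothetical); writing the collected frames to MP4 would use cv2.VideoWriter:

```python
from collections import deque

PRE_EVENT_SEC, POST_EVENT_SEC, FPS = 10, 10, 15

class RingRecorder:
    """Keep the last PRE_EVENT_SEC of frames; on trigger, freeze them and
    capture POST_EVENT_SEC more. MP4 encoding is out of scope here."""

    def __init__(self):
        self.buffer = deque(maxlen=PRE_EVENT_SEC * FPS)
        self.post_left = 0
        self.event_frames = []

    def push(self, frame):
        if self.post_left > 0:
            self.event_frames.append(frame)
            self.post_left -= 1
        else:
            self.buffer.append(frame)

    def trigger(self):
        """Freeze the pre-event window and start counting post-event frames."""
        self.event_frames = list(self.buffer)
        self.post_left = POST_EVENT_SEC * FPS
```

After a trigger, `event_frames` ends up holding 10 s of history plus 10 s of aftermath (300 frames at 15 fps).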


Data Flow Summary

Camera ──────────────────────────────────────────────────┐
                                                         ▼
                                            EmotionDetector (DeepFace)
GPS (sim/real) ──────────────────────────────────────────┤
                                                         ▼
Vehicle (sim/C++ bridge) ────────────────► RiskScorer (EMA + decay)
                                                         │
                                            DriverProfile (bonus)
                                                         │
                                            SpeedAdvisor (OSM limit)
                                                         │
                              ┌──────────────────────────┼──────────┐
                              ▼                          ▼          ▼
                         VoiceAlert               VideoRecorder    UI
                         (TTS queue)              (ring buffer)   (web/qt/cv)

Requirements

Software

  • Python 3.11+
  • mediapipe==0.10.9 (solutions API required)
  • deepface>=0.0.93
  • opencv-python>=4.8.0
  • Vehicle mode: GCC (for C++ bridge compilation)

Hardware (Vehicle Mode)

  • USB webcam or dashcam (OpenCV compatible)
  • NMEA GPS module (USB-UART or Bluetooth)
  • ELM327 OBD-II adapter (USB or Bluetooth)
  • Raspberry Pi 4 / x86 mini-PC (recommended: 4GB RAM+, quad-core)

Hardware (Simulation)

  • USB webcam
  • Modern CPU (for DeepFace; GPU optional)

Author

RERO

Software Developer · AI Researcher

| Platform | Link |
|---|---|
| GitHub | @WHY-RERO |
| YouTube | @why_reronuzzz |
| Instagram | @why_reronuzzz |
| RERO AI | reroai.com.tr |
| Axon Data Relations | axondatarel.org |
| R1 | r1.net.tr |

This project was designed and built from scratch by RERO.


License

Copyright (c) 2024 RERO — All Rights Reserved.

This software and all associated source code, assets, and documentation
("Software") are the exclusive property of RERO.

The following actions are STRICTLY PROHIBITED without prior written
permission from the owner:

  - Copying, reproducing, or redistributing the Software or any part of it
  - Using the Software for any commercial or non-commercial purpose
  - Modifying, refactoring, or creating derivative works
  - Sublicensing, selling, renting, or transferring the Software
  - Integrating the Software into any other system, product, or service
  - Reverse engineering, decompiling, or disassembling the Software
  - Removing or altering this license notice or copyright attribution

THE SOFTWARE IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND.
THE AUTHOR SHALL NOT BE LIABLE FOR ANY DAMAGES ARISING FROM ITS USE.

Contact for licensing or collaboration:
  GitHub : https://github.com/WHY-RERO
  Web    : https://reroai.com.tr

See LICENSE for the full license text.
