inf0cyte/edge_alpr (forked from suteny0r/ALPR)

ALPR System - License Plate Recognition with Vehicle Analytics


A comprehensive Automatic License Plate Recognition (ALPR) system with vehicle attribute detection, tracking, and analytics. Built with Python and CUDA acceleration for real-time performance.


Demo

ALPR demo: real-time vehicle detection and license plate recognition with tracking.

Sample Output:

Track 2: Plate=R183JF (0.52), Color=Black, Type=vehicle, Direction=Stationary, Dwell=4.3s

Features

✅ License Plate Detection & Recognition - EasyOCR with GPU acceleration
✅ Vehicle Detection - YOLOv8 detects cars, trucks, buses, motorcycles
✅ Vehicle Color Detection - 8 colors (Red, Blue, Green, Yellow, White, Black, Gray, Silver)
✅ Vehicle Make/Model - Extensible framework (placeholder included)
✅ Object Tracking - DeepSORT maintains vehicle IDs across frames
✅ Dwell Time Calculation - Seconds each vehicle spends in view
✅ Direction of Travel - 8 directions (N, NE, E, SE, S, SW, W, NW)
✅ Blur Quality Assessment - Flags blurry plates and faces
✅ Privacy Protection - Optional face blurring
✅ Multiple Input Sources - Webcam, video files, or RTSP streams
✅ Complete Data Logging - JSON metadata + annotated images
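
The Direction of Travel feature maps each track's centroid displacement between frames to one of the 8 compass points. A minimal sketch of one way to do that (the `classify_direction` helper and its 5-pixel dead zone are illustrative assumptions, not the repo's implementation):

```python
import math

DIRECTIONS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

def classify_direction(dx, dy, min_displacement=5.0):
    """Map a centroid displacement in image coordinates (y grows downward)
    to one of 8 compass directions, or "Stationary" for tiny movements."""
    if math.hypot(dx, dy) < min_displacement:
        return "Stationary"
    # Flip y so that "up" on screen maps to North, then bucket into 45° sectors.
    angle = math.degrees(math.atan2(-dy, dx)) % 360
    sector = int((angle + 22.5) // 45) % 8
    return DIRECTIONS[sector]
```

Averaging the displacement over several frames before classifying tends to give a steadier reading than per-frame deltas.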


System Requirements

Hardware

  • NVIDIA GPU with CUDA support (compute capability 3.5+)
  • Minimum 8GB RAM
  • USB camera or IP camera (RTSP stream)

Software

  • Python 3.10 or 3.11
  • CUDA Toolkit 11.8 or higher
  • cuDNN 8.6 or higher

Quick Start (Windows)

The easiest way to get started on Windows:

# Run the automated setup script
quick_start.bat

This will:

  1. Create a virtual environment
  2. Install all dependencies (including CUDA-enabled PyTorch)
  3. Verify your setup
  4. Offer to start the application

Manual Installation

1. Check CUDA Version

nvidia-smi

Look for "CUDA Version: X.X" in the output.

2. Install Python

Using pyenv (recommended):

# Install Python 3.10.11 (best compatibility)
pyenv install 3.10.11
pyenv local 3.10.11

# Verify
python --version

3. Create Virtual Environment

# Create venv
python -m venv venv

# Activate it
# Windows:
venv\Scripts\activate

# Linux/Mac:
source venv/bin/activate

You should see (venv) in your prompt.

4. Install PyTorch with CUDA

Match your CUDA version from step 1:

# CUDA 11.8
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118

# CUDA 12.1
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121

# CUDA 12.4 or newer (newer drivers run the 12.4 wheels)
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu124

# CPU only (no GPU)
pip install torch torchvision

5. Install Other Dependencies

pip install -r requirements.txt

This will take several minutes (downloads ~2-3GB of packages).

6. Verify Installation

python setup.py

This checks:

  • ✓ Python version
  • ✓ CUDA availability
  • ✓ All dependencies installed
  • ✓ Downloads YOLOv8 models
  • ✓ Tests camera access

Configuration

Edit config.yaml to configure the system:

Camera Settings

camera:
  source: 0  # 0 for webcam, or "rtsp://user:pass@ip:port/stream"
  width: 1920
  height: 1080
  fps: 30
  name: "Camera_1"

CUDA Settings

cuda:
  enabled: true
  device: 0  # GPU device ID (0 for first GPU)

Detection Thresholds

alpr:
  confidence_threshold: 0.5  # Minimum confidence for plate reading
  min_plate_width: 60
  min_plate_height: 20

vehicle:
  confidence_threshold: 0.4  # Minimum confidence for vehicle detection
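
Conceptually, these thresholds gate each candidate plate before it is accepted; a sketch of that check (`passes_thresholds` is an illustrative helper, not repo code, and assumes a `(x, y, w, h)` bbox):

```python
def passes_thresholds(det, conf_min=0.5, min_w=60, min_h=20):
    """Return True if a candidate plate detection meets the config minimums.
    det: dict with 'confidence' and 'bbox' = (x, y, w, h) in pixels."""
    x, y, w, h = det["bbox"]
    return det["confidence"] >= conf_min and w >= min_w and h >= min_h
```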

Usage

Running the System

# Make sure venv is activated
venv\Scripts\activate  # Windows
# source venv/bin/activate  # Linux/Mac

# Run the application
python main.py

The system will:

  1. ✓ Initialize CUDA and load models on GPU
  2. ✓ Open the camera/video source
  3. ✓ Start detecting vehicles and plates in real-time
  4. ✓ Display annotated video with overlays
  5. ✓ Save detections to detections/ folder

Keyboard Controls

  • q - Quit the application (when video window is active)

Input Sources

Webcam (Default):

# config.yaml
camera:
  source: 0  # 0 = first webcam, 1 = second, etc.

Video File:

camera:
  source: "D:\\alpr\\video.mp4"  # Absolute path
  # OR
  source: "traffic_video.mp4"  # Relative path

RTSP IP Camera:

camera:
  source: "rtsp://admin:password@192.168.1.100:554/stream"

Output

Console Output

Real-time logging shows detection events:

2025-10-09 21:33:48 | INFO | Using device: cuda:0
2025-10-09 21:33:48 | INFO | CUDA available: NVIDIA GeForce RTX 3090
2025-10-09 21:33:50 | SUCCESS | ALPR system initialized successfully
2025-10-09 21:33:50 | INFO | Starting ALPR system...
2025-10-09 21:33:56 | INFO | Track 2: Plate=R183JF (0.52), Color=Black, Type=vehicle, Direction=Stationary, Dwell=4.3s

Saved Files

The system automatically saves to the detections/ directory:

Images:

  • detection_20251009_213355_987612.jpg - Annotated frame with bounding boxes, plate text, colors, etc.

Metadata:

  • detection_20251009_213355_987612.json - Structured data for each detection
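
The timestamped stem appears to follow a `strftime` pattern; a sketch that reproduces the sample names above (`detection_basename` is hypothetical, not a repo function):

```python
from datetime import datetime

def detection_basename(ts=None):
    """Build the detection_<YYYYMMDD>_<HHMMSS>_<microseconds> stem shared by
    the paired .jpg / .json files (format inferred from the sample names)."""
    ts = ts or datetime.now()
    return ts.strftime("detection_%Y%m%d_%H%M%S_%f")
```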

JSON Format Example

{
  "timestamp": "2025-10-09T21:33:55.987612",
  "detections": [
    {
      "track_id": "2",
      "bbox": [23.6, 78.4, 501.8, 449.3],
      "vehicle_type": "vehicle",
      "plate": {
        "text": "R183JF",
        "confidence": 0.52,
        "bbox": [74, 182, 145, 34]
      },
      "color": "Black",
      "make": "Unknown",
      "model": "Unknown",
      "dwell_time_seconds": 4.3,
      "direction": "Stationary",
      "plate_blur_score": 89.4,
      "plate_is_blurry": false
    }
  ]
}
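
Downstream scripts can consume these files with the standard `json` module. A small sketch that keeps only sharp plate reads (`sharp_plates` is illustrative; field names match the JSON example above):

```python
import json

def sharp_plates(json_text):
    """Collect (track_id, plate_text, confidence) for detections whose
    plate was read and passed the blur check."""
    record = json.loads(json_text)
    hits = []
    for det in record.get("detections", []):
        plate = det.get("plate")
        if plate and not det.get("plate_is_blurry", True):
            hits.append((det["track_id"], plate["text"], plate["confidence"]))
    return hits
```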

Visual Output

The display window shows:

  • Green boxes - Vehicle detections
  • Track IDs - Persistent vehicle identifiers
  • Yellow text - License plate (if detected and sharp)
  • Orange text - Blurry plate warning
  • Vehicle info - Color, type, direction, dwell time
  • Statistics - Total detections and plate count

Project Structure

alpr/
├── main.py                 # Main application entry point
├── config.yaml            # Configuration file
├── requirements.txt       # Python dependencies
├── README.md             # This file
├── src/
│   ├── __init__.py
│   ├── camera_manager.py      # Camera capture and management
│   ├── alpr_detector.py       # License plate detection and OCR
│   ├── vehicle_detector.py    # Vehicle detection and attributes
│   ├── tracker.py            # Object tracking (DeepSORT)
│   ├── direction_analyzer.py  # Direction of travel analysis
│   ├── blur_detector.py      # Blur detection for quality control
│   └── visualizer.py         # Visualization and output
├── logs/                 # Application logs
└── detections/          # Saved detections (images + JSON)

Performance Optimization

GPU Memory Usage

If you encounter GPU memory issues, try:

  1. Use a smaller YOLO model:

    vehicle:
      model: "yolov8n.pt"  # Nano model (fastest, less accurate)
  2. Reduce camera resolution:

    camera:
      width: 1280
      height: 720

Processing Speed

Expected performance on RTX 3060:

  • ~30 FPS with YOLOv8n (nano)
  • ~20 FPS with YOLOv8m (medium)
  • ~10 FPS with YOLOv8l (large)
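
To benchmark your own hardware, you can time the per-frame call directly; a generic sketch (`measure_fps` is an illustrative helper, not part of the repo, and `process_frame` stands in for the pipeline's per-frame method):

```python
import time

def measure_fps(process_frame, frames, warmup=3):
    """Rough throughput check: run a few warmup frames so caches settle,
    then time process_frame over the rest and return frames per second."""
    for f in frames[:warmup]:
        process_frame(f)
    start = time.perf_counter()
    for f in frames[warmup:]:
        process_frame(f)
    elapsed = time.perf_counter() - start
    timed = len(frames) - warmup
    return timed / elapsed if elapsed > 0 else float("inf")
```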

Configuration Reference

A fuller reference for the settings in config.yaml:

Camera Settings

camera:
  source: 0  # Webcam index, video path, or RTSP URL
  width: 1920  # Requested resolution (may differ from actual)
  height: 1080
  fps: 30

Detection Thresholds

vehicle:
  confidence_threshold: 0.4  # Lower = more detections (+ false positives)
  model: "yolov8n.pt"  # n=fast, m=balanced, l=accurate

alpr:
  confidence_threshold: 0.5  # OCR confidence minimum
  min_plate_width: 60  # Minimum pixels (adjust for camera distance)
  min_plate_height: 20

Tracking Parameters

tracking:
  max_age: 30  # Frames to keep track without detection
  min_hits: 3  # Detections needed to confirm track
  iou_threshold: 0.3  # Matching strictness
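
For intuition, `iou_threshold` compares boxes with a score like the one below; trackers in the DeepSORT family use such a score to match new detections to existing tracks (this is a generic IoU sketch, not the project's exact matcher):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes, in [0, 1].
    1.0 means identical boxes; 0.0 means no overlap."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0
```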

Blur Detection

blur:
  face_threshold: 100  # Laplacian variance threshold
  plate_threshold: 50  # Lower = more permissive
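
These thresholds are Laplacian-variance scores: a sharp crop has strong edges, so its Laplacian response varies a lot; a blurry one is nearly flat. A dependency-free sketch of the idea (the project presumably uses OpenCV's `cv2.Laplacian(img, cv2.CV_64F).var()`; this toy version takes a grayscale image as a list of rows of ints):

```python
def laplacian_variance(gray):
    """Variance of a 3x3 Laplacian response over a grayscale image.
    Low variance = few edges = likely blurry."""
    h, w = len(gray), len(gray[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour Laplacian kernel: [[0,1,0],[1,-4,1],[0,1,0]]
            lap = (gray[y - 1][x] + gray[y + 1][x]
                   + gray[y][x - 1] + gray[y][x + 1]
                   - 4 * gray[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)
```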

Troubleshooting

CUDA Not Available

Check CUDA:

nvidia-smi
python -c "import torch; print(torch.cuda.is_available())"

Fix:

# Uninstall and reinstall PyTorch with matching CUDA version
pip uninstall torch torchvision
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu124

Camera/Video Not Opening

Symptoms: a "Failed to open camera source" error in the logs

Solutions:

# Test camera
python -c "import cv2; cap = cv2.VideoCapture(0); print(cap.isOpened())"

# Try different indices
source: 1  # or 2, 3...

# For video files, use absolute path
source: "D:\\alpr\\video.mp4"

Poor OCR Accuracy

Solutions:

  1. Increase preprocessing scale in src/alpr_detector.py:120:

    scale_factor = 3  # From 2 to 3
  2. Adjust plate size minimums:

    alpr:
      min_plate_width: 80  # Increase from 60
      min_plate_height: 30  # Increase from 20
  3. Filter by blur score:

    blur:
      plate_threshold: 100  # Reject blurrier plates

Slow Performance / Low FPS

Use faster model:

vehicle:
  model: "yolov8n.pt"  # Fastest (30 FPS on RTX 3090)

Reduce resolution:

camera:
  width: 1280
  height: 720

Process every other frame (modify main.py:run()):

frame_count += 1
if frame_count % 2 == 0:
    results, tracks = self.process_frame(frame)
# otherwise reuse the previous results/tracks for display

Tracks Switching IDs

Symptoms: Same vehicle gets different IDs

Solutions:

tracking:
  max_age: 60  # Increase from 30
  iou_threshold: 0.2  # Decrease from 0.3 (more lenient matching, so tracks re-associate more easily)

Advanced Features

Make/Model Detection

To enable vehicle make/model detection, you'll need to integrate a pre-trained model:

  1. Download a vehicle make/model classifier (e.g., from CompCars dataset)
  2. Update src/vehicle_detector.py in the _detect_make_model() method
  3. Load and run inference with the model

Database Integration

Enable database storage in config.yaml:

database:
  enabled: true
  connection_string: "sqlite:///alpr.db"

Then implement the database schema and logging in a new src/database.py module.
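
A minimal `sqlite3` sketch of what such a module could contain; the schema and field names below are assumptions inferred from the JSON output format, not an existing repo interface:

```python
import sqlite3

def init_db(path=":memory:"):
    """Create a flat detections table mirroring the per-detection JSON fields."""
    con = sqlite3.connect(path)
    con.execute("""CREATE TABLE IF NOT EXISTS detections (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        timestamp TEXT NOT NULL,
        track_id TEXT,
        plate_text TEXT,
        plate_confidence REAL,
        color TEXT,
        direction TEXT,
        dwell_seconds REAL)""")
    return con

def log_detection(con, ts, det):
    """Insert one detection dict (shaped like the JSON example) as a row."""
    con.execute(
        "INSERT INTO detections (timestamp, track_id, plate_text, "
        "plate_confidence, color, direction, dwell_seconds) "
        "VALUES (?, ?, ?, ?, ?, ?, ?)",
        (ts, det["track_id"], det["plate"]["text"], det["plate"]["confidence"],
         det["color"], det["direction"], det["dwell_time_seconds"]))
    con.commit()
```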

Future Enhancements

  • Multi-camera support (2+ cameras)
  • Vehicle make/model detection with pre-trained model
  • Database integration for long-term storage
  • Web dashboard for monitoring
  • Real-time alerts and notifications
  • License plate region validation (US states, EU countries)
  • Speed estimation with camera calibration
  • Integration with external APIs (DMV lookup, etc.)

Contributing

We welcome contributions! Please see CONTRIBUTING.md for details on:

  • Setting up your development environment
  • Code style guidelines
  • Submitting pull requests
  • Reporting issues

License

This project is licensed under the MIT License - see the LICENSE file for details.

Note: This software is provided for educational and research purposes. Users are responsible for ensuring compliance with local laws and regulations regarding video surveillance and data collection.

Acknowledgments

  • YOLOv8 by Ultralytics
  • EasyOCR by JaidedAI
  • DeepSORT implementation
  • OpenCV community

Support

If you encounter any issues:

  1. Check the documentation (this README and the comments in config.yaml)

  2. Debug steps:

    • Run python setup.py to verify installation
    • Check logs in logs/ directory
    • Verify CUDA compatibility: nvidia-smi
  3. Report issues:

    • Open an issue on GitHub with:
      • Error message and stack trace
      • System information (GPU, CUDA version, Python version)
      • Configuration file (config.yaml)
      • Log files from logs/ directory


⭐ If you find this project useful, please consider giving it a star on GitHub!
