A comprehensive Automatic License Plate Recognition (ALPR) system with vehicle attribute detection, tracking, and analytics. Built with Python and CUDA acceleration for real-time performance.
- Features
- Demo
- System Requirements
- Quick Start
- Installation
- Configuration
- Usage
- Output
- Performance
- Troubleshooting
- Advanced Features
- Contributing
- License
- Acknowledgments
Real-time vehicle detection and license plate recognition with tracking
Sample Output:

```
Track 2: Plate=R183JF (0.52), Color=Black, Type=vehicle, Direction=Stationary, Dwell=4.3s
```
- ✅ License Plate Detection & Recognition - EasyOCR with GPU acceleration
- ✅ Vehicle Detection - YOLOv8 detects cars, trucks, buses, motorcycles
- ✅ Vehicle Color Detection - 8 colors (Red, Blue, Green, Yellow, White, Black, Gray, Silver)
- ✅ Vehicle Make/Model - Extensible framework (placeholder included)
- ✅ Object Tracking - DeepSORT maintains vehicle IDs across frames
- ✅ Dwell Time Calculation - Seconds each vehicle spends in view
- ✅ Direction of Travel - 8 directions (N, NE, E, SE, S, SW, W, NW)
- ✅ Blur Quality Assessment - Flags blurry plates and faces
- ✅ Privacy Protection - Optional face blurring
- ✅ Multiple Input Sources - Webcam, video files, or RTSP streams
- ✅ Complete Data Logging - JSON metadata + annotated images
- NVIDIA GPU with CUDA support (compute capability 3.5+)
- Minimum 8GB RAM
- USB camera or IP camera (RTSP stream)
- Python 3.10 or 3.11
- CUDA Toolkit 11.8 or higher
- cuDNN 8.6 or higher
The easiest way to get started on Windows:
```bat
# Run the automated setup script
quick_start.bat
```

This will:
- Create a virtual environment
- Install all dependencies (including CUDA-enabled PyTorch)
- Verify your setup
- Offer to start the application
First, check which CUDA version your driver supports:

```bash
nvidia-smi
```

Look for "CUDA Version: X.X" in the output.
Using pyenv (recommended):
```bash
# Install Python 3.10.11 (best compatibility)
pyenv install 3.10.11
pyenv local 3.10.11

# Verify
python --version
```

Create and activate a virtual environment:

```bash
# Create venv
python -m venv venv

# Activate it
# Windows:
venv\Scripts\activate
# Linux/Mac:
source venv/bin/activate
```

You should see `(venv)` in your prompt.
Match your CUDA version from step 1:
```bash
# CUDA 11.8
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118

# CUDA 12.1
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121

# CUDA 12.4 or newer
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu124

# CPU only (no GPU)
pip install torch torchvision
```

Then install the remaining dependencies:

```bash
pip install -r requirements.txt
```

This will take several minutes (it downloads roughly 2-3 GB of packages).
```bash
python setup.py
```

This checks:
- ✓ Python version
- ✓ CUDA availability
- ✓ All dependencies installed
- ✓ Downloads YOLOv8 models
- ✓ Tests camera access
Edit config.yaml to configure the system:
```yaml
camera:
  source: 0          # 0 for webcam, or "rtsp://user:pass@ip:port/stream"
  width: 1920
  height: 1080
  fps: 30
  name: "Camera_1"

cuda:
  enabled: true
  device: 0          # GPU device ID (0 for first GPU)

alpr:
  confidence_threshold: 0.5   # Minimum confidence for plate reading
  min_plate_width: 60
  min_plate_height: 20

vehicle:
  confidence_threshold: 0.4   # Minimum confidence for vehicle detection
```

To run the system:

```bash
# Make sure venv is activated
venv\Scripts\activate            # Windows
# source venv/bin/activate       # Linux/Mac

# Run the application
python main.py
```

The system will:
- ✓ Initialize CUDA and load models on GPU
- ✓ Open the camera/video source
- ✓ Start detecting vehicles and plates in real-time
- ✓ Display annotated video with overlays
- ✓ Save detections to the `detections/` folder

Press `q` to quit the application (when the video window is active).
Webcam (Default):
```yaml
# config.yaml
camera:
  source: 0   # 0 = first webcam, 1 = second, etc.
```

Video File:

```yaml
camera:
  source: "D:\\alpr\\video.mp4"   # Absolute path
  # OR
  source: "traffic_video.mp4"     # Relative path
```

RTSP IP Camera:

```yaml
camera:
  source: "rtsp://admin:password@192.168.1.100:554/stream"
```

Real-time logging shows detection events:
```
2025-10-09 21:33:48 | INFO | Using device: cuda:0
2025-10-09 21:33:48 | INFO | CUDA available: NVIDIA GeForce RTX 3090
2025-10-09 21:33:50 | SUCCESS | ALPR system initialized successfully
2025-10-09 21:33:50 | INFO | Starting ALPR system...
2025-10-09 21:33:56 | INFO | Track 2: Plate=R183JF (0.52), Color=Black, Type=vehicle, Direction=Stationary, Dwell=4.3s
```
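If you want to post-process these console logs, the `Track ...` lines are regular enough to parse with a regex. The pattern below is an assumption based on the sample line above, not something the project ships:

```python
import re

LINE = ("2025-10-09 21:33:56 | INFO | Track 2: Plate=R183JF (0.52), "
        "Color=Black, Type=vehicle, Direction=Stationary, Dwell=4.3s")

# Field names mirror the log line; adjust if your log format differs.
PATTERN = re.compile(
    r"Track (?P<track>\d+): Plate=(?P<plate>\S+) \((?P<conf>[\d.]+)\), "
    r"Color=(?P<color>\w+), Type=(?P<vtype>\w+), "
    r"Direction=(?P<direction>\w+), Dwell=(?P<dwell>[\d.]+)s"
)

m = PATTERN.search(LINE)
event = {k: m.group(k) for k in ("track", "plate", "conf", "color", "direction", "dwell")}
print(event)
```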
The system automatically saves to the detections/ directory:
Images:

- `detection_20251009_213355_987612.jpg` - Annotated frame with bounding boxes, plate text, colors, etc.

Metadata:

- `detection_20251009_213355_987612.json` - Structured data for each detection
```json
{
  "timestamp": "2025-10-09T21:33:55.987612",
  "detections": [
    {
      "track_id": "2",
      "bbox": [23.6, 78.4, 501.8, 449.3],
      "vehicle_type": "vehicle",
      "plate": {
        "text": "R183JF",
        "confidence": 0.52,
        "bbox": [74, 182, 145, 34]
      },
      "color": "Black",
      "make": "Unknown",
      "model": "Unknown",
      "dwell_time_seconds": 4.3,
      "direction": "Stationary",
      "plate_blur_score": 89.4,
      "plate_is_blurry": false
    }
  ]
}
```

The display window shows:
- Green boxes - Vehicle detections
- Track IDs - Persistent vehicle identifiers
- Yellow text - License plate (if detected and sharp)
- Orange text - Blurry plate warning
- Vehicle info - Color, type, direction, dwell time
- Statistics - Total detections and plate count
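Because each detection is saved as standalone JSON, post-processing is straightforward. The sketch below collects high-confidence, non-blurry plate reads from the `detections/` folder; the field names follow the metadata example above, and `min_conf` is an illustrative parameter:

```python
import json
from pathlib import Path

def confident_plates(records, min_conf=0.6):
    """Yield (track_id, plate_text, confidence) for detections whose
    plate read meets the confidence floor and is not flagged blurry."""
    for rec in records:
        for det in rec.get("detections", []):
            plate = det.get("plate")
            if plate and plate["confidence"] >= min_conf and not det.get("plate_is_blurry", False):
                yield det["track_id"], plate["text"], plate["confidence"]

# Load every metadata file the system has written so far.
records = [json.loads(p.read_text()) for p in Path("detections").glob("*.json")]
for track_id, text, conf in confident_plates(records):
    print(f"Track {track_id}: {text} ({conf:.2f})")
```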
```
alpr/
├── main.py                   # Main application entry point
├── config.yaml               # Configuration file
├── requirements.txt          # Python dependencies
├── README.md                 # This file
├── src/
│   ├── __init__.py
│   ├── camera_manager.py     # Camera capture and management
│   ├── alpr_detector.py      # License plate detection and OCR
│   ├── vehicle_detector.py   # Vehicle detection and attributes
│   ├── tracker.py            # Object tracking (DeepSORT)
│   ├── direction_analyzer.py # Direction of travel analysis
│   ├── blur_detector.py      # Blur detection for quality control
│   └── visualizer.py         # Visualization and output
├── logs/                     # Application logs
└── detections/               # Saved detections (images + JSON)
```
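The direction-of-travel logic in `src/direction_analyzer.py` amounts to bucketing a track's centroid displacement into 45° compass sectors. A minimal, dependency-free sketch (the actual module may differ in details such as the displacement threshold):

```python
import math

DIRECTIONS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

def classify_direction(dx: float, dy: float, min_displacement: float = 5.0) -> str:
    """Map a centroid displacement (image coordinates, y grows downward)
    to one of 8 compass directions, or 'Stationary' below a threshold."""
    if math.hypot(dx, dy) < min_displacement:
        return "Stationary"
    # Negate dy so 'up' on screen corresponds to North.
    angle = math.degrees(math.atan2(-dy, dx)) % 360
    # Offset by 22.5 so each 45-degree sector is centered on a compass point.
    return DIRECTIONS[int(((angle + 22.5) % 360) // 45)]
```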
If you encounter GPU memory issues, try:
1. Use a smaller YOLO model:

   ```yaml
   vehicle:
     model: "yolov8n.pt"   # Nano model (fastest, less accurate)
   ```

2. Reduce camera resolution:

   ```yaml
   camera:
     width: 1280
     height: 720
   ```
Expected performance on RTX 3060:
- ~30 FPS with YOLOv8n (nano)
- ~20 FPS with YOLOv8m (medium)
- ~10 FPS with YOLOv8l (large)
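To check what your own hardware achieves, you can wrap a rolling FPS counter around the per-frame processing loop. A small sketch; `FPSCounter` is illustrative, not part of the project:

```python
import time
from collections import deque

class FPSCounter:
    """Rolling-average FPS over the last `window` frames."""
    def __init__(self, window: int = 30):
        self.times = deque(maxlen=window)

    def tick(self) -> None:
        # Call once per processed frame.
        self.times.append(time.perf_counter())

    @property
    def fps(self) -> float:
        if len(self.times) < 2:
            return 0.0
        elapsed = self.times[-1] - self.times[0]
        return (len(self.times) - 1) / elapsed if elapsed > 0 else 0.0
```

In the main loop, call `counter.tick()` after each `process_frame` and overlay `counter.fps` on the display.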
All settings are in config.yaml:
```yaml
camera:
  source: 0            # Webcam index, video path, or RTSP URL
  width: 1920          # Requested resolution (may differ from actual)
  height: 1080
  fps: 30

vehicle:
  confidence_threshold: 0.4   # Lower = more detections (+ false positives)
  model: "yolov8n.pt"         # n=fast, m=balanced, l=accurate

alpr:
  confidence_threshold: 0.5   # OCR confidence minimum
  min_plate_width: 60         # Minimum pixels (adjust for camera distance)
  min_plate_height: 20

tracking:
  max_age: 30          # Frames to keep track without detection
  min_hits: 3          # Detections needed to confirm track
  iou_threshold: 0.3   # Matching strictness

blur:
  face_threshold: 100  # Laplacian variance threshold
  plate_threshold: 50  # Lower = more permissive
```

Check CUDA:
```bash
nvidia-smi
python -c "import torch; print(torch.cuda.is_available())"
```

Fix:

```bash
# Uninstall and reinstall PyTorch with matching CUDA version
pip uninstall torch torchvision
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu124
```

Symptoms: `Failed to open camera source`
Solutions:
```bash
# Test camera
python -c "import cv2; cap = cv2.VideoCapture(0); print(cap.isOpened())"
```

```yaml
# Try different indices
source: 1   # or 2, 3...

# For video files, use absolute path
source: "D:\\alpr\\video.mp4"
```

Solutions:
1. Increase the preprocessing scale in `src/alpr_detector.py:120`:

   ```python
   scale_factor = 3   # From 2 to 3
   ```

2. Adjust plate size minimums:

   ```yaml
   alpr:
     min_plate_width: 80    # Increase from 60
     min_plate_height: 30   # Increase from 20
   ```

3. Filter by blur score:

   ```yaml
   blur:
     plate_threshold: 100   # Reject blurrier plates
   ```
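The blur scores compared against these thresholds are Laplacian variances (higher = sharper). The project computes them with OpenCV; below is a dependency-free sketch of the same measure, equivalent in spirit to `cv2.Laplacian(gray, cv2.CV_64F).var()`:

```python
def laplacian_variance(gray):
    """Variance of the 4-neighbour Laplacian of a grayscale image,
    given as a list of rows of ints (must be at least 3x3).
    Flat regions give 0; sharp edges push the variance up."""
    h, w = len(gray), len(gray[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y-1][x] + gray[y+1][x] + gray[y][x-1] + gray[y][x+1]
                   - 4 * gray[y][x])
            vals.append(lap)
    if not vals:
        return 0.0
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)
```

A uniformly gray plate crop scores near 0 and gets rejected; a crisp plate with strong character edges scores well above `plate_threshold`.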
Use a faster model:

```yaml
vehicle:
  model: "yolov8n.pt"   # Fastest (30 FPS on RTX 3090)
```

Reduce resolution:

```yaml
camera:
  width: 1280
  height: 720
```

Process every other frame (modify `main.py:run()`):

```python
if frame_count % 2 == 0:
    results, tracks = self.process_frame(frame)
```

Symptoms: Same vehicle gets different IDs
Solutions:
```yaml
tracking:
  max_age: 60          # Increase from 30
  iou_threshold: 0.2   # Decrease from 0.3 (more lenient matching)
```

To enable vehicle make/model detection, you'll need to integrate a pre-trained model:

- Download a vehicle make/model classifier (e.g., from the CompCars dataset)
- Update `src/vehicle_detector.py` in the `_detect_make_model()` method
- Load and run inference with the model
Enable database storage in config.yaml:
```yaml
database:
  enabled: true
  connection_string: "sqlite:///alpr.db"
```

Then implement the database schema and logging in a new `src/database.py` module.
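As a starting point for that module, here is a minimal `sqlite3` sketch. The schema and column names are hypothetical, chosen to mirror the JSON metadata fields; the real module would likely add indexes and batching:

```python
import sqlite3

# Hypothetical schema for src/database.py; illustrative, not shipped.
SCHEMA = """
CREATE TABLE IF NOT EXISTS detections (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    timestamp TEXT NOT NULL,
    track_id TEXT,
    plate_text TEXT,
    plate_confidence REAL,
    color TEXT,
    dwell_seconds REAL
)
"""

def log_detection(conn, rec):
    """Insert one detection record (same shape as the saved JSON)."""
    conn.execute(
        "INSERT INTO detections (timestamp, track_id, plate_text, "
        "plate_confidence, color, dwell_seconds) VALUES (?, ?, ?, ?, ?, ?)",
        (rec["timestamp"], rec["track_id"], rec["plate"]["text"],
         rec["plate"]["confidence"], rec["color"], rec["dwell_time_seconds"]),
    )
    conn.commit()

conn = sqlite3.connect("alpr.db")
conn.execute(SCHEMA)
```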
Planned enhancements:

- Multi-camera support (2+ cameras)
- Vehicle make/model detection with pre-trained model
- Database integration for long-term storage
- Web dashboard for monitoring
- Real-time alerts and notifications
- License plate region validation (US states, EU countries)
- Speed estimation with camera calibration
- Integration with external APIs (DMV lookup, etc.)
We welcome contributions! Please see CONTRIBUTING.md for details on:
- Setting up your development environment
- Code style guidelines
- Submitting pull requests
- Reporting issues
This project is licensed under the MIT License - see the LICENSE file for details.
Note: This software is provided for educational and research purposes. Users are responsible for ensuring compliance with local laws and regulations regarding video surveillance and data collection.
- YOLOv8 by Ultralytics
- EasyOCR by JaidedAI
- DeepSORT implementation
- OpenCV community
If you encounter any issues:
1. Check the documentation:
   - README.md - Installation and usage
   - QUICK_REFERENCE.md - Command cheat sheet
   - CLAUDE.md - Technical deep dive

2. Debug steps:
   - Run `python setup.py` to verify installation
   - Check logs in the `logs/` directory
   - Verify CUDA compatibility: `nvidia-smi`

3. Report issues:
   - Open an issue on GitHub with:
     - Error message and stack trace
     - System information (GPU, CUDA version, Python version)
     - Configuration file (`config.yaml`)
     - Log files from the `logs/` directory
- GitHub Repository: https://github.com/suteny0r/ALPR
- Issues: https://github.com/suteny0r/ALPR/issues
- Email: suteny0r@gmail.com
⭐ If you find this project useful, please consider giving it a star on GitHub!