An intelligent real-time wildfire detection system powered by YOLOv8 deep learning model. Detect fires in images and video streams with high accuracy and automated alert capabilities.
Real-time fire detection in action
|
|
| Metric | Value | Hardware |
|---|---|---|
| mAP@0.5 | 92.3% | - |
| Precision | 89.4% | - |
| Recall | 91.2% | - |
| Inference Time | 45ms | GPU (RTX 3090) |
| Inference Time | 180ms | CPU (i7-12700) |
| FPS | 125 | GPU |
| Model Size | 6.2 MB | YOLOv8n |
βββββββββββββββββββββββββββ
β Input Sources β
β (Image/Video/Camera) β
βββββββββββββ¬ββββββββββββββ
β
βΌ
βββββββββββββββββββββββββββ
β Preprocessing β
β (Resize, Normalize) β
βββββββββββββ¬ββββββββββββββ
β
βΌ
βββββββββββββββββββββββββββ
β YOLOv8 Model β
β (Fire Detection) β
βββββββββββββ¬ββββββββββββββ
β
βΌ
βββββββββββββββββββββββββββ
β Post-processing β
β (NMS, Filtering) β
βββββββββββββ¬ββββββββββββββ
β
βΌ
βββββββββββββββββββββββββββ
β FastAPI Backend β
β (REST API Endpoints) β
βββββββββββββ¬ββββββββββββββ
β
βΌ
βββββββββββββββββββββββββββ
β Alert System β
β (Email/SMS/Dashboard) β
βββββββββββββββββββββββββββ
- Python 3.9 or higher
- pip package manager
- (Optional) CUDA-capable GPU for faster inference
- (Optional) Webcam for live detection
-
Clone the repository
git clone https://github.com/SalimTag/firedetection.git cd firedetection -
Install dependencies
pip install -r requirements.txt
-
Download the trained model (if not included)
# Option 1: Download from releases wget https://github.com/SalimTag/firedetection/releases/download/v1.0/fire_yolov8.pt -P models/ # Option 2: Train your own model (see Training section)
-
Configure environment variables
cp .env.example .env # Edit .env with your configuration
uvicorn main:app --reload --host 0.0.0.0 --port 8000Access the API:
- API Server: http://localhost:8000
- Interactive Docs: http://localhost:8000/docs
- Alternative Docs: http://localhost:8000/redoc
from ultralytics import YOLO
# Load model
model = YOLO('models/fire_yolov8.pt')
# Run inference
results = model('path/to/image.jpg')
# Display results
results[0].show()Detect fire in an image:
curl -X POST "http://localhost:8000/api/detect" \
-F "file=@forest_fire.jpg"Response:
{
"detections": [
{
"class": "fire",
"confidence": 0.87,
"bbox": [120, 45, 340, 280]
}
],
"processing_time_ms": 45,
"alert_sent": true
}Basic detection:
import requests
# Upload and detect
with open('image.jpg', 'rb') as f:
response = requests.post(
'http://localhost:8000/api/detect',
files={'file': f}
)
result = response.json()
print(f"Found {len(result['detections'])} fire instances")
print(f"Confidence: {result['detections'][0]['confidence']:.2%}")Live camera feed:
import cv2
import requests
from io import BytesIO
cap = cv2.VideoCapture(0) # 0 for webcam
while True:
ret, frame = cap.read()
if not ret:
break
# Encode frame
_, buffer = cv2.imencode('.jpg', frame)
img_bytes = BytesIO(buffer)
# Send to API
response = requests.post(
'http://localhost:8000/api/detect',
files={'file': ('frame.jpg', img_bytes, 'image/jpeg')}
)
detections = response.json()['detections']
# Draw bounding boxes
for det in detections:
x1, y1, x2, y2 = det['bbox']
cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
label = f"Fire {det['confidence']:.2f}"
cv2.putText(frame, label, (x1, y1-10),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
cv2.imshow('Fire Detection', frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
cap.release()
cv2.destroyAllWindows()from ultralytics import YOLO
model = YOLO('models/fire_yolov8.pt')
# Process video
results = model('forest_surveillance.mp4', save=True)
# Results saved to runs/detect/predict/firedetection/
βββ backend/
β βββ main.py # FastAPI application entry point
β βββ routers/
β β βββ detection.py # Detection endpoints
β β βββ alerts.py # Alert management
β βββ services/
β β βββ detector.py # YOLOv8 detection service
β β βββ alert_service.py # Notification service
β β βββ video_processor.py # Video stream processing
β βββ models/
β β βββ fire_yolov8.pt # Trained model weights
β βββ utils/
β β βββ image_utils.py # Image preprocessing
β β βββ visualization.py # Result visualization
β βββ requirements.txt # Python dependencies
βββ frontend/ # (Optional) Web interface
βββ notebooks/
β βββ training.ipynb # Model training notebook
β βββ evaluation.ipynb # Performance evaluation
βββ docs/
β βββ demo.gif # Demo animation
β βββ screenshots/ # UI screenshots
βββ tests/
β βββ test_detector.py # Unit tests
βββ .env.example # Environment variables template
βββ .gitignore
βββ docker-compose.yml # Docker deployment
βββ Dockerfile
βββ LICENSE
βββ README.md
-
Organize your dataset:
dataset/ βββ images/ β βββ train/ β βββ val/ β βββ test/ βββ labels/ βββ train/ βββ val/ βββ test/ -
Create dataset configuration (
fire_dataset.yaml):path: ./dataset train: images/train val: images/val test: images/test nc: 1 # number of classes names: ['fire']
from ultralytics import YOLO
# Load pretrained model
model = YOLO('yolov8n.pt') # nano (fastest)
# or model = YOLO('yolov8m.pt') # medium (more accurate)
# Train
results = model.train(
data='fire_dataset.yaml',
epochs=100,
imgsz=640,
batch=16,
device=0, # GPU device (or 'cpu')
patience=20,
save=True,
name='fire_detection'
)
# Evaluate
metrics = model.val()
print(f"mAP@0.5: {metrics.box.map50}")
print(f"mAP@0.5:0.95: {metrics.box.map}")
# Export (optional)
model.export(format='onnx') # for deploymentThe dataset preparation code is available in the separate repository: π¦ Fire-Detection Repository
Features:
- Automated data augmentation (rotation, flip, brightness, etc.)
- Dataset splitting utilities
- Label format conversion
- Data quality validation
# Build and start services
docker-compose up -d
# View logs
docker-compose logs -f
# Stop services
docker-compose down# Build image
docker build -t firedetection:latest .
# Run container
docker run -p 8000:8000 firedetection:latestdocker-compose.yml:
version: '3.8'
services:
api:
build: .
ports:
- "8000:8000"
environment:
- MODEL_PATH=/app/models/fire_yolov8.pt
- CONFIDENCE_THRESHOLD=0.5
volumes:
- ./models:/app/models
restart: unless-stoppedCreate a .env file in the project root:
# Model Configuration
MODEL_PATH=models/fire_yolov8.pt
CONFIDENCE_THRESHOLD=0.5
IOU_THRESHOLD=0.45
# API Configuration
API_HOST=0.0.0.0
API_PORT=8000
CORS_ORIGINS=*
# Alert Configuration
ENABLE_ALERTS=true
ALERT_EMAIL=alerts@example.com
# SMTP Settings (for email alerts)
SMTP_SERVER=smtp.gmail.com
SMTP_PORT=587
SMTP_USERNAME=your-email@gmail.com
SMTP_PASSWORD=your-app-password
# Optional: Twilio SMS Alerts
TWILIO_ACCOUNT_SID=your-account-sid
TWILIO_AUTH_TOKEN=your-auth-token
TWILIO_PHONE_NUMBER=+1234567890
ALERT_PHONE_NUMBER=+1234567890# Install test dependencies
pip install pytest pytest-cov
# Run all tests
pytest tests/ -v
# Run with coverage
pytest tests/ --cov=. --cov-report=html# tests/test_detector.py
import pytest
from services.detector import FireDetector
import numpy as np
def test_detector_initialization():
detector = FireDetector('models/fire_yolov8.pt')
assert detector.confidence_threshold == 0.5
def test_fire_detection():
detector = FireDetector('models/fire_yolov8.pt')
test_image = np.zeros((640, 640, 3), dtype=np.uint8)
results = detector.detect_fire(test_image)
assert isinstance(results, list)| Method | Endpoint | Description |
|---|---|---|
| GET | / |
Health check |
| POST | /api/detect |
Detect fire in single image |
| POST | /api/detect-video |
Process video file |
| POST | /api/detect-stream |
Process live stream |
| GET | /api/alerts |
Get alert history |
| GET | /api/stats |
Get detection statistics |
curl -X POST "http://localhost:8000/api/detect" \
-H "accept: application/json" \
-F "file=@test_image.jpg" \
-F "confidence_threshold=0.6"{
"success": true,
"detections": [
{
"class": "fire",
"confidence": 0.87,
"bbox": [120, 45, 340, 280],
"area": 51700,
"timestamp": "2025-02-17T10:30:00Z"
}
],
"image_size": [1920, 1080],
"processing_time_ms": 45,
"model_version": "YOLOv8n",
"alert_sent": true
}- π² Forest Fire Monitoring - Integrate with surveillance cameras
- π Industrial Safety - Monitor factories and warehouses
- ποΈ Smart Cities - City-wide fire detection network
- π Drone Surveillance - Aerial fire detection
- π’ Building Safety - Commercial building monitoring
- π¬ Research - Computer vision and AI research
Low detection accuracy
- Adjust
CONFIDENCE_THRESHOLDin.env(try 0.3-0.7) - Ensure good lighting in input images
- Retrain model with more diverse dataset
- Check if fire type is represented in training data
Slow inference speed
- Use GPU instead of CPU (10x faster)
- Reduce image resolution
- Use YOLOv8n (nano) instead of larger models
- Enable batch processing for multiple images
False positives (detecting non-fire objects)
- Increase
CONFIDENCE_THRESHOLD - Add more negative examples to training data
- Implement temporal filtering for video streams
- Fine-tune model on your specific environment
API not accessible
- Check if port 8000 is available:
netstat -an | grep 8000 - Verify firewall settings
- Try different host:
--host 127.0.0.1or--host 0.0.0.0 - Check logs for error messages
Contributions are welcome! Please follow these guidelines:
- Fork the repository
- Create a feature branch (
git checkout -b feature/AmazingFeature) - Commit your changes (
git commit -m 'Add: Amazing new feature') - Push to the branch (
git push origin feature/AmazingFeature) - Open a Pull Request
- Follow PEP 8 style guide for Python
- Add docstrings to all functions
- Include unit tests for new features
- Update documentation as needed
- Keep commits atomic and well-described
- Mobile app (iOS/Android)
- Smoke detection in addition to fire
- Thermal imaging camera support
- Multi-language alert support
- Integration with emergency services APIs
- Real-time dashboard with maps
- Edge deployment (Raspberry Pi, NVIDIA Jetson)
- Cloud-based model serving
- Drone integration APIs
- Multi-camera synchronization
This project is licensed under the MIT License - see the LICENSE file for details.
MIT License
Copyright (c) 2025 Salim Tagemouati
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction...
Salim Tagemouati
- π GitHub: @SalimTag
- πΌ LinkedIn: Salim Tagemouati
- π§ Email: salim.tagemouati@example.com
- π Location: Morocco
- π‘ Open to opportunities in AI/ML, Computer Vision, and Software Engineering
- Ultralytics YOLOv8 - Outstanding object detection framework
- FastAPI - Modern, fast web framework
- OpenCV - Computer vision library
- Fire Dataset Contributors - For providing training data
- Open Source Community - For tools and inspiration
If you use this project in your research or work, please cite:
@software{firedetection2025,
author = {Tagemouati, Salim},
title = {Fire Detection System: AI-Powered Wildfire Detection using YOLOv8},
year = {2025},
publisher = {GitHub},
url = {https://github.com/SalimTag/firedetection}
}Status: π’ Active Development
- Fire-Detection - Dataset preparation and augmentation tools
- Fire-Detection-backend - Backend API implementation
β Star this repository if you find it useful!
Made with β€οΈ by Salim Tagemouati
π₯ Helping protect forests and lives through AI π₯