A full-stack real-time drone detection system capable of tracking multiple drone targets with 99% accuracy, utilizing YOLO, DeepSORT, React.js, and a FastAPI backend.
- Real-time Video Feed: Live camera stream with drone detection overlay
- YOLO + DeepSORT Integration: Advanced object detection and tracking
- Real-time Notifications: WebSocket-based instant alerts for new detections
- Interactive Dashboard: Modern React UI with Material-UI components
- Detection Database: SQLite storage for all detection records
- Interactive Map: Leaflet map showing drone detection locations
- Statistics Tracking: Daily detection counts and analytics
- Python 3.8+
- Node.js 16+
- Camera/Webcam
- YOLO model file (`best.pt`)
- **CUDA GPU Setup:** If you want to use a CUDA GPU for acceleration, run:

  ```bash
  # Activate your backend virtual environment first
  # On Windows:
  venv\Scripts\activate
  # On macOS/Linux:
  source venv/bin/activate

  # Then reinstall PyTorch with CUDA support:
  pip uninstall -y torch torchvision torchaudio
  pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
  ```

  Or get the appropriate version for your system from the official PyTorch website.
- **Changing Camera Feed Source:** To change the camera/video feed source, edit line 304 in `backend/tracker.py` and change the parameter in `cv2.VideoCapture(<source number>)` accordingly, as sketched below.
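For reference, here is a minimal standalone sketch of how the `cv2.VideoCapture` source argument behaves (this is not the project's actual `tracker.py` code; the file path and stream URL are hypothetical):

```python
import cv2

# 0 = default webcam, 1/2/... = additional cameras;
# a video file path or an RTSP/HTTP stream URL also works as the source.
cap = cv2.VideoCapture(0)
# cap = cv2.VideoCapture("videos/drone_test.mp4")           # hypothetical video file
# cap = cv2.VideoCapture("rtsp://192.168.1.10:554/stream")  # hypothetical IP camera

if not cap.isOpened():
    raise RuntimeError("Could not open the selected video source")

ret, frame = cap.read()  # grab one frame to verify the source works
cap.release()
```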
You can use the developer's custom-trained YOLOv8l model and the dataset:

- **Custom YOLOv8l Model:** Download yolov8l_ep10
- **Training Dataset:** Drone Detection YOLO Dataset
```bash
# Navigate to backend directory
cd backend

# Create virtual environment
python -m venv venv

# Activate virtual environment
# On Windows:
venv\Scripts\activate
# On macOS/Linux:
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt
```
Place your trained YOLO model file (`best.pt`) in the backend directory, or update `MODEL_PATH` in `main.py`:

```python
MODEL_PATH = "path/to/your/model.pt"
```
```bash
# Start FastAPI server
uvicorn main:app --reload --host 0.0.0.0 --port 8000
```
The backend will be available at: http://localhost:8000
```bash
# Navigate to frontend directory
cd ../frontend

# Install dependencies
npm install

# Start development server
npm run dev
```
The frontend will be available at: http://localhost:3000
- Start the Application: Open http://localhost:3000 in your browser
- Start Camera: Click the "Start Camera" button to begin detection
- View Live Feed: Watch the real-time video with detection overlays
- Monitor Detections: See new drone alerts and view detection statistics
- Check Map: View detection locations on the interactive map
- Review Data: Browse today's detections in the data table
- `POST /camera/start` - Start camera tracking
- `POST /camera/stop` - Stop camera tracking
- `GET /camera/status` - Get camera status
- `GET /detections/today` - Get today's detections
- `GET /detections/` - Get all detections (with pagination)
- `GET /detections/date/{date}` - Get detections for a specific date
- `DELETE /detections/{id}` - Delete a detection
- `GET /video` - Video stream endpoint
- `WebSocket /ws` - Real-time updates
- `GET /health` - Health check
- `GET /` - API documentation
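As a quick sanity check of the endpoints above, here is a minimal Python client sketch (it assumes the backend is running on `localhost:8000`; the exact JSON response shapes may differ from what is printed here):

```python
import requests

BASE_URL = "http://localhost:8000"

# Start the camera tracker
print(requests.post(f"{BASE_URL}/camera/start").json())

# Check the camera status
print(requests.get(f"{BASE_URL}/camera/status").json())

# List today's detections
for detection in requests.get(f"{BASE_URL}/detections/today").json():
    print(detection)

# Stop the camera tracker
print(requests.post(f"{BASE_URL}/camera/stop").json())
```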
```
drone-tracking/
├── backend/
│   ├── main.py              # FastAPI application
│   ├── tracker.py           # DroneTracker class
│   ├── models.py            # Database models
│   ├── database.py          # Database configuration
│   ├── requirements.txt     # Python dependencies
│   └── static/
│       └── index.html       # Backend test page
└── frontend/
    ├── src/
    │   ├── components/      # React components
    │   ├── hooks/           # Custom hooks
    │   ├── services/        # API services
    │   ├── utils/           # Utilities
    │   ├── App.jsx          # Main app component
    │   └── main.jsx         # React entry point
    ├── package.json         # Node dependencies
    └── vite.config.js       # Vite configuration
```
Edit `backend/main.py` to configure (see the sketch below):

- Model path: `MODEL_PATH = "your-model.pt"`
- Confidence threshold: `confidence_threshold=0.5`
- Database URL: set the `DATABASE_URL` environment variable
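A minimal sketch of how these settings might look in `backend/main.py` (the names follow this README, but the default values and surrounding code are assumptions):

```python
import os

# Path to the trained YOLO weights (override via the environment if preferred)
MODEL_PATH = os.getenv("MODEL_PATH", "best.pt")

# SQLite by default; the filename here is illustrative
DATABASE_URL = os.getenv("DATABASE_URL", "sqlite:///./detections.db")

# Detections below this score are discarded; raise it to cut false positives
confidence_threshold = 0.5
```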
Edit `frontend/src/utils/constants.js` to configure:
- API base URL
- WebSocket URL
- Map settings
- Notification settings
Camera not working:
- Check camera permissions
- Verify camera is not in use by another application
- Try a different camera index in `tracker.py`
Model not found:
- Ensure the `best.pt` file exists in the backend directory
- Check file permissions
- Verify model format is compatible
Connection issues:
- Check if backend is running on port 8000
- Verify the frontend proxy configuration in `vite.config.js`
- Check firewall settings
WebSocket connection failed:
- Ensure both frontend and backend are running
- Check browser console for connection errors
- Verify WebSocket URL in constants
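To verify the backend WebSocket independently of the frontend, a quick Python sketch like this can help (it assumes the default `ws://localhost:8000/ws` URL and requires the `websockets` package):

```python
import asyncio
import websockets  # pip install websockets

async def listen():
    # Connect to the backend's real-time updates endpoint and print incoming messages
    async with websockets.connect("ws://localhost:8000/ws") as ws:
        while True:
            message = await ws.recv()
            print("Received:", message)

asyncio.run(listen())
```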
- Reduce video resolution in `tracker.py` for better performance (sketched below)
- Adjust the confidence threshold to reduce false positives
- Limit the frame rate for lower CPU usage
- Use GPU acceleration with CUDA if available
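For example, resolution and frame rate can be capped on the OpenCV capture object; where exactly this belongs inside `DroneTracker` is an assumption, but the properties themselves are standard OpenCV:

```python
import cv2

cap = cv2.VideoCapture(0)

# Request a smaller frame size and a lower frame rate.
# Not every camera honors every property, so check the result with cap.get(...).
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
cap.set(cv2.CAP_PROP_FPS, 15)

print(cap.get(cv2.CAP_PROP_FRAME_WIDTH),
      cap.get(cv2.CAP_PROP_FRAME_HEIGHT),
      cap.get(cv2.CAP_PROP_FPS))
cap.release()
```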
- Backend: Add new endpoints in `main.py` (example below)
- Frontend: Create new components in `src/components/`
- Database: Update models in `models.py`
- Real-time: Extend WebSocket handlers
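Here is a minimal sketch of what a new backend endpoint could look like (the route and response are illustrative and not part of the project; in the real `main.py` the existing `app` instance should be reused):

```python
from fastapi import FastAPI

app = FastAPI()  # in main.py, reuse the existing app instead of creating a new one

@app.get("/detections/count")
async def detection_count():
    # Hypothetical endpoint: replace the hard-coded value with a real query
    # against the models defined in models.py / database.py.
    return {"count": 0}
```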
```bash
# Backend tests
cd backend
python -m pytest

# Frontend tests
cd ../frontend
npm test
```
```bash
# Build frontend
cd frontend
npm run build

# Deploy backend
cd ../backend
pip install gunicorn
gunicorn main:app --workers 4 --worker-class uvicorn.workers.UvicornWorker
```
This project is licensed under the MIT License.
- Fork the repository
- Create feature branch
- Commit changes
- Push to branch
- Create Pull Request
For issues and questions:
- Check the troubleshooting section
- Review the API documentation at http://localhost:8000/docs
- Create an issue on GitHub
You can also contact the developer:
- Name: Nitish Biswas
- Email: nitishbiswas066@gmail.com
**Note:** Make sure to replace `best.pt` with your actual YOLO model file trained for drone detection.