A comprehensive video analytics platform with real-time object detection, movement heatmap analysis, and object tracking capabilities. Powered by YOLO models for detection, with a React frontend and Flask+Celery backend.
- Real-time Object Detection: Process videos and detect objects using the latest YOLO models
- Heatmap Analysis: Generate motion heatmaps to visualize movement patterns over time
- Statistical Analysis: Visualize detected object frequencies and detailed statistics
- Object Tracking: Track objects across video frames with persistence
- Multiple Video Processing: Process multiple videos simultaneously with intuitive job management
- Customizable Parameters: Adjust frame intervals and model selection for optimal results
- Responsive UI: Modern interface that works across devices
Before getting started, ensure you have installed:
- Python 3.10 or newer (the conda setup below creates a 3.10 environment)
- Node.js & npm
- Git (with Git LFS)
- Redis (Celery message broker and result backend)
  - Windows: Download Redis for Windows
  - Ubuntu:
    ```bash
    sudo apt update && sudo apt install redis-server
    ```
- Miniconda (recommended)
For significantly faster processing:
- CUDA Toolkit: Version 11.8
- cuDNN: a version compatible with CUDA 11.8 (e.g., 9.10.2)
- PyTorch with CUDA: installed from the PyTorch CUDA wheel index (see below)
```bash
git clone https://github.com/realvoidgojo/VideoAnaltyics-Cyberthon.git
cd VideoAnaltyics-Cyberthon
```

Download YOLO model weights from either:
- the YOLOv11 models repository
- the Ultralytics website

Place the downloaded model(s) in the `models` directory.
```bash
conda create -n video_env python=3.10 -y
conda activate video_env
conda install -c conda-forge opencv ffmpeg -y
conda install -c conda-forge "ffmpeg=6.1.1=gpl*" -y
```

With the conda environment activated:

```bash
pip install -r requirements.txt
```

For optional CUDA support (replace `cu118` with your CUDA version):

```bash
pip uninstall torch torchvision -y
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
```
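After installing, you can confirm that the CUDA build of PyTorch is active. This uses the standard PyTorch API, not project code:

```python
import torch

print(torch.__version__)          # should end in "+cu118" for the CUDA wheel
print(torch.cuda.is_available())  # True when the GPU and drivers are visible
```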
```bash
# Create necessary directories
mkdir -p data hls_stream
```

Install frontend dependencies:

```bash
cd frontend
npm install
```

Start Redis:

```bash
# Windows: start Redis from the installed location
# Linux/macOS:
redis-server
```

Start the Celery worker:

```bash
# From project root, in a new terminal:
conda activate video_env  # If using conda
celery -A src.celery.celery_app worker --loglevel=info --pool=threads -c 4
```

Adjust the `-c` option to match your CPU core count.
```bash
# From project root, in a new terminal:
conda activate video_env  # If using conda
python app.py
```

Flask will run at: http://127.0.0.1:5000
```bash
# From the frontend directory, in a new terminal:
npm run dev
```

The frontend will be available at: http://localhost:5173
- Upload Videos: Use the upload interface to submit video files
- Choose Detection Settings:
- Select YOLO model (smaller models are faster)
- Set frame interval (higher values = faster processing, fewer detections)
- Enable heatmap generation
- Analysis Modes:
- Object Detection View: See detected objects with bounding boxes and statistics
- Heatmap Analysis: Visualize movement intensity and patterns over time
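The frame-interval tradeoff above can be made concrete with a small helper (a hypothetical sketch, not the app's actual code): processing every Nth frame cuts the detection workload by roughly a factor of N.

```python
def frames_to_process(total_frames: int, frame_interval: int) -> list[int]:
    """Indices of frames that would be sent to the detector.

    A frame_interval of 1 processes every frame; 5 processes every
    fifth frame, roughly a 5x speedup at the cost of temporal detail.
    """
    if frame_interval < 1:
        raise ValueError("frame_interval must be >= 1")
    return list(range(0, total_frames, frame_interval))

# A 300-frame clip at interval 5 yields 60 frames to run detection on
print(len(frames_to_process(300, 5)))  # → 60
```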
- Real-time object recognition with latest YOLO models
- Customizable confidence thresholds
- Detailed statistics of detected objects
- Frequency charts and object counts
- HLS video streaming for smooth playback
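The confidence-threshold and frequency-count features can be illustrated with a short sketch; the `(label, confidence)` pair format is an assumption for illustration, not the app's actual data model:

```python
from collections import Counter

def summarize_detections(detections, conf_threshold=0.5):
    """Drop low-confidence detections, then count occurrences per label."""
    return Counter(
        label for label, conf in detections if conf >= conf_threshold
    )

dets = [("person", 0.91), ("car", 0.42), ("person", 0.63), ("dog", 0.55)]
print(summarize_detections(dets, conf_threshold=0.5))
# → Counter({'person': 2, 'dog': 1})
```

Raising the threshold trades recall for precision: fewer false positives, but weak detections disappear from the counts.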
- Movement intensity visualization
- Temporal activity mapping
- Peak movement time identification
- Duration analysis
- Downloadable heatmap videos
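A movement-intensity map of this kind can be built by accumulating frame-to-frame differences. A minimal NumPy sketch (the real pipeline in `heatmap_analysis.py` may differ):

```python
import numpy as np

def accumulate_motion(frames):
    """Sum absolute frame-to-frame differences into a motion heatmap.

    `frames` is a sequence of same-shaped grayscale arrays; bright
    regions in the result are where pixel values changed the most.
    """
    heat = np.zeros_like(frames[0], dtype=np.float64)
    prev = frames[0].astype(np.int16)
    for frame in frames[1:]:
        cur = frame.astype(np.int16)
        heat += np.abs(cur - prev)
        prev = cur
    # Normalize to [0, 1] so the map can be colorized for display
    if heat.max() > 0:
        heat /= heat.max()
    return heat
```

Temporal activity mapping follows the same idea with per-window accumulators, so peak-movement times fall out of comparing window totals.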
```
VideoAnalytics-Cyberthon/
├── app.py                    # Main Flask application
├── src/                      # Source code directory
│   ├── celery.py             # Celery application definition
│   ├── video_processing.py   # Frame extraction and preprocessing
│   ├── object_detection.py   # YOLO detection functions
│   ├── heatmap_analysis.py   # Heatmap generation and analysis
│   └── ...
├── models/                   # YOLO model files (.pt)
├── data/                     # Temporary video storage
├── hls_stream/               # HLS video streaming files
├── frontend/                 # React frontend
│   └── src/                  # React source code
│       ├── components/       # UI components
│       │   ├── video/        # Video-related components
│       │   ├── heatmap/      # Heatmap components
│       │   ├── charts/       # Data visualization
│       │   └── ...
│       └── ...
├── setup/                    # Setup documentation
└── ...
```
Contributions are welcome! Please feel free to submit pull requests or open issues to suggest improvements or report bugs.
This project is licensed under the MIT License - see the LICENSE file for details.
- Ultralytics for YOLO models
- React and Vite for the frontend framework
- Flask for the backend API
- Celery for asynchronous task processing
For detailed setup instructions, see the setup guide