A real-time video surveillance system that detects intrusions in user-defined protected zones using the YOLO (You Only Look Once) object detection model via Darkflow. Built with Python, OpenCV, and TensorFlow.
This project was developed at EPT (Ecole Polytechnique de Tunisie) as part of an academic project on event detection in video streams. The system captures a live webcam feed, allows the user to define a "Protected Zone" on screen, and triggers alerts whenever a person is detected inside that zone during a configurable time window.
- Real-time person detection using the Tiny-YOLO-VOC pre-trained model
- Interactive protected zone selection - draw a rectangle on the video feed to define the monitored area
- Time-based monitoring - configure start/end times for zone protection via `parameters.txt`
- Intrusion logging - all detections are recorded with timestamps in `logfile.log`
- Video recording - detected events are saved to an output video file
- Configurable detection labels - specify which objects to detect via `labels_det.txt`
Run the detection script with the YOLO model and weights:
```
python projet.py --model ../cfg/tiny-yolo-voc.cfg --load ../bin/tiny-yolo-voc.weights -sz 750
```

The system loads the Tiny-YOLO-VOC model and builds the neural network architecture:
Once the model is loaded, the webcam feed starts with a real-time timestamp overlay:
Press `i` to enter zone selection mode, then draw a rectangle with the mouse to define the area to monitor. Press Enter or Space to confirm, or `c` to select the whole frame:
When a person enters the protected zone during the configured time window, the system displays an "Alert !!!" warning and logs the event:
- `parameters.txt` - Defines the time window for zone monitoring:
- `logfile.log` - Records every intrusion event with date and time:
- `labels_det.txt` - Lists the object labels to detect (e.g., "person"):
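The time-window check driven by `parameters.txt` can be sketched in a few lines of standard-library Python. This is an illustrative snippet, not the code in `projet.py`; it assumes the file holds the start and end datetimes on two lines in the `YYYY-MM-DD HH:MM:SS` format described below (the function names are hypothetical):

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"

def load_time_window(lines):
    """Parse start and end datetimes from two text lines
    in the YYYY-MM-DD HH:MM:SS format used by parameters.txt."""
    start = datetime.strptime(lines[0].strip(), FMT)
    end = datetime.strptime(lines[1].strip(), FMT)
    return start, end

def in_window(now, start, end):
    """Return True when `now` falls inside the monitoring window."""
    return start <= now <= end

# Example: a window covering the whole of 2018-05-01
start, end = load_time_window(["2018-05-01 00:00:00", "2018-05-01 23:59:59"])
print(in_window(datetime(2018, 5, 1, 12, 0, 0), start, end))  # True
```

In the real script, the same check would run on every frame so that alerts fire only while the window is active.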
After pressing `q` to quit, the system displays the elapsed time and average FPS, and saves the recorded video:
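The elapsed-time and FPS summary can be produced by a small frame counter. The requirements list `imutils`, whose `FPS` helper works along these lines; the sketch below is a hedged stdlib equivalent (the class name `FPSCounter` is illustrative, not from `projet.py`):

```python
import time

class FPSCounter:
    """Track elapsed wall-clock time and average FPS over a run."""

    def __init__(self):
        self._start = None
        self._frames = 0

    def start(self):
        # Record the moment the video loop begins.
        self._start = time.time()
        return self

    def update(self):
        # Call once per processed frame.
        self._frames += 1

    def elapsed(self):
        return time.time() - self._start

    def fps(self):
        # Average frames per second over the whole run.
        return self._frames / self.elapsed()
```

At shutdown the script would print `elapsed()` and `fps()` before releasing the capture and the video writer.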
```
Projet events detection/
├── README.md                                     # This file
├── src/                                          # Source code
│   ├── projet.py                                 # Main detection script
│   ├── guide.md                                  # Usage guide
│   ├── labels_det.txt                            # Labels to detect (e.g., "person")
│   ├── parameters.txt                            # Time window configuration
│   └── logfile.log                               # Detection log output
├── docs/                                         # Documentation
│   ├── Sujet Projet.docx                         # Project brief / subject
│   ├── Les étapes du projet.docx                 # Project milestones
│   ├── plan rapport.docx                         # Report outline
│   ├── Video Stream Analytics Using OpenCV.docx  # OpenCV analytics guide
│   └── references/                               # Research papers & bibliography
│       ├── yolo.pdf
│       ├── 2010CLF22089_-_LUVISON.pdf
│       ├── Human-detection-from-images-and-videos-*.pdf
│       ├── Combining-motion-and-appearance-cues-*.pdf
│       └── ... (additional pattern recognition papers)
├── images/                                       # Demo screenshots
│   ├── 01_run_command.png
│   ├── 02_loading_model.png
│   ├── ...
│   └── 13_final_output_summary.png
└── output/                                       # Recorded output videos
    └── video.avi
```
- Python 3
- TensorFlow 1.x
- OpenCV 3.x
- NumPy
- imutils
- Darkflow - github.com/thtrieu/darkflow
1. Clone or download this repository.

2. Install Darkflow by following the instructions at thtrieu/darkflow.

3. Download the Tiny-YOLO-VOC weights and config files and place them in the appropriate directories (`bin/` and `cfg/`).

4. Install Python dependencies:

   ```
   pip install tensorflow==1.x numpy opencv-python imutils
   ```

5. Configure detection labels - edit `src/labels_det.txt` to list the objects you want to detect (one per line). The default is `person`.

6. Configure the time window - edit `src/parameters.txt` with the start and end datetimes for zone protection, using the format `YYYY-MM-DD HH:MM:SS`.

7. Run the program:

   ```
   cd src/
   python projet.py --model <path_to_model.cfg> --load <path_to_weights>
   ```

8. Interactive controls:
   - Press `i` to draw the protected zone
   - Press Enter/Space to confirm the zone selection
   - Press `c` to select the entire frame as the zone
   - Press `q` to quit
| Argument | Short | Default | Description |
|---|---|---|---|
| `--model` | `-m` | required | Path to YOLO model config file |
| `--load` | `-l` | required | Path to YOLO weights file |
| `--confidence` | `-conf` | 0.2 | Minimum detection confidence |
| `--person` | `-p` | 0.3 | Person detection threshold |
| `--seize` | `-sz` | 650 | Window display size (px) |
| `--fps` | `-f` | 5 | Output video FPS |
| `--save` | `-s` | video.avi | Output video file path |
| `--codec` | `-cod` | XVID | Output video codec |
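For illustration, the arguments in the table map onto a standard `argparse` declaration. This is a hedged sketch of how such a parser could look, not the actual code in `projet.py`:

```python
import argparse

def build_parser():
    """Declare the command-line arguments from the table above.
    Illustrative only; projet.py may define them differently."""
    ap = argparse.ArgumentParser(description="YOLO protected-zone intrusion detection")
    ap.add_argument("-m", "--model", required=True, help="path to YOLO model config file")
    ap.add_argument("-l", "--load", required=True, help="path to YOLO weights file")
    ap.add_argument("-conf", "--confidence", type=float, default=0.2,
                    help="minimum detection confidence")
    ap.add_argument("-p", "--person", type=float, default=0.3,
                    help="person detection threshold")
    ap.add_argument("-sz", "--seize", type=int, default=650,
                    help="window display size (px)")
    ap.add_argument("-f", "--fps", type=int, default=5, help="output video FPS")
    ap.add_argument("-s", "--save", default="video.avi", help="output video file path")
    ap.add_argument("-cod", "--codec", default="XVID", help="output video codec")
    return ap

# Only the two required arguments need to be supplied; the rest take defaults.
args = build_parser().parse_args(
    ["--model", "cfg/tiny-yolo-voc.cfg", "--load", "bin/tiny-yolo-voc.weights"])
print(args.confidence, args.seize)  # 0.2 650
```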
The system uses a Tiny-YOLO-VOC model for real-time object detection on each video frame. When the user defines a protected zone (via an interactive ROI selector), the program computes the overlap ratio (Intersection over Union) between each detected bounding box and the protected zone. If the overlap exceeds a threshold (5%) and the current time falls within the configured monitoring window, an intrusion alert is triggered, displayed on screen, and logged to logfile.log.
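The overlap test reduces to plain rectangle arithmetic. A minimal sketch, assuming boxes are given as `(x1, y1, x2, y2)` corner tuples; the exact box format and threshold handling in `projet.py` may differ:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes,
    each given as (x1, y1, x2, y2) corner coordinates."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection covering the right half of a 100x100 protected zone
zone = (0, 0, 100, 100)
person = (50, 0, 150, 100)
print(iou(zone, person) > 0.05)  # True -> would trigger an alert
```

With the 5% threshold described above, even a partial entry into the zone is enough to raise an alert.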
Marwen Kraiem - EPT (Ecole Polytechnique de Tunisie), 2018
Research papers used in this project are available in the docs/references/ folder, covering topics such as human detection, anomaly detection, pattern recognition, and the YOLO architecture.