Motion detection in UNT

Motion detection in UNT. Dataset creation for YOLOv4. Dataset labeling for gadget and other device recognition. YOLOv4 training with the labeled dataset.

Windows · Ubuntu · Python · OpenCV

  1. Open a terminal and clone the git repository.
git clone https://github.com/aktumar/DOP_human_detection.git
  2. Create a virtual environment.
virtualenv venv
  3. Activate the virtual environment.

Windows

venv\Scripts\activate

Ubuntu/Linux

source venv/bin/activate
  4. Install the project's dependencies from requirements.txt:
pip install -r requirements.txt

To see all the logs in a user interface, open cutelog before running the program:

start cutelog
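
Under the hood, cutelog receives log records over a local TCP socket, so Python's standard SocketHandler is enough to feed it. A minimal sketch, assuming cutelog's default address of 127.0.0.1:19996 (the logger name here is illustrative, not taken from the project's source):

```python
import logging
from logging.handlers import SocketHandler

# cutelog listens on 127.0.0.1:19996 by default; adjust if configured otherwise.
log = logging.getLogger("DOP_human_detection")
log.setLevel(logging.DEBUG)
log.addHandler(SocketHandler("127.0.0.1", 19996))

log.info("logging pipeline connected")  # appears in the cutelog window
```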

To run the program, use one of the following commands:

  1. Use your local camera. Pass 'true' to run with the camera:
python run.py -c true
  2. Use RTSP with the given .ini file. Choose one computer (camera); a parsing sketch follows this list.

    Example: the .ini section below corresponds to the stream URL rtsp://admin:12345@192.168.1.210:554/Streaming/Channels/101

[10]
USERNAME = admin
PASSWORD = 12345
IP_ADDRESS = 192.168.1.210
PORT = 554
DIR = Streaming/Channels
COMPUTER = 101
python run.py -u 10
  3. Use a local video path. Make sure that you have entered the correct directory for the video folder.
python run.py -v 1.mp4
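
The .ini lookup in option 2 could plausibly be handled with Python's standard configparser. In this minimal sketch, the file name cameras.ini and the helper rtsp_url are illustrative assumptions, not names from the project's source:

```python
from configparser import ConfigParser

def rtsp_url(ini_path: str, section: str) -> str:
    """Build the RTSP stream URL from one section of the .ini file."""
    cfg = ConfigParser()
    cfg.read(ini_path)
    s = cfg[section]  # e.g. section "10" for `python run.py -u 10`
    return (f"rtsp://{s['USERNAME']}:{s['PASSWORD']}"
            f"@{s['IP_ADDRESS']}:{s['PORT']}/{s['DIR']}/{s['COMPUTER']}")

print(rtsp_url("cameras.ini", "10"))
# rtsp://admin:12345@192.168.1.210:554/Streaming/Channels/101
```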

Continuously running a neural network to recognize and monitor a person's actions in order to detect cheating in the UNT can be extremely laborious. To reduce server load, you can preprocess frames and send only specific parts of them for recognition. In such footage, an examinee typically stands out by performing actions that differ from the standard state. The algorithm works as follows:

  1. The regions of the frame that differ from the previous frame are identified.
  2. These regions are enclosed in boxes, covering both significant and minor changes, so even the tiniest details can be filtered out at this stage.
  3. When there are many boxes, they are clustered by the coordinates of their neighbors; all four corners of each box are considered, and the window's parameters set the maximum distance between neighboring points.
  4. Once the boxes have been clustered, each cluster is merged into a single box.
  5. However, most frames also contain third-party movement, such as a passerby or another examiner in the background; these are filtered out by selecting the largest box in terms of area, which is assumed to belong to the examinee (see the sketch below).
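
A minimal sketch of this pipeline with OpenCV, assuming grayscale input frames. The thresholds MIN_AREA and MERGE_DIST and all function names are illustrative assumptions, not values or names from the project's source:

```python
import cv2

MIN_AREA = 200    # step 2: drop the tiniest changes (value is a guess)
MERGE_DIST = 50   # step 3: max gap between neighboring corners (a guess)

def diff_boxes(prev_gray, gray):
    """Steps 1-2: box the regions that differ from the previous frame."""
    delta = cv2.absdiff(prev_gray, gray)
    thresh = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)[1]
    thresh = cv2.dilate(thresh, None, iterations=2)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= MIN_AREA]

def are_neighbors(a, b):
    """Step 3: boxes are neighbors if any pair of corners is close enough."""
    def corners(x, y, w, h):
        return [(x, y), (x + w, y), (x, y + h), (x + w, y + h)]
    return any(abs(xa - xb) <= MERGE_DIST and abs(ya - yb) <= MERGE_DIST
               for xa, ya in corners(*a) for xb, yb in corners(*b))

def merge_clusters(boxes):
    """Steps 3-4: group neighboring boxes, then merge each group into one box."""
    clusters = []
    for box in boxes:
        matched = [c for c in clusters if any(are_neighbors(box, b) for b in c)]
        merged = [box] + [b for c in matched for b in c]
        clusters = [c for c in clusters if c not in matched] + [merged]
    return [(min(b[0] for b in c),
             min(b[1] for b in c),
             max(b[0] + b[2] for b in c) - min(b[0] for b in c),
             max(b[1] + b[3] for b in c) - min(b[1] for b in c))
            for c in clusters]

def examinee_box(boxes):
    """Step 5: keep only the largest merged box, assumed to be the examinee."""
    return max(boxes, key=lambda b: b[2] * b[3], default=None)

# Usage inside a capture loop (illustrative):
#   roi = examinee_box(merge_clusters(diff_boxes(prev_gray, gray)))
```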
