
Real-time Fight Detection Based on 2D Pose Estimation and RNN Action Recognition


Overview

Real-time fight detection based on 2D pose estimation and RNN action recognition.

This project is based on darknet_server. If you want to run this experiment, take a look at how to build it here.

[Figure: Fight Detection System Pipeline]

Pose Estimation and Object Tracking

We built a pipeline to extract 2D pose time-series data from video sequences.

In the worker process, pose estimation is performed with OpenPose. Each input image passes through the Pose_detector, which returns a people object packing up every detected person's joint coordinates. The people object is serialized and sent to the sink process.

In the sink process, the people object is converted into individual person objects, and every person object is sent to the tracker. The tracker receives the person objects and produces object identities using SORT (Simple Online and Realtime Tracking).
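As a rough Python illustration of that hand-off (the repository implements this stage in C++), the sketch below assumes the reference SORT implementation (github.com/abewley/sort) and derives each person's bounding box from their OpenPose keypoints; `track_people` is a hypothetical helper:

```python
import numpy as np
from sort import Sort  # reference SORT implementation (github.com/abewley/sort)

tracker = Sort()  # Kalman filter + Hungarian matching under the hood

def track_people(people_keypoints):
    """Assign a persistent track ID to each detected person.

    people_keypoints: list of (18, 3) arrays of COCO keypoints (x, y, score).
    Returns rows of [x1, y1, x2, y2, track_id].
    """
    dets = []
    for kp in people_keypoints:
        valid = kp[kp[:, 2] > 0]          # keep joints OpenPose actually found
        if len(valid) == 0:
            continue
        x1, y1 = valid[:, 0].min(), valid[:, 1].min()
        x2, y2 = valid[:, 0].max(), valid[:, 1].max()
        dets.append([x1, y1, x2, y2, valid[:, 2].mean()])
    dets = np.array(dets) if dets else np.empty((0, 5))
    return tracker.update(dets)
```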

Finally, we obtain joint time-series data for each person. Each person's time series is managed with a queue container, so a person object always holds the most recent 32 frames.
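A minimal sketch of that queue behavior, assuming Python's collections.deque; the `Person` class and its members are hypothetical stand-ins for the repository's C++ container:

```python
from collections import deque

WINDOW = 32  # the pipeline keeps the most recent 32 frames per person

class Person:
    """Hypothetical per-track container for joint time-series data."""
    def __init__(self, track_id):
        self.track_id = track_id
        # deque with maxlen drops the oldest frame automatically,
        # so the person always holds at most the latest 32 poses
        self.joints = deque(maxlen=WINDOW)

    def add_frame(self, keypoints):
        self.joints.append(keypoints)

    def ready(self):
        # a full window is required before action recognition can run
        return len(self.joints) == WINDOW
```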

[Figures: Tracking Pipeline | Time series data]
  • Examples of Results (Pose Estimation and Object Tracking)

Collect action video

We collected videos of four action types: Standing, Walking, Punching, and Kicking.

The Punching videos are a subset of the Berkeley Multimodal Human Action Database (MHAD). This data comprises 12 subjects performing the punching action for 5 repetitions each, filmed from 4 angles. (http://tele-immersion.citris-uc.org/berkeley_mhad)

The others (Standing, Walking, Kicking) are subsets of the CMU Panoptic Dataset. From the Range of Motion videos (171204_pose3, 171204_pose5, 171204_pose6), I cut out the three action types: I recorded a timestamp per action and extracted the clips with a Python script (util/concat.py); a cutting sketch follows the annotation format below. This data comprises 13 subjects performing the three actions, filmed from 31 angles. (http://domedb.perception.cs.cmu.edu/index.html)

0, 1, 10    // 0 : Standing, 1 : begin timestamp, 10: end timestamp
1, 11, 15   // 1 : Walking, 11 : begin timestamp, 15: end timestamp
3, 39, 46   // 3 : Kicking, 39 : begin timestamp, 46: end timestamp
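The actual util/concat.py is not reproduced here, so the following is only a sketch of what such a cutter might do, assuming the annotation format above and an ffmpeg binary on the PATH; the output file names and the `cut_clips` helper are hypothetical:

```python
import subprocess

ACTIONS = {0: "stand", 1: "walk", 3: "kick"}  # labels used in the annotation file

def cut_clips(video_path, annotation_path):
    """Cut one clip per 'label, begin, end' line, e.g. '0, 1, 10'."""
    with open(annotation_path) as f:
        for i, line in enumerate(f):
            # strip the trailing '// ...' comment, then parse the three fields
            label, begin, end = (int(x) for x in line.split("//")[0].split(","))
            out = f"{ACTIONS[label]}_{i:03d}.mp4"
            # -ss/-to select the [begin, end] second range of the source video
            subprocess.run(["ffmpeg", "-i", video_path,
                            "-ss", str(begin), "-to", str(end), out],
                           check=True)
```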
  • Examples of Dataset (Stand & Walk)

[Figures: Stand (CMU Panoptic Dataset) | Walk (CMU Panoptic Dataset)]
  • Examples of Dataset (Punch & Kick)

[Figures: Punch (Berkeley MHAD Dataset) | Kick (CMU Panoptic Dataset)]

Make training dataset

Action video data is fed into the tracking pipeline to obtain joint time-series data for each person. These results (joint positions) are then processed into feature vectors (a computation sketch follows the feature-vector table below):

  • Angle : the joint angle in the current frame

  • ΔPoint : the distance between a joint's position in the prior frame and in the current frame

  • ΔAngle : the change in a joint's angle between the prior frame and the current frame

  • Examples of feature vector (ΔPoint & ΔAngle)

[Figures: ΔPoint | ΔAngle]
  • Overview of feature vector

[Figure: OpenPose COCO output format]

Feature Vector

IDX    |   0 |   1 |   2 |   3 |   4 |    5 |     6 |     7
Angle  | 2-3 | 3-4 | 5-6 | 6-7 | 8-9 | 9-10 | 11-12 | 12-13
ΔAngle | 2-3 | 3-4 | 5-6 | 6-7 | 8-9 | 9-10 | 11-12 | 12-13
ΔPoint |   3 |   4 |   6 |   7 |   9 |   10 |    12 |    13
※ 2 : RShoulder, 3 : RElbow, 4 : RWrist, 5 : LShoulder, 6 : LElbow, 7 : LWrist, 8 : RHip, 9 : RKnee, 10 : RAnkle, 11 : LHip, 12 : LKnee, 13 : LAnkle
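One plausible reading of the table is that each bone's angle is measured in the image plane and differenced across frames. The Python sketch below encodes that reading; the exact angle convention is an assumption, and `frame_features` is a hypothetical helper:

```python
import numpy as np

# bone segments from the table above (OpenPose COCO indices)
BONES = [(2, 3), (3, 4), (5, 6), (6, 7), (8, 9), (9, 10), (11, 12), (12, 13)]
DELTA_JOINTS = [3, 4, 6, 7, 9, 10, 12, 13]

def bone_angles(pose):
    """Angle of each limb segment in the image plane; pose: (18, 2) array."""
    return np.array([np.arctan2(pose[b][1] - pose[a][1],
                                pose[b][0] - pose[a][0]) for a, b in BONES])

def frame_features(prev_pose, pose):
    """24-dim feature vector: 8 angles, 8 Δangles, 8 Δpoints."""
    angle = bone_angles(pose)
    d_angle = angle - bone_angles(prev_pose)   # change since the prior frame
    d_point = np.linalg.norm(pose[DELTA_JOINTS] - prev_pose[DELTA_JOINTS],
                             axis=1)           # per-joint displacement
    return np.concatenate([angle, d_angle, d_point])
```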

Finally, we obtain a feature vector for every frame and build action training samples, each consisting of 32 consecutive frame feature vectors. Consecutive samples overlap by 26 frames, i.e. a stride of 6 frames (a windowing sketch follows the summary below). This yields the four-type action dataset. A summary of the dataset is:

  • Standing : 7474 (7474 : pose3_stand) × 32 frames
  • Walking : 4213 (854 : pose3_walk, 3359 : pose6_walk) × 32 frames
  • Punching : 2187 (1115 : mhad_punch, 1072 : mhad_punch_flip) × 32 frames
  • Kicking : 4694 (2558 : pose3_kick, 2136 : pose6_kick) × 32 frames
  • Total : 18568 × 32 frames (https://drive.google.com/open?id=1ZNJDzQUjo2lDPwGoVkRLg77eA57dKUqx)
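A minimal windowing sketch under those numbers (32-frame windows overlapping by 26 frames, hence a stride of 6); `make_samples` is a hypothetical name, and the input is assumed to be at least 32 frames long:

```python
import numpy as np

WINDOW, OVERLAP = 32, 26
STRIDE = WINDOW - OVERLAP  # 32-frame windows overlapping by 26 -> stride 6

def make_samples(features, label):
    """Slice a (T, 24) per-person feature sequence into training samples."""
    xs, ys = [], []
    for start in range(0, len(features) - WINDOW + 1, STRIDE):
        xs.append(features[start:start + WINDOW])
        ys.append(label)
    return np.stack(xs), np.array(ys)   # (N, 32, 24), (N,)
```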

RNN Training and Results

The network used in this experiment is based on that of:

Training was run for 300 epochs with a batch size of 1024 (weights/action.h5).
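The README fixes only the epoch count and batch size, so the Keras sketch below fills in the rest with hypothetical choices (two LSTM layers of 64 units, a 24-dim feature input, random placeholder data); it is not the repository's exact model:

```python
import os
import numpy as np
from tensorflow import keras

NUM_CLASSES = 4  # Standing, Walking, Punching, Kicking

# hypothetical architecture; only epochs/batch size come from the README
model = keras.Sequential([
    keras.layers.Input(shape=(32, 24)),   # 32 frames x 24-dim feature vector
    keras.layers.LSTM(64, return_sequences=True),
    keras.layers.LSTM(64),
    keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# placeholder data standing in for the real dataset
x_train = np.random.rand(1024, 32, 24).astype("float32")
y_train = np.random.randint(0, NUM_CLASSES, size=1024)

model.fit(x_train, y_train, epochs=300, batch_size=1024)
os.makedirs("weights", exist_ok=True)
model.save("weights/action.h5")
```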

After training, an action recognition pipeline was built to produce recognition results in real time. The sink process sends each person's time-series feature vector to the action process as a string. The action process feeds the received data into the RNN and sends back the prediction result (0 : Standing, 1 : Walking, 2 : Punching, 3 : Kicking).
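On the action-process side, the core step might look like the following; the comma-separated wire format and the `predict_action` helper are assumptions, since the README only says the vector arrives as a string:

```python
import numpy as np
from tensorflow import keras

LABELS = {0: "Standing", 1: "Walking", 2: "Punching", 3: "Kicking"}
model = keras.models.load_model("weights/action.h5")

def predict_action(message: str) -> str:
    """Classify one 32-frame window received from the sink process."""
    # assumed wire format: 32 * 24 floats, flattened and comma-separated
    values = np.array(message.split(","), dtype=np.float32)
    window = values.reshape(1, 32, 24)              # batch of one window
    probs = model.predict(window, verbose=0)[0]
    return LABELS[int(np.argmax(probs))]
```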

[Figures: Action Recognition Pipeline | RNN Model]
  • Examples of Results (RNN Action Recognition)

[Figures: Standing | Walking | Punching | Kicking]

Fight Detection

This stage checks whether a person who kicks or punches is actually hitting someone. If one person hits another, the two are marked as enemies of each other. The system counts this as a fight and then tracks both people for as long as they remain in the frame.
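The README does not show the hit test itself, so the sketch below is one plausible Python rendering: a strike counts as a hit when the attacker's wrist or ankle keypoint falls inside another person's keypoint bounding box. The `persons` objects and their attributes are hypothetical stand-ins for the repository's C++ types:

```python
STRIKE_JOINTS = {"Punching": [4, 7], "Kicking": [10, 13]}  # wrists / ankles (COCO)

def detect_hits(persons):
    """persons: objects with .action, .pose (18, 2 array), .enemies (set), .track_id.

    Marks two people as enemies when an attacker's striking joint falls
    inside the victim's bounding box.
    """
    for attacker in persons:
        joints = STRIKE_JOINTS.get(attacker.action)
        if not joints:
            continue  # only punching/kicking people can hit someone
        for victim in persons:
            if victim is attacker:
                continue
            x1, y1 = victim.pose.min(axis=0)
            x2, y2 = victim.pose.max(axis=0)
            for j in joints:
                x, y = attacker.pose[j]
                if x1 <= x <= x2 and y1 <= y <= y2:
                    attacker.enemies.add(victim.track_id)
                    victim.enemies.add(attacker.track_id)
```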

[Figure: Fight Detection Pipeline]
  • Examples of Results

[Figures: Fighting Championship | CCTV Video]
[Figures: Sparring video A | Sparring video B]
  • Examples of Results (Failure cases)

[Figures: Fake Person | Small Person]

References
