A lightweight human detection model trained on a high-quality human dataset that I annotated entirely by myself. It is highly robust to blur and occlusion, the detection rate at short, medium, and long distances has been greatly enhanced, and robustness to dark scenes and camera halation has been greatly improved.
`Head` does not mean `Face`. The entire head is detected rather than a narrow facial region, which makes it possible to detect heads in all 360° orientations.
https://github.com/PINTO0309/PINTO_model_zoo/tree/main/423_6DRepNet360
281608305-d8dfd24f-7af1-4865-a760-56c490d186f1.mp4
The advantage of detecting hands with high accuracy is that finger keypoints can then be detected as correctly as possible. The video below was processed with the MediaPipe tflite file converted to ONNX, so the keypoint detection performance is not very high; combined with a highly accurate keypoint detection model focused on the hand region, the information should be acquired quite robustly. For the Z (depth) direction, it is more realistic to combine physical measurements such as ToF rather than relying on model estimation. Sparse positional information on a 2D plane, such as skeletal keypoints, tends to break the algorithm when you try to obtain accurate 3D values including depth: a depth value read from a depth camera (ToF or stereo-parallax measurement) at any single point is easily destabilized by environmental noise. Capturing each body part as a plane instead makes it easy to correct unstable depths with a simple algorithm.
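As a rough sketch of the plane-based depth correction described above (my own illustration, not code from this repository; the depth map and the bounding box are assumed inputs), taking the median over the whole detected box instead of reading one pixel suppresses per-pixel sensor noise:

```python
import numpy as np

def robust_box_depth(depth: np.ndarray, box: tuple[int, int, int, int]) -> float:
    """Estimate the depth of a detected body part from a noisy depth map.

    Instead of reading the depth at a single keypoint (easily corrupted by
    ToF/stereo noise), sample the whole bounding-box plane and take the
    median of the valid pixels.
    """
    x1, y1, x2, y2 = box
    patch = depth[y1:y2, x1:x2]
    valid = patch[np.isfinite(patch) & (patch > 0)]  # drop holes/invalid pixels
    return float(np.median(valid)) if valid.size else float("nan")

# Example: a 480x640 depth map in meters and a hand bounding box from the detector.
depth_map = np.random.uniform(0.5, 3.0, (480, 640)).astype(np.float32)
print(robust_box_depth(depth_map, (320, 200, 380, 260)))
```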
Methods that detect all 133 skeletal keypoints at once give the impression of being very heavy, because they require batch or loop processing to compute heat maps for every human bounding box produced by the object detection model. The computational cost also seems high because complex affine transformations and other coordinate transformations must be performed over large areas of the entire body. This is not a negative view of 133-keypoint models as such; it is only that they are computationally expensive to run on low-power edge devices.
https://github.com/PINTO0309/hand_landmark
282066390-9e4e188b-5c44-46fc-8328-21ae8a122971.1.mp4
- COCO-Hand (14,667 images, 66,903 labels, all re-annotated manually)
- I added my own enhancement data to COCO-Hand and re-annotated every image. In other words, only the COCO images themselves were used; none of the original annotation data was reused.
- I have no plans to publish my own dataset.
  ```
  body_label_count: 30,729 labels
  head_label_count: 26,268 labels
  hand_label_count: 18,087 labels
  ===============================
             Total: 66,903 labels
             Total: 14,667 images
  ```
- The process of work and verification can be seen in my working notes below. However, a detailed explanation is not given. Halfway compromises are never acceptable.
- Python 3.10
- onnx 1.14.1+
- onnxruntime-gpu v1.16.1 (TensorRT Execution Provider enabled binary. See: onnxruntime-gpu v1.16.1 + CUDA 11.8 + TensorRT 8.5.3 build (RTX3070))
- opencv-contrib-python 4.9.0.80
- numpy 1.24.3
- TensorRT 8.5.3-1+cuda11.8
```bash
# Common ############################################
pip install opencv-contrib-python numpy onnx

# For ONNX ##########################################
pip uninstall onnxruntime onnxruntime-gpu

pip install onnxruntime
# or
pip install onnxruntime-gpu
```
- Demonstration of models with built-in post-processing (Float32/Float16)

  ```
  usage: demo_yolox_onnx_tfite.py \
    [-h] \
    [-m MODEL] \
    [-v VIDEO] \
    [-ep {cpu,cuda,tensorrt}] \
    [-dvw]

  options:
    -h, --help
      show this help message and exit
    -m MODEL, --model MODEL
      ONNX/TFLite file path for YOLOX.
    -v VIDEO, --video VIDEO
      Video file path or camera index.
    -ep {cpu,cuda,tensorrt}, --execution_provider {cpu,cuda,tensorrt}
      Execution provider for ONNXRuntime.
    -dvw, --disable_video_writer
      Eliminates the file I/O load associated with automatic recording to MP4.
      Devices that use a MicroSD card or similar for main storage can speed up
      overall processing.
  ```
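Under the hood, the `-ep` switch presumably just selects ONNX Runtime execution providers. A minimal sketch of that mapping (my own illustration, not the demo script itself; the `trt_fp16_enable` option value is an assumption):

```python
import onnxruntime as ort

def make_session(model_path: str, ep: str = "cpu") -> ort.InferenceSession:
    """Create an InferenceSession with the requested execution provider."""
    providers = {
        "cpu": ["CPUExecutionProvider"],
        "cuda": ["CUDAExecutionProvider", "CPUExecutionProvider"],
        "tensorrt": [
            # FP16 engine build is an assumed option, not taken from the demo.
            ("TensorrtExecutionProvider", {"trt_fp16_enable": True}),
            "CUDAExecutionProvider",
            "CPUExecutionProvider",
        ],
    }[ep]
    return ort.InferenceSession(model_path, providers=providers)

session = make_session(
    "yolox_x_body_head_hand_face_0076_0.5228_post_1x3x480x640.onnx", "tensorrt"
)
print(session.get_providers())  # shows which providers were actually enabled
```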
- 640x480 TensorRT

  ```bash
  python demo/demo_yolox_onnx_tfite.py \
  -m yolox_x_body_head_hand_face_0076_0.5228_post_1x3x480x640.onnx \
  -v 0 \
  -ep tensorrt
  ```
output_body_head_hand_face_x.mp4
- 640x480 TensorRT - Intense motion blur

  ```bash
  python demo/demo_yolox_onnx_tfite.py \
  -m yolox_x_body_head_hand_face_0076_0.5228_post_1x3x480x640.onnx \
  -v 0 \
  -ep tensorrt
  ```
output_body_head_hand_face_x_motion.mp4
- YOLOX-Body-Head-Hand-Face - Nano

  ```
  # 640x640
  Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.382
  Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.692
  Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.372
  Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.260
  Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.572
  Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.691
  Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.138
  Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.365
  Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.460
  Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.348
  Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.654
  Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.747

  per class AP:
  | class   | AP     | class   | AP     | class   | AP     |
  |:--------|:-------|:--------|:-------|:--------|:-------|
  | body    | 36.701 | head    | 46.252 | hand    | 30.817 |
  | face    | 39.069 |         |        |         |        |

  per class AR:
  | class   | AR     | class   | AR     | class   | AR     |
  |:--------|:-------|:--------|:-------|:--------|:-------|
  | body    | 45.373 | head    | 51.447 | hand    | 42.447 |
  | face    | 44.588 |         |        |         |        |
  ```
- YOLOX-Body-Head-Hand-Face - Tiny

  ```
  # 640x640
  Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.428
  Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.739
  Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.436
  Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.301
  Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.622
  Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.739
  Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.148
  Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.396
  Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.491
  Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.380
  Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.685
  Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.776

  per class AP:
  | class   | AP     | class   | AP     | class   | AP     |
  |:--------|:-------|:--------|:-------|:--------|:-------|
  | body    | 42.022 | head    | 48.629 | hand    | 36.137 |
  | face    | 44.376 |         |        |         |        |

  per class AR:
  | class   | AR     | class   | AR     | class   | AR     |
  |:--------|:-------|:--------|:-------|:--------|:-------|
  | body    | 49.558 | head    | 53.405 | hand    | 44.791 |
  | face    | 48.730 |         |        |         |        |
  ```
- YOLOX-Body-Head-Hand-Face - S

  ```
  # 640x640
  Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.468
  Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.762
  Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.483
  Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.325
  Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.687
  Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.822
  Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.158
  Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.426
  Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.530
  Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.408
  Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.743
  Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.852

  per class AP:
  | class   | AP     | class   | AP     | class   | AP     |
  |:--------|:-------|:--------|:-------|:--------|:-------|
  | body    | 47.521 | head    | 52.127 | hand    | 40.412 |
  | face    | 47.326 |         |        |         |        |

  per class AR:
  | class   | AR     | class   | AR     | class   | AR     |
  |:--------|:-------|:--------|:-------|:--------|:-------|
  | body    | 54.453 | head    | 56.447 | hand    | 48.733 |
  | face    | 52.354 |         |        |         |        |
  ```
- YOLOX-Body-Head-Hand-Face - M

  ```
  # 640x640
  Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.504
  Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.787
  Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.531
  Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.354
  Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.723
  Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.859
  Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.163
  Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.450
  Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.556
  Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.431
  Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.767
  Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.880

  per class AP:
  | class   | AP     | class   | AP     | class   | AP     |
  |:--------|:-------|:--------|:-------|:--------|:-------|
  | body    | 53.512 | head    | 53.619 | hand    | 44.455 |
  | face    | 50.102 |         |        |         |        |

  per class AR:
  | class   | AR     | class   | AR     | class   | AR     |
  |:--------|:-------|:--------|:-------|:--------|:-------|
  | body    | 59.069 | head    | 57.530 | hand    | 51.307 |
  | face    | 54.691 |         |        |         |        |
  ```
- YOLOX-Body-Head-Hand-Face - L

  ```
  # 640x640
  Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.517
  Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.795
  Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.546
  Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.367
  Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.737
  Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.876
  Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.167
  Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.461
  Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.565
  Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.438
  Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.779
  Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.894

  per class AP:
  | class   | AP     | class   | AP     | class   | AP     |
  |:--------|:-------|:--------|:-------|:--------|:-------|
  | body    | 55.231 | head    | 54.309 | hand    | 46.417 |
  | face    | 50.873 |         |        |         |        |

  per class AR:
  | class   | AR     | class   | AR     | class   | AR     |
  |:--------|:-------|:--------|:-------|:--------|:-------|
  | body    | 60.349 | head    | 57.803 | hand    | 52.661 |
  | face    | 55.386 |         |        |         |        |
  ```
- YOLOX-Body-Head-Hand-Face - X

  ```
  Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.526
  Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.800
  Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.555
  Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.374
  Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.745
  Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.884
  Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.166
  Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.465
  Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.574
  Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.446
  Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.785
  Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.902

  per class AP:
  | class   | AP     | class   | AP     | class   | AP     |
  |:--------|:-------|:--------|:-------|:--------|:-------|
  | body    | 56.340 | head    | 54.669 | hand    | 47.280 |
  | face    | 51.928 |         |        |         |        |

  per class AR:
  | class   | AR     | class   | AR     | class   | AR     |
  |:--------|:-------|:--------|:-------|:--------|:-------|
  | body    | 61.122 | head    | 58.182 | hand    | 53.339 |
  | face    | 56.775 |         |        |         |        |
  ```
- Post-Process

  Because I add my own post-processing to the end of the model, which can be inferred by TensorRT, CUDA, and CPU, the benchmarked inference speed is the end-to-end processing speed including all pre-processing and post-processing. EfficientNMS in TensorRT is very slow, so the NMS step should be offloaded to the CPU.
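To make "end-to-end" concrete, the sketch below runs one of the post-processed ONNX files with a single `session.run` call. The output layout assumed here ([N, 7] rows of batch index, class id, score, x1, y1, x2, y2) and the unnormalized BGR preprocessing are my assumptions based on typical YOLOX exports; verify both against the demo script:

```python
import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "yolox_x_body_head_hand_face_0076_0.5228_post_1x3x480x640.onnx",
    providers=["CPUExecutionProvider"],
)

img = cv2.imread("sample.jpg")
resized = cv2.resize(img, (640, 480))  # (width, height) expected by this model
# YOLOX typically takes unnormalized pixel values; confirm against the demo script.
blob = resized.transpose(2, 0, 1)[np.newaxis].astype(np.float32)  # 1x3x480x640

# NMS is inside the graph, so the output is already the final detection list.
boxes = session.run(None, {session.get_inputs()[0].name: blob})[0]
for batchno, classid, score, x1, y1, x2, y2 in boxes:
    print(int(classid), f"{score:.2f}", int(x1), int(y1), int(x2), int(y2))
```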
- NMS default parameters

  |param|value|note|
  |:----|:----|:----|
  |max_output_boxes_per_class|20|Maximum number of outputs per class. `20` means that at most `20` bodies, `20` heads, `20` hands, and `20` faces are detected. The larger the value, the more people can be detected, but inference slows down slightly due to the larger overhead of NMS processing on the CPU. In addition, as the number of elements in the final output tensor grows, the amount of data transferred between hardware increases, raising transfer costs on the hardware circuit. It is therefore desirable to keep this value to the minimum necessary.|
  |iou_threshold|0.40|IoU threshold for suppressing overlapping bounding boxes of the same class. With `0.40`, a bounding box is excluded from the detection results when it overlaps another box of the same class by more than 40% IoU. The larger the value, the more overlap (occlusion) is tolerated, but over-detection of duplicate boxes may increase.|
  |score_threshold|0.25|Bounding box confidence threshold. Specify in the range `0.00` to `1.00`. The larger the value, the stricter the filtering and the lower the NMS processing load, but in exchange only bounding boxes with high confidence survive. This parameter has a very large impact on NMS overhead.|
- Change NMS parameters

  Use PINTO0309/sam4onnx to rewrite the `NonMaxSuppression` parameters in the ONNX file. For example:
  ```bash
  pip install onnxsim==0.4.33 \
  && pip install -U simple-onnx-processing-tools \
  && pip install -U onnx \
  && python -m pip install -U onnx_graphsurgeon \
      --index-url https://pypi.ngc.nvidia.com

  ### max_output_boxes_per_class
  ### Example of changing the maximum number of detections per class to 100.
  sam4onnx \
  --op_name main01_nonmaxsuppression11 \
  --input_onnx_file_path yolox_s_body_head_hand_post_0299_0.4983_1x3x256x320.onnx \
  --output_onnx_file_path yolox_s_body_head_hand_post_0299_0.4983_1x3x256x320.onnx \
  --input_constants main01_max_output_boxes_per_class int64 [100]

  ### iou_threshold
  ### Example of changing the allowable area of occlusion to 20%.
  sam4onnx \
  --op_name main01_nonmaxsuppression11 \
  --input_onnx_file_path yolox_s_body_head_hand_post_0299_0.4983_1x3x256x320.onnx \
  --output_onnx_file_path yolox_s_body_head_hand_post_0299_0.4983_1x3x256x320.onnx \
  --input_constants main01_iou_threshold float32 [0.20]

  ### score_threshold
  ### Example of changing the bounding box score threshold to 15%.
  sam4onnx \
  --op_name main01_nonmaxsuppression11 \
  --input_onnx_file_path yolox_s_body_head_hand_post_0299_0.4983_1x3x256x320.onnx \
  --output_onnx_file_path yolox_s_body_head_hand_post_0299_0.4983_1x3x256x320.onnx \
  --input_constants main01_score_threshold float32 [0.15]
  ```
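To confirm a rewrite took effect, the constant inputs feeding the `NonMaxSuppression` node can be read back with the `onnx` package (a small sketch; the `main01_*` initializer names follow the commands above):

```python
import onnx
from onnx import numpy_helper

model = onnx.load("yolox_s_body_head_hand_post_0299_0.4983_1x3x256x320.onnx")

# Print the constant initializers that feed the NonMaxSuppression node.
targets = {
    "main01_max_output_boxes_per_class",
    "main01_iou_threshold",
    "main01_score_threshold",
}
for init in model.graph.initializer:
    if init.name in targets:
        print(init.name, numpy_helper.to_array(init))
```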
- Post-processing structure

  PyTorch alone cannot generate this post-processing.
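Since `torch.onnx.export` cannot emit a `NonMaxSuppression` node from plain PyTorch ops, post-processing like this has to be composed at the ONNX graph level. A minimal self-contained sketch of that idea (my own illustration, not the actual graph used in this repository; four classes and corner-format boxes are assumed):

```python
import numpy as np
import onnx
from onnx import TensorProto, helper, numpy_helper

# Graph inputs: boxes [1, num_boxes, 4] in (y1, x1, y2, x2) corner format,
# and scores [1, 4, num_boxes] (4 classes: body, head, hand, face).
boxes = helper.make_tensor_value_info("boxes", TensorProto.FLOAT, [1, "boxes", 4])
scores = helper.make_tensor_value_info("scores", TensorProto.FLOAT, [1, 4, "boxes"])
# Output rows are (batch_index, class_index, box_index).
selected = helper.make_tensor_value_info(
    "selected_indices", TensorProto.INT64, ["selected", 3]
)

# NMS thresholds live in the graph as constant initializers, so tools like
# sam4onnx can rewrite them later without re-exporting the model.
inits = [
    numpy_helper.from_array(np.array([20], dtype=np.int64), "max_output_boxes_per_class"),
    numpy_helper.from_array(np.array([0.40], dtype=np.float32), "iou_threshold"),
    numpy_helper.from_array(np.array([0.25], dtype=np.float32), "score_threshold"),
]

nms = helper.make_node(
    "NonMaxSuppression",
    inputs=["boxes", "scores", "max_output_boxes_per_class",
            "iou_threshold", "score_threshold"],
    outputs=["selected_indices"],
)

graph = helper.make_graph([nms], "nms_post", [boxes, scores], [selected], initializer=inits)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 11)])
onnx.checker.check_model(model)
onnx.save(model, "nms_post.onnx")
```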
- INT8 quantization (TexasInstruments/YOLOX-ti-lite)

  In my experience, YOLOX suffers very large accuracy degradation during quantization because of its structure. The reasons for this and the workarounds are examined in detail by TexasInstruments. Below I have summarized the main points of how to minimize accuracy degradation during quantization through my own practice. I simply put TexasInstruments' suggestions into practice, and the degradation in accuracy during quantization was extremely small. Note, however, that the accuracy of the Float16 mixed-precision model before quantization is significantly degraded by the change of the activation function to `ReLU` and many other workarounds, as well as by the completely different datasets being benchmarked.
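For reference, a generic post-training static quantization sketch with ONNX Runtime is shown below. This is my own illustration of an INT8 workflow, not the TexasInstruments QAT recipe discussed above; the file names and calibration image folder are hypothetical placeholders:

```python
import glob

import cv2
import numpy as np
from onnxruntime.quantization import CalibrationDataReader, QuantType, quantize_static

class ImageCalibrationReader(CalibrationDataReader):
    """Feeds a handful of representative images to the calibrator."""

    def __init__(self, image_glob: str, input_name: str, size=(640, 480)):
        self.paths = iter(glob.glob(image_glob))
        self.input_name = input_name
        self.size = size

    def get_next(self):
        path = next(self.paths, None)
        if path is None:
            return None  # signals end of calibration data
        img = cv2.resize(cv2.imread(path), self.size)
        blob = img.transpose(2, 0, 1)[np.newaxis].astype(np.float32)
        return {self.input_name: blob}

quantize_static(
    "yolox_ti_body_head_hand_face_float32.onnx",   # hypothetical input file
    "yolox_ti_body_head_hand_face_int8.onnx",      # hypothetical output file
    ImageCalibrationReader("calib_images/*.jpg", "input"),
    weight_type=QuantType.QInt8,
)
```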
If this work has contributed in any way to your research or business, I would be happy to be cited in your literature.
```bibtex
@software{YOLOX-Body-Head-Hand-Face,
  author={Katsuya Hyodo},
  title={Lightweight human detection model generated using a high-quality human dataset (Body-Head-Hand-Face)},
  url={https://github.com/PINTO0309/PINTO_model_zoo/tree/main/434_YOLOX-Body-Head-Hand-Face},
  year={2024},
  month={1},
  doi={10.5281/zenodo.10229410},
}
```
I am very grateful for their excellent work.
- COCO-Hand

  https://vision.cs.stonybrook.edu/~supreeth/

  ```bibtex
  @article{Hand-CNN,
    title={Contextual Attention for Hand Detection in the Wild},
    author={Supreeth Narasimhaswamy and Zhengwei Wei and Yang Wang and Justin Zhang and Minh Hoai},
    booktitle={International Conference on Computer Vision (ICCV)},
    year={2019},
    url={https://arxiv.org/pdf/1904.04882.pdf}
  }
  ```
- YOLOX

  https://github.com/Megvii-BaseDetection/YOLOX

  ```bibtex
  @article{yolox2021,
    title={YOLOX: Exceeding YOLO Series in 2021},
    author={Ge, Zheng and Liu, Songtao and Wang, Feng and Li, Zeming and Sun, Jian},
    journal={arXiv preprint arXiv:2107.08430},
    year={2021}
  }
  ```
- YOLOX-ti-lite
- yolox-ti-lite_tflite
- YOLOX-Colaboratory-Training-Sample

  Kazuhito Takahashi (https://github.com/Kazuhito00)

  https://github.com/Kazuhito00/YOLOX-Colaboratory-Training-Sample
- Synthesize and retrain the dataset to further improve model performance: CD-COCO (Complex Distorted COCO database for Scene-Context-Aware computer vision)

  ```bibtex
  @INPROCEEDINGS{10323035,
    author={Beghdadi, Ayman and Beghdadi, Azeddine and Mallem, Malik and Beji, Lotfi and Cheikh, Faouzi Alaya},
    booktitle={2023 11th European Workshop on Visual Information Processing (EUVIP)},
    title={CD-COCO: A Versatile Complex Distorted COCO Database for Scene-Context-Aware Computer Vision},
    year={2023},
    pages={1-6},
    doi={10.1109/EUVIP58404.2023.10323035}
  }
  ```