| Installation | Documentation | Tutorials |
GluonCV provides implementations of the state-of-the-art (SOTA) deep learning models in computer vision.
It is designed for engineers, researchers, and students to quickly prototype products and research ideas based on these models. This toolkit offers four main features:
- Training scripts to reproduce SOTA results reported in research papers
- A large number of pre-trained models
- Carefully designed APIs that greatly reduce the implementation complexity
- Community support
===========================================================================
Modification Notes
===========================================================================
This fork makes small changes for practical deployment needs, focused on two areas: conversion to the OpenVINO framework and inference acceleration.
Models modified so far:
- the YOLOv3 family

Details of the changes:
- Whether the NMS module runs in the forward pass is now optional, controlled by the `use_nms` parameter (default: `True`).
- Already-trained `params` files can be loaded directly into the current yolo3 models, and the `use_nms` setting has no effect on training.
- When converting to the OpenVINO framework, disable NMS via the `use_nms` parameter, then implement/apply NMS on the model outputs yourself. Running `pip install nms` provides ready-made NMS functions.
- Tested OpenVINO version: 2020.4.287
- Tested conversion command:
python3 mo_mxnet.py --input_model yolo3-0000.params --input_shape [1,3,416,416] --data_type FP16
- A self-implemented NMS algorithm will be added later; NumPy and C++ versions are planned.
Code example: get_model("yolo3_darknet53_voc", use_nms=False)
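Until the planned NumPy NMS lands in this repository, the post-processing step can be sketched in plain NumPy. The function below is a minimal greedy NMS sketch for boxes in `[x1, y1, x2, y2]` layout; the function name, signature, and default IoU threshold are illustrative assumptions, not part of this repository's API:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.45):
    """Greedy non-maximum suppression (illustrative sketch).

    boxes  : (N, 4) array of [x1, y1, x2, y2] corners.
    scores : (N,) array of confidence scores.
    Returns indices of the boxes to keep, highest score first.
    """
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the kept box with every remaining box.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Drop the remaining boxes that overlap the kept box too much.
        order = order[1:][iou <= iou_thresh]
    return keep
```

For a detector exported with `use_nms=False`, this would be applied per class to the raw box/score outputs of the OpenVINO-converted model.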
===========================================================================
Check out the HD video on YouTube or Bilibili.
| Application | Illustration | Available Models |
|---|---|---|
| Image Classification: recognize an object in an image. | | 50+ models, including ResNet, MobileNet, DenseNet, VGG, ... |
| Object Detection: detect multiple objects with their bounding boxes in an image. | | Faster RCNN, SSD, Yolo-v3 |
| Semantic Segmentation: associate each pixel of an image with a categorical label. | | FCN, PSP, ICNet, DeepLab-v3 |
| Instance Segmentation: detect objects and associate each pixel inside the object area with an instance label. | | Mask RCNN |
| Pose Estimation: detect human pose from images. | | Simple Pose |
| Video Action Recognition: recognize human actions in a video. | | TSN, C3D, I3D, P3D, R3D, R2+1D, Non-local, SlowFast |
| GAN: generate visually deceptive images. | | WGAN, CycleGAN |
| Person Re-ID: re-identify pedestrians across scenes. | | Market1501 baseline |
GluonCV supports Python 2.7/3.5 or later. The easiest way to install is via pip.
The following commands install the stable version of GluonCV and MXNet:
pip install gluoncv --upgrade
pip install -U --pre mxnet -f https://dist.mxnet.io/python/mkl
# if CUDA 10.0 is installed
pip install -U --pre mxnet -f https://dist.mxnet.io/python/cu100mkl
The latest stable version of GluonCV is 0.6 and depends on mxnet >= 1.4.0.
You can get access to the latest features and bug fixes with the following commands, which install the nightly builds of GluonCV and MXNet:
pip install gluoncv --pre --upgrade
pip install -U --pre mxnet -f https://dist.mxnet.io/python/mkl
# if CUDA 10.0 is installed
pip install -U --pre mxnet -f https://dist.mxnet.io/python/cu100mkl
There are multiple pre-built MXNet packages available. Please refer to the mxnet packages page if you need more details about MXNet versions.
GluonCV documentation is available at our website.
All tutorials are available at our website!
Check out how to use GluonCV for your own research or projects.
- For background knowledge of deep learning or CV, please refer to the open source book Dive into Deep Learning. If you are new to Gluon, please check out our 60-minute crash course.
- For getting started quickly, refer to notebook runnable examples at Examples.
- For advanced examples, check out our Scripts.
- For experienced users, check out our API Notes.
If our code or models help your research, please cite our papers:
@article{gluoncvnlp2020,
author = {Jian Guo and He He and Tong He and Leonard Lausen and Mu Li and Haibin Lin and Xingjian Shi and Chenguang Wang and Junyuan Xie and Sheng Zha and Aston Zhang and Hang Zhang and Zhi Zhang and Zhongyue Zhang and Shuai Zheng and Yi Zhu},
title = {GluonCV and GluonNLP: Deep Learning in Computer Vision and Natural Language Processing},
journal = {Journal of Machine Learning Research},
year = {2020},
volume = {21},
number = {23},
pages = {1-7},
url = {http://jmlr.org/papers/v21/19-429.html}
}
@article{he2018bag,
title={Bag of Tricks for Image Classification with Convolutional Neural Networks},
author={He, Tong and Zhang, Zhi and Zhang, Hang and Zhang, Zhongyue and Xie, Junyuan and Li, Mu},
journal={arXiv preprint arXiv:1812.01187},
year={2018}
}
@article{zhang2019bag,
title={Bag of Freebies for Training Object Detection Neural Networks},
author={Zhang, Zhi and He, Tong and Zhang, Hang and Zhang, Zhongyue and Xie, Junyuan and Li, Mu},
journal={arXiv preprint arXiv:1902.04103},
year={2019}
}
@article{zhang2020resnest,
title={ResNeSt: Split-Attention Networks},
author={Zhang, Hang and Wu, Chongruo and Zhang, Zhongyue and Zhu, Yi and Zhang, Zhi and Lin, Haibin and Sun, Yue and He, Tong and Muller, Jonas and Manmatha, R. and Li, Mu and Smola, Alexander},
journal={arXiv preprint},
year={2020}
}