- https://github.com/sun254/awesome-model-compression-and-acceleration a list of awesome papers on deep model compression and acceleration
- https://github.com/memoiry/Awesome-model-compression-and-acceleration a list of awesome papers on deep model compression and acceleration
- https://github.com/ZhishengWang/Embedded-Neural-Network a collection of works on reducing model size and on ASIC/FPGA accelerators for machine learning
- https://github.com/Ewenwan/MVision/blob/master/CNN/Deep_Compression/readme.md a survey of deep compression
- https://pocketflow.github.io/ homepage of PocketFlow
- https://github.com/Tencent/PocketFlow An Automatic Model Compression framework for developing smaller and faster AI applications from Tencent.
- https://github.com/DwangoMediaVillage/keras_compressor Model Compression CLI Tool for Keras.
- https://github.com/TianzhongSong/Model-Compression-Keras CNN compression for Keras
- https://github.com/walkerning/compression-tool a small compression tool for Caffe models using SVD and pruning.
- https://github.com/LitLeo/TensorRT_Tutorial a TensorRT tutorial
- https://github.com/dusty-nv/jetson-inference Guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
- https://github.com/NVIDIA/tensorrt-laboratory Explore the Capabilities of the TensorRT Platform
- https://github.com/lewes6369/tensorRTWrapper a wrapper for TensorRT networks (Caffe parser)
- https://github.com/lewes6369/TensorRT-Yolov3 TensorRT for Yolov3
- https://github.com/eric612/TensorRT-Yolov3 TensorRT for Yolov3
- https://github.com/eric612/TensorRT-Yolov3-models deployment of models trained with MobileNet-YOLOv3
- https://github.com/TLESORT/YOLO-TensorRT-GIE- an implementation of a trained YOLO neural network used with the TensorRT framework.
- https://github.com/Ghustwb/MobileNet-SSD-TensorRT accelerate MobileNet-SSD with TensorRT
- https://github.com/chenzhi1992/TensorRT-SSD Use TensorRT API to implement Caffe-SSD, SSD(channel pruning), Mobilenet-SSD
- https://github.com/PKUZHOU/MTCNN_FaceDetection_TensorRT MTCNN C++ implementation with NVIDIA TensorRT Inference accelerator SDK
- https://github.com/tensorlayer/openpose-plus High-Performance and Flexible Pose Estimation Framework using TensorFlow, OpenPose and TensorRT
- https://github.com/csvance/keras-tensorrt-jetson example of loading a Keras model into the TensorRT C++ API
- https://github.com/zhaozhixu/SqueezeDetTRT SqueezeDet implemented in CUDA & TensorRT
- https://github.com/lyk125/caffe-int8-convert-tools quantize Caffe models for ncnn based on the TensorRT 2.0 Int8 calibration tools, which use the KL-divergence algorithm to find a suitable threshold for quantizing activations from Float32 to Int8 (-128 to 127).
- https://github.com/chengshengchan/model_compression implementation of model compression with three knowledge distillation (teacher-student) methods.
- https://github.com/antspy/quantized_distillation Implements quantized distillation. Code for our paper "Model compression via distillation and quantization"
- https://github.com/guoxiaolu/model_compression Keras implementation of "Pruning Filters for Efficient ConvNets"
- https://github.com/Irtza/Keras_model_compression model compression based on Geoffrey Hinton's logit regression (distillation) method in Keras, applied to MNIST: 16x compression at over 0.95 accuracy
- https://github.com/Roll920/ThiNet Caffe models for the ICCV'17 paper "ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression"
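
Several of the pruning entries above (e.g. "Pruning Filters for Efficient ConvNets" and ThiNet) work by ranking convolutional filters by an importance score and dropping the weakest ones. A minimal NumPy sketch of L1-norm filter ranking, assuming a weight tensor laid out as `(out_channels, in_channels, kH, kW)` — the function name and API are illustrative, not taken from any of the repos listed:

```python
import numpy as np

def prune_filters_l1(weights, keep_ratio=0.5):
    """Keep the top fraction of conv filters ranked by L1 norm.

    weights: ndarray of shape (out_channels, in_channels, kH, kW)
    Returns (pruned_weights, kept_indices), with kept_indices in
    ascending order so the layer's channel ordering is preserved.
    """
    # One L1 norm per output filter (sum of |w| over all its weights).
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(weights.shape[0] * keep_ratio)))
    # Indices of the strongest filters, re-sorted to preserve order.
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])
    return weights[keep], keep

# Toy example: 4 filters, two of which are all-zero and get pruned.
w = np.zeros((4, 2, 3, 3))
w[0] = 1.0
w[2] = 2.0
pruned, kept = prune_filters_l1(w, keep_ratio=0.5)
```

In a full pipeline the corresponding input channels of the *next* layer must also be removed (and the network fine-tuned afterwards); this sketch only shows the ranking step.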