
Introduction to Intel® Neural Compressor

Intel® Neural Compressor (formerly known as Intel® Low Precision Optimization Tool) is an open-source Python library running on Intel CPUs and GPUs. It delivers unified interfaces across multiple deep learning frameworks for popular network compression technologies such as quantization, pruning, and knowledge distillation. The tool supports automatic, accuracy-driven tuning strategies to help users quickly find the best quantized model. It also implements several weight pruning algorithms that generate pruned models meeting a predefined sparsity goal, and it supports knowledge distillation from a teacher model to a student model. Intel® Neural Compressor is one of the critical AI software components in the Intel® oneAPI AI Analytics Toolkit.

Note: GPU support is under development.

Visit the Intel® Neural Compressor online document website at: https://intel.github.io/neural-compressor.

Installation

Prerequisites

  • Python version: 3.7 or 3.8 or 3.9 or 3.10

Install on Linux

# install stable version from pip
pip install neural-compressor

# install nightly version from pip
pip install -i https://test.pypi.org/simple/ neural-compressor

# install stable version from conda
conda install neural-compressor -c conda-forge -c intel

More installation methods can be found at Installation Guide.
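To verify the installation, a quick import check can be run. This is a minimal sanity check; it assumes the installed package exposes a __version__ attribute, as recent releases do:

# check that the package imports and report its version
python -c "import neural_compressor; print(neural_compressor.__version__)"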

Getting Started

  • Quantization with Python API
# A TensorFlow example
pip install tensorflow
# Prepare an FP32 model
wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v1_6/mobilenet_v1_1.0_224_frozen.pb

import tensorflow as tf
from neural_compressor.experimental import Quantization, common

tf.compat.v1.disable_eager_execution()
quantizer = Quantization()
quantizer.model = './mobilenet_v1_1.0_224_frozen.pb'
# Calibrate on dummy data matching the model's input shape
dataset = quantizer.dataset('dummy', shape=(1, 224, 224, 3))
quantizer.calib_dataloader = common.DataLoader(dataset)
# Run post-training quantization (an accuracy-aware variant is sketched after this list)
quantizer.fit()
  • Quantization with GUI
# An ONNX example
pip install onnx==1.9.0 onnxruntime==1.10.0 onnxruntime-extensions
# Prepare an FP32 model (use the raw file URL, not the GitHub HTML page)
wget https://github.com/onnx/models/raw/main/vision/classification/resnet/model/resnet50-v1-12.onnx
# Start the GUI (a Python API variant of this example is sketched below)
inc_bench
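For reference, the same ONNX model can also be quantized without the GUI. The following is a minimal sketch using the experimental Python API shown above; the dummy input shape of (1, 3, 224, 224) is an assumption based on ResNet50's standard NCHW input:

from neural_compressor.experimental import Quantization, common

quantizer = Quantization()
quantizer.model = './resnet50-v1-12.onnx'
# Dummy calibration data; (1, 3, 224, 224) assumes ResNet50's standard NCHW input
dataset = quantizer.dataset('dummy', shape=(1, 3, 224, 224))
quantizer.calib_dataloader = common.DataLoader(dataset)
quantizer.fit()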
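Both examples above calibrate on dummy data, so quantizer.fit() performs no accuracy check. To exercise the accuracy-driven tuning described in the introduction, an evaluation function can be attached to the quantizer. The sketch below illustrates the idea for the TensorFlow model; eval_func here is a stub returning a fixed score, where a real workflow would compute validation accuracy:

import tensorflow as tf
from neural_compressor.experimental import Quantization, common

tf.compat.v1.disable_eager_execution()

def eval_func(model):
    # Stub: a real workflow would run the validation set here and
    # return a scalar accuracy for the candidate quantized model.
    return 1.0

quantizer = Quantization()
quantizer.model = './mobilenet_v1_1.0_224_frozen.pb'
dataset = quantizer.dataset('dummy', shape=(1, 224, 224, 3))
quantizer.calib_dataloader = common.DataLoader(dataset)
quantizer.eval_func = eval_func      # drives accuracy-aware tuning
q_model = quantizer.fit()            # returns the best quantized model found
if q_model is not None:
    q_model.save('./quantized_model')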

System Requirements

Intel® Neural Compressor supports systems based on Intel 64 architecture or compatible processors and is specially optimized for the following CPUs:

  • Intel Xeon Scalable processors (formerly Skylake, Cascade Lake, Cooper Lake, and Ice Lake)
  • Future Intel Xeon Scalable processors (code name Sapphire Rapids)

Validated Software Environment

  • OS version: CentOS 8.4, Ubuntu 20.04
  • Python version: 3.7, 3.8, 3.9, 3.10
Framework          Version
TensorFlow         2.8.0, 2.7.0, 2.6.2
Intel TensorFlow   2.8.0, 2.7.0, 1.15.0UP3
PyTorch            1.11.0+cpu, 1.10.0+cpu, 1.9.0+cpu
IPEX               1.11.0, 1.10.0, 1.9.0
ONNX Runtime       1.10.0, 1.9.0, 1.8.0
MXNet              1.8.0, 1.7.0, 1.6.0

Validated Models

Intel® Neural Compressor has validated 420+ examples, achieving a geomean performance speedup of 2.2x, and up to 4.2x, on VNNI-capable hardware while minimizing accuracy loss. More details on the validated models are available here.

Architecture

[Architecture diagram]

Documentation

Overview
  • Infrastructure, Tutorial, Examples, GUI, APIs
  • Intel oneAPI AI Analytics Toolkit, AI and Analytics Samples

Basic API
  • Transform, Dataset, Metric, Objective

Deep Dive
  • Quantization, Pruning, Knowledge Distillation, Mixed Precision
  • Benchmarking, Distributed Training, Model Conversion, TensorBoard

Advanced Topics
  • Adaptor, Strategy

Selected Publications

View the full publication list.

Additional Content

Hiring

We are hiring. Please send your resume to inc.maintainers@intel.com if you are interested in model compression techniques.
