
Releases: intel/neural-compressor

Intel® Low Precision Optimization Tool v1.3 Release

16 Apr 14:58

The Intel® Low Precision Optimization Tool v1.3 release features:

  1. FP32 optimization & auto-mixed precision (BF16/FP32) for TensorFlow
  2. Dynamic quantization support for PyTorch
  3. ONNX Runtime v1.7 support
  4. Configurable benchmarking support (multiple instances, warmup, etc.)
  5. Multi-batch-size calibration & mAP metric for object detection models
  6. Experimental user-facing APIs for better usability
  7. Support for various HuggingFace models
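Dynamic quantization (item 2) computes quantization parameters at inference time from each tensor's observed value range, rather than from an offline calibration pass. The pure-Python sketch below illustrates only the idea; it is not LPOT's implementation, and the function names are hypothetical.

```python
# Illustrative sketch of dynamic quantization: the scale is derived at
# runtime from the tensor's observed range (symmetric, signed int8 here).
# Hypothetical helper names; not LPOT's actual code.

def dynamic_quantize(values, num_bits=8):
    """Quantize a list of floats to signed ints with a runtime-derived scale."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    max_abs = max(abs(v) for v in values) or 1.0
    scale = max_abs / qmax                      # symmetric quantization
    q = [max(qmin, min(qmax, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

acts = [0.5, -1.2, 3.4, 0.0]
q, scale = dynamic_quantize(acts)
approx = dequantize(q, scale)   # close to acts, within one quantization step
```

In practice this feature is driven through LPOT's PyTorch backend and its configuration file, not hand-rolled code like this.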

Validated Configurations:

  • Python 3.6, 3.7 & 3.8
  • CentOS 7 & Ubuntu 18.04
  • Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0 and 1.15.0 UP1 & UP2
  • PyTorch 1.5.0+cpu, 1.6.0+cpu, IPEX
  • MXNet 1.7.0
  • ONNX Runtime 1.6.0, 1.7.0

Distribution:

  Channel          Links                               Install Command
  Source (GitHub)  https://github.com/intel/lpot.git   $ git clone https://github.com/intel/lpot.git
  Binary (Pip)     https://pypi.org/project/lpot       $ pip install lpot
  Binary (Conda)   https://anaconda.org/intel/lpot     $ conda install lpot -c conda-forge -c intel

Contact:

Please feel free to contact lpot.maintainers@intel.com if you have any questions.

Intel® Low Precision Optimization Tool v1.2.1 Release

02 Apr 14:53

The Intel® Low Precision Optimization Tool v1.2.1 release features:

  1. User-facing API backward compatibility with v1.1 and v1.0
  2. Refined experimental user-facing APIs for a better out-of-box experience

Validated Configurations:

  • Python 3.6, 3.7 & 3.8
  • CentOS 7 & Ubuntu 18.04
  • Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0 and 1.15.0 UP1 & UP2
  • PyTorch 1.5.0+cpu, 1.6.0+cpu, IPEX
  • MXNet 1.7.0
  • ONNX Runtime 1.6.0

Distribution:

  Channel          Links                               Install Command
  Source (GitHub)  https://github.com/intel/lpot.git   $ git clone https://github.com/intel/lpot.git
  Binary (Pip)     https://pypi.org/project/lpot       $ pip install lpot
  Binary (Conda)   https://anaconda.org/intel/lpot     $ conda install lpot -c conda-forge -c intel

Contact:

Please feel free to contact lpot.maintainers@intel.com if you have any questions.

Intel® Low Precision Optimization Tool v1.2 Release

12 Mar 15:31

The Intel® Low Precision Optimization Tool v1.2 release features:

  • Broad TensorFlow model type support
  • Operator-wise quantization scheme for ONNX Runtime
  • MSE-driven tuning for metric-free use cases
  • UX improvements, including UI web server preview support
  • Support for more key models
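MSE-driven tuning needs no labeled accuracy metric: candidate quantization configurations are ranked by how closely their outputs match the FP32 baseline, and the lowest-error candidate wins. A minimal sketch of that ranking step, with hypothetical names; this is the idea, not LPOT's tuning code:

```python
# Rank candidate quantization configs by mean squared error against the
# FP32 reference outputs. Config names and outputs here are made up.

def mse(ref, out):
    return sum((r - o) ** 2 for r, o in zip(ref, out)) / len(ref)

def pick_best_config(fp32_outputs, candidates):
    """candidates: dict mapping config name -> output list for the same inputs."""
    return min(candidates, key=lambda name: mse(fp32_outputs, candidates[name]))

fp32 = [0.10, 0.80, 0.35]
candidates = {
    "per-tensor": [0.12, 0.70, 0.40],    # larger deviation from FP32
    "per-channel": [0.11, 0.79, 0.36],   # closer to FP32
}
best = pick_best_config(fp32, candidates)   # "per-channel" has the lower MSE
```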

Validated Configurations:

  • Python 3.6, 3.7 & 3.8
  • CentOS 7 & Ubuntu 18.04
  • Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0 and 1.15.0 UP1 & UP2
  • PyTorch 1.5.0+cpu, 1.6.0+cpu, IPEX
  • MXNet 1.7.0
  • ONNX Runtime 1.6.0

Distribution:

  Channel          Links                               Install Command
  Source (GitHub)  https://github.com/intel/lpot.git   $ git clone https://github.com/intel/lpot.git
  Binary (Pip)     https://pypi.org/project/lpot       $ pip install lpot
  Binary (Conda)   https://anaconda.org/intel/lpot     $ conda install lpot -c conda-forge -c intel

Contact:

Please feel free to contact lpot.maintainers@intel.com if you have any questions.

Intel® Low Precision Optimization Tool v1.1 Release

31 Dec 13:41

The Intel® Low Precision Optimization Tool v1.1 release features:

  • Preview support for new backends (PyTorch/IPEX, ONNX Runtime)
  • Built-in industry datasets/metrics and custom registration
  • Preliminary input/output node auto-detection on TensorFlow models
  • New INT8 quantization recipes: bias correction and label balance
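The bias-correction recipe compensates for the systematic shift that quantization introduces in a layer's output mean: the per-channel mean error between FP32 and quantized outputs is folded back into the bias. A simplified pure-Python sketch of the idea; names and shapes here are illustrative, not the recipe's actual code:

```python
# Bias correction sketch: measure the per-channel mean difference between
# FP32 and quantized outputs over calibration samples, then add that
# difference into the layer bias to remove the systematic shift.

def bias_correction(fp32_outputs, int8_outputs, bias):
    """Each *_outputs is a list of samples; each sample is a list of channel values."""
    n = len(fp32_outputs)
    corrected = list(bias)
    for c in range(len(bias)):
        mean_err = sum(f[c] - q[c] for f, q in zip(fp32_outputs, int8_outputs)) / n
        corrected[c] = bias[c] + mean_err
    return corrected

fp32 = [[1.0, 2.0], [3.0, 4.0]]   # two calibration samples, two channels
int8 = [[0.9, 2.2], [2.9, 4.2]]   # quantized model's outputs for the same inputs
new_bias = bias_correction(fp32, int8, [0.0, 0.0])   # approximately [0.1, -0.2]
```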

Validated Configurations:

  • Python 3.6 & 3.7
  • CentOS 7
  • Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0 and 1.15.0 UP1 & UP2
  • PyTorch 1.5.0+cpu
  • MXNet 1.7.0
  • ONNX Runtime 1.6.0

Distribution:

  Channel          Links                               Install Command
  Source (GitHub)  https://github.com/intel/lpot.git   $ git clone https://github.com/intel/lpot.git
  Binary (Pip)     https://pypi.org/project/lpot       $ pip install lpot
  Binary (Conda)   https://anaconda.org/intel/lpot     $ conda install lpot -c conda-forge -c intel

Contact:

Please feel free to contact lpot.maintainers@intel.com if you have any questions.

Intel® Low Precision Optimization Tool v1.0 Release

30 Oct 15:24

The Intel® Low Precision Optimization Tool v1.0 release features:

  • Refined user-facing APIs for the best out-of-box experience
  • TPE tuning strategy (experimental)
  • Pruning POC support on PyTorch
  • TensorBoard POC support for tuning analysis
  • Built-in INT8/dummy dataloader support
  • Built-in benchmarking support
  • Tuning history to aid strategy fine-tuning
  • TensorFlow Keras and checkpoint model types supported as input
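Built-in benchmarking typically runs a few warmup iterations, excluded from timing, before measuring latency and throughput over the timed iterations. A minimal sketch of that measurement loop, using a hypothetical `benchmark` helper rather than LPOT's YAML-driven interface:

```python
# Benchmarking sketch: warmup iterations stabilize caches and lazy
# initialization, then latency/throughput are measured over timed runs.

import time

def benchmark(fn, warmup=5, iterations=20):
    for _ in range(warmup):              # not timed
        fn()
    start = time.perf_counter()
    for _ in range(iterations):
        fn()
    elapsed = time.perf_counter() - start
    latency = elapsed / iterations       # seconds per call
    throughput = iterations / elapsed    # calls per second
    return latency, throughput

lat, tput = benchmark(lambda: sum(range(1000)))
```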

Validated Configurations:

  • Python 3.6 & 3.7
  • CentOS 7
  • Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0 and 1.15 UP1
  • PyTorch 1.5.0+cpu
  • MXNet 1.7.0

Distribution:

  Channel          Links                                      Install Command
  Source (GitHub)  https://github.com/intel/lp-opt-tool.git   $ git clone https://github.com/intel/lp-opt-tool.git
  Binary (Pip)     https://pypi.org/project/ilit              $ pip install ilit
  Binary (Conda)   https://anaconda.org/intel/ilit            $ conda install ilit -c intel

Contact:

Please feel free to contact ilit.maintainers@intel.com if you have any questions.

Intel® Low Precision Optimization Tool v1.0 Beta Release

31 Aug 10:16

The Intel® Low Precision Optimization Tool v1.0 beta release features:

  • Built-in dataloaders and evaluators
  • Random and exhaustive tuning strategies
  • Mixed-precision tuning support on TensorFlow (INT8/BF16/FP32)
  • Quantization-aware training POC support on PyTorch
  • Mainstream TensorFlow version support, including 1.15.2, 1.15 UP1 and 2.1.0
  • 50+ validated models
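BF16 shares FP32's 8 exponent bits but keeps only 7 mantissa bits, which is why mixed-precision tuning can trade a little precision for speed without changing dynamic range. The sketch below shows the numeric effect by truncating an FP32 value to BF16 precision; this illustrates the format, not how TensorFlow performs the cast:

```python
# BF16 sketch: truncating the low 16 bits of an FP32 representation keeps
# the sign and exponent but drops all but 7 mantissa bits.

import struct

def fp32_to_bf16(x):
    """Round-toward-zero FP32 -> BF16 -> FP32, via bit truncation."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

val = fp32_to_bf16(3.14159)   # low mantissa bits are lost: 3.140625
exact = fp32_to_bf16(1.0)     # values representable in BF16 pass through: 1.0
```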

Supported Models:

  TensorFlow model      Category
  ResNet50 V1           Image Recognition
  ResNet50 V1.5         Image Recognition
  ResNet101             Image Recognition
  Inception V1          Image Recognition
  Inception V2          Image Recognition
  Inception V3          Image Recognition
  Inception V4          Image Recognition
  ResNetV2_50           Image Recognition
  ResNetV2_101          Image Recognition
  ResNetV2_152          Image Recognition
  Inception ResNet V2   Image Recognition
  SSD ResNet50 V1       Object Detection
  Wide & Deep           Recommendation
  VGG16                 Image Recognition
  VGG19                 Image Recognition
  Style_transfer        Style Transfer

  PyTorch model         Category
  BERT-Large RTE        Language Translation
  BERT-Large QNLI       Language Translation
  BERT-Large CoLA       Language Translation
  BERT-Base SST-2       Language Translation
  BERT-Base RTE         Language Translation
  BERT-Base STS-B       Language Translation
  BERT-Base CoLA        Language Translation
  BERT-Base MRPC        Language Translation
  DLRM                  Recommendation
  BERT-Large MRPC       Language Translation
  ResNext101_32x8d      Image Recognition
  BERT-Large SQUAD      Language Translation
  ResNet50 V1.5         Image Recognition
  ResNet18              Image Recognition
  Inception V3          Image Recognition
  YOLO V3               Object Detection
  Peleenet              Image Recognition
  ResNest50             Image Recognition
  SE_ResNext50_32x4d    Image Recognition
  ResNet50 V1.5 QAT     Image Recognition
  ResNet18 QAT          Image Recognition

  MXNet model           Category
  ResNet50 V1           Image Recognition
  MobileNet V1          Image Recognition
  MobileNet V2          Image Recognition
  SSD-ResNet50          Object Detection
  SqueezeNet V1         Image Recognition
  ResNet18              Image Recognition
  Inception V3          Image Recognition

Known Issues:

  • The TensorFlow ResNet50 v1.5 INT8 model crashes on the TensorFlow 1.15 UP1 branch

Validated Configurations:

  • Python 3.6 & 3.7
  • CentOS 7
  • Intel TensorFlow 1.15.2, 2.1.0 and 1.15 UP1
  • PyTorch 1.5
  • MXNet 1.6

Distribution:

  Channel          Links                                      Install Command
  Source (GitHub)  https://github.com/intel/lp-opt-tool.git   $ git clone https://github.com/intel/lp-opt-tool.git
  Binary (Pip)     https://pypi.org/project/ilit              $ pip install ilit
  Binary (Conda)   https://anaconda.org/intel/ilit            $ conda config --add channels intel && conda install ilit

Contact:

Please feel free to contact ilit.maintainers@intel.com if you have any questions.

Intel® Low Precision Optimization Tool (iLiT) v1.0 Alpha Release

11 Aug 12:14

Intel® Low Precision Optimization Tool (iLiT) is an open-source Python library intended to deliver a unified low-precision inference solution across multiple Intel-optimized DL frameworks on both CPU and GPU. It supports automatic accuracy-driven tuning strategies, along with additional objectives such as performance, model size, and memory footprint. It also provides easy extension capability for new backends, tuning strategies, metrics, and objectives.

Feature List:

  • Unified low-precision quantization interface across multiple Intel-optimized frameworks (TensorFlow, PyTorch, and MXNet)
  • Built-in tuning strategies, including Basic, Bayesian, and MSE
  • Built-in evaluation metrics, including TopK (image classification), F1 (NLP), and CocoMAP (object detection)
  • Built-in tuning objectives, including Performance, ModelSize, and Footprint
  • Extensible API design for adding new strategies, framework backends, metrics, and objectives
  • KL-divergence calibration for TensorFlow and MXNet
  • Resuming the tuning process from a checkpoint
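KL-divergence calibration picks an activation clipping threshold whose resulting distribution diverges least from the original. The sketch below shows only the comparison step (the real calibration also re-bins the clipped histogram to the quantized bit-width); the histograms here are made up for illustration:

```python
# KL comparison step of calibration: among candidate (already clipped and
# normalized) activation distributions, keep the one with the lowest KL
# divergence from the original distribution.

import math

def kl_divergence(p, q, eps=1e-10):
    """KL(p || q) for two normalized histograms of equal length."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

original    = [0.1, 0.6, 0.25, 0.05]
candidate_a = [0.1, 0.6, 0.25, 0.05]   # wide threshold: preserves the tail
candidate_b = [0.12, 0.63, 0.25, 0.0]  # tight threshold: clips the tail away
best = min([candidate_a, candidate_b],
           key=lambda q: kl_divergence(original, q))   # candidate_a wins
```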

Supported Models:

  • MXNet: ResNet50 V1, MobileNet V1, MobileNet V2, SSD-ResNet50, SqueezeNet V1, ResNet18, Inception V3
  • PyTorch: DLRM, BERT-Large (MRPC, SQUAD, RTE, QNLI, CoLA), BERT-Base (SST-2, RTE, STS-B, CoLA, MRPC), ResNet101, ResNet50 V1.5, ResNet18
  • TensorFlow: ResNet50 V1, ResNet50 V1.5, ResNet101, Inception V1, Inception V2, Inception V3, Inception V4, Inception ResNet V2, SSD ResNet50 V1

Known Issues:

  • Statistics collection for the KL algorithm is slow in TensorFlow due to the lack of tensor inspector APIs
  • The MSE tuning strategy is not supported in PyTorch

Validated Configurations:

  • Python 3.6 & 3.7
  • CentOS 7
  • TensorFlow 1.15, 2.0 and 2.1
  • PyTorch 1.5
  • MXNet 1.6

Distribution:

  Channel          Links                                      Install Command
  Source (GitHub)  https://github.com/intel/lp-opt-tool.git   $ git clone https://github.com/intel/lp-opt-tool.git
  Binary (Pip)     https://pypi.org/project/ilit              $ pip install ilit
  Binary (Conda)   https://anaconda.org/intel/ilit            $ conda config --add channels intel && conda install ilit

Contact:

Please feel free to contact ilit.maintainers@intel.com if you have any questions.