v1.9.0: OpenVINO model improvements, TorchScript export, INC quantized SD pipeline
## OpenVINO and NNCF
- Ensure compatibility with OpenVINO `v2023.0` by @jiwaszki in #265
- Add Stable Diffusion quantization example by @AlexKoff88 in #294 #304 #326
- Enable export of quantized decoder models with key/value cache support by @echarlaix in #303 (see the first example after this list)
- Set height and width during inference for static Stable Diffusion models by @echarlaix in #308 (see the second example after this list)
- Set batch size to 1 by default for Wav2Vec2 for NNCF `v2.5.0` compatibility by @ljaljushkin in #312
- Ensure compatibility with NNCF `v2.5` by @ljaljushkin in #314
- Fix OVModel for the BLOOM architecture by @echarlaix in #340
- Add height and width attributes to the Stable Diffusion OVModel and fix export for `torch>=v2.0.0` by @eaidova in #342
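
A minimal sketch of the cache-enabled decoder path (#303 above): a causal decoder is loaded as an OpenVINO model with the key/value cache kept during export, so past states are reused at each generation step. The `export=True` flag and the `gpt2` checkpoint are illustrative assumptions, and the same loading path is what a quantized decoder would go through.

```python
from transformers import AutoTokenizer
from optimum.intel import OVModelForCausalLM

model_id = "gpt2"  # assumption: any causal LM checkpoint works here
# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly;
# use_cache=True exports the model with key/value cache inputs so the
# past states are reused at every generation step.
model = OVModelForCausalLM.from_pretrained(model_id, export=True, use_cache=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("The OpenVINO runtime is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```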
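
And a minimal sketch of working with statically-shaped Stable Diffusion pipelines (#308 above): once the pipeline is reshaped to fixed dimensions, the height and width no longer need to be passed at inference. The model id and the `reshape` signature follow the optimum-intel documentation; treat both as assumptions rather than guarantees for this exact release.

```python
from optimum.intel import OVStableDiffusionPipeline

model_id = "runwayml/stable-diffusion-v1-5"  # assumption: example checkpoint
pipeline = OVStableDiffusionPipeline.from_pretrained(model_id, export=True)

# Statically reshape the pipeline to a fixed batch size and resolution;
# the shapes are then frozen for all subsequent calls.
pipeline.reshape(batch_size=1, height=512, width=512, num_images_per_prompt=1)

# No need to pass height/width here: the static shapes are used at inference.
image = pipeline("sailing ship in a storm by Rembrandt").images[0]
image.save("ship.png")
```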
## Intel Neural Compressor
- Add `TSModelForCausalLM` to enable TorchScript export, loading and inference for causal LM models by @echarlaix in #283 (see the first example after this list)
- Remove deprecated INC classes by @echarlaix in #293
- Enable IPEX model inference for the text generation task by @jiqing-feng in #227 #300
- Add `INCStableDiffusionPipeline` to enable loading of INC quantized Stable Diffusion models by @echarlaix in #305 (see the second example after this list)
- Enable providing a quantization function instead of a calibration dataset during INC static post-training quantization by @PenghuiCheng in #309
- Fix `INCSeq2SeqTrainer` evaluation step by @AbhishekSalian in #335
- Fix `INCSeq2SeqTrainer` padding step by @echarlaix in #336
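
A minimal sketch of the new `TSModelForCausalLM` (#283 above), assuming it is importable from the top-level `optimum.intel` namespace and that `export=True` triggers the TorchScript tracing of the PyTorch checkpoint:

```python
from transformers import AutoTokenizer
from optimum.intel import TSModelForCausalLM

model_id = "gpt2"  # assumption: any causal LM checkpoint works here
# export=True is assumed to trace the model with torch.jit.trace and keep
# the resulting TorchScript module for inference.
model = TSModelForCausalLM.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```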
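
And a minimal sketch of loading an INC quantized Stable Diffusion model with the new `INCStableDiffusionPipeline` (#305 above); the Hub id below is a hypothetical placeholder, not a published checkpoint:

```python
from optimum.intel import INCStableDiffusionPipeline

# Hypothetical Hub id standing in for any INC-quantized SD checkpoint.
model_id = "my-org/stable-diffusion-v1-5-inc-int8"
pipeline = INCStableDiffusionPipeline.from_pretrained(model_id)

image = pipeline("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut.png")
```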
Full Changelog: https://github.com/huggingface/optimum-intel/commits/v1.9.0