Releases: huggingface/optimum-intel

v1.16.1: Patch release

25 Apr 08:09

v1.16.0: OpenVINO config, SD hybrid quantization

25 Mar 11:56

Add hybrid quantization for Stable Diffusion pipelines by @l-bat in #584

from optimum.intel import OVStableDiffusionPipeline, OVWeightQuantizationConfig

model_id = "echarlaix/stable-diffusion-v1-5-openvino"
# providing a calibration dataset switches from weight-only to hybrid quantization
quantization_config = OVWeightQuantizationConfig(bits=8, dataset="conceptual_captions")
model = OVStableDiffusionPipeline.from_pretrained(model_id, quantization_config=quantization_config)

Add openvino export configs by @eaidova in #568

OpenVINO export is now enabled for the following architectures: Mixtral, ChatGLM, Baichuan, MiniCPM, Qwen, Qwen2 and StableLM.
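
These architectures can be exported with the same CLI as other models; a minimal sketch (the Qwen2 checkpoint below is an illustrative example):

optimum-cli export openvino --model Qwen/Qwen1.5-0.5B ov_qwen2_model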

Add support for export and inference for StarCoder2 models by @eaidova in #619
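
Once exported, StarCoder2 can be run through the OpenVINO runtime; a minimal sketch (model ID and prompt are illustrative):

from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "bigcode/starcoder2-3b"
# export=True converts the checkpoint to the OpenVINO IR on the fly
model = OVModelForCausalLM.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))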

v1.15.2: Patch release

22 Feb 17:20

v1.15.1: Patch release

21 Feb 15:29
  • Relax dependency on accelerate and datasets in OVQuantizer by @eaidova in #547

  • Disable compilation before applying 4-bit weight compression by @AlexKoff88 in #569

  • Update Transformers dependency requirements by @echarlaix in #571

v1.15.0: OpenVINO Tokenizers, quantization configuration

19 Feb 17:53
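
Models can now be exported to the OpenVINO IR directly from an in-memory instance:
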
from diffusers import StableDiffusionPipeline
from optimum.exporters.openvino import export_from_model

model_id = "runwayml/stable-diffusion-v1-5"
model = StableDiffusionPipeline.from_pretrained(model_id)

export_from_model(model, output="ov_model", task="stable-diffusion")

v1.14.0: IPEX models

31 Jan 17:15

IPEX models

from optimum.intel import IPEXModelForCausalLM
from transformers import AutoTokenizer, pipeline

model_id = "Intel/q8_starcoder"
# applies Intel Extension for PyTorch (IPEX) optimizations when loading the model
model = IPEXModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
results = pipe("He's a dreadful magician and")

Fixes

  • Fix position_ids initialization for first inference of stateful models by @eaidova in #532
  • Relax requirements to have registered normalized config for decoder models by @eaidova in #537

v1.13.0: 4-bit quantization, stateful models, Whisper

25 Jan 16:48

OpenVINO

Weight-only 4-bit quantization

optimum-cli export openvino --model gpt2 --weight-format int4_sym_g128 ov_model
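
The exported model can then be loaded back for inference; a minimal sketch (the prompt is illustrative):

from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

# load the OpenVINO IR produced by the CLI command above
model = OVModelForCausalLM.from_pretrained("ov_model")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))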

Stateful models
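
Decoder models are now exported as stateful OpenVINO models where supported: the KV cache is kept as internal model state between generation steps rather than being passed as explicit inputs and outputs.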

New architectures

Whisper

  • Add support for export and inference for whisper models by @eaidova in #470
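
A minimal sketch of export and transcription with the new Whisper support (model ID and audio file path are illustrative):

from optimum.intel import OVModelForSpeechSeq2Seq
from transformers import AutoProcessor, pipeline

model_id = "openai/whisper-tiny"
# export=True converts the checkpoint to the OpenVINO IR on the fly
model = OVModelForSpeechSeq2Seq.from_pretrained(model_id, export=True)
processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
)
result = pipe("sample.wav")  # path to an audio file (illustrative)
print(result["text"])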

v1.12.4: Patch release

22 Jan 14:08

v1.12.3: Patch release

04 Jan 17:25

v1.12.2: Patch release

14 Dec 19:48