diff --git a/README.md b/README.md
index 3cf77e28855..21aff495bb4 100644
--- a/README.md
+++ b/README.md
@@ -219,7 +219,7 @@ inputs = tokenizer(prompt, return_tensors="pt").input_ids
 model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True)
 outputs = model.generate(inputs)
 ```
-You can also load GGUF format model from Huggingface, we only support Q4_0 gguf format for now.
+You can also load GGUF format models from Huggingface; we only support the Q4_0/Q5_0/Q8_0 GGUF formats for now.
 ```python
 from transformers import AutoTokenizer
 from intel_extension_for_transformers.transformers import AutoModelForCausalLM
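
For context, a minimal sketch of the GGUF loading path the changed line documents. It assumes `AutoModelForCausalLM.from_pretrained` accepts a `gguf_file` keyword selecting the quantized file within the repo; that keyword, the repo ids, and the file name below are illustrative assumptions, not values taken from this diff:

```python
from transformers import AutoTokenizer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM

# Illustrative placeholders -- any repo hosting a Q4_0/Q5_0/Q8_0 .gguf
# file should fit the pattern described in the README change.
model_name = "TheBloke/Llama-2-7B-Chat-GGUF"      # hypothetical GGUF model repo
gguf_file = "llama-2-7b-chat.Q4_0.gguf"           # hypothetical quantized file in that repo
tokenizer_name = "meta-llama/Llama-2-7b-chat-hf"  # tokenizer from the original model repo

tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)
inputs = tokenizer("Once upon a time", return_tensors="pt").input_ids

# Assumed keyword: `gguf_file` picks which quantized file to load.
model = AutoModelForCausalLM.from_pretrained(model_name, gguf_file=gguf_file)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```

Note that the tokenizer is loaded from the original (non-GGUF) model repo, since GGUF repos typically ship only the quantized weights.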