docs/source/en/model_doc/bert.md (+1 −1)
@@ -28,7 +28,7 @@ rendered properly in your Markdown viewer.
[BERT](https://huggingface.co/papers/1810.04805) is a bidirectional transformer pretrained on unlabeled text to predict masked tokens in a sentence and to predict whether one sentence follows another. The main idea is that by randomly masking some tokens, the model can train on text to the left and right, giving it a more thorough understanding. BERT is also very versatile because its learned language representations can be adapted for other NLP tasks by fine-tuning an additional layer or head.

- You can find all the original BERT checkpoints under the [BERT collection](https://huggingface.co/collections/google/bert-release-64ff5e7a4be99045d1896dbc).
+ You can find all the original BERT checkpoints under the BERT [collection](https://huggingface.co/collections/google/bert-release-64ff5e7a4be99045d1896dbc).

> [!TIP]
> Click on the BERT models in the right sidebar for more examples of how to apply BERT to different language tasks.
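The masked-token objective described in the BERT paragraph above can be exercised directly with the fill-mask pipeline. A minimal sketch; the checkpoint name is an assumption for illustration, not taken from this diff.

```py
from transformers import pipeline

# Assumed checkpoint; demonstrates the masked-token prediction objective.
fill_mask = pipeline(task="fill-mask", model="google-bert/bert-base-uncased")
fill_mask("Plants create [MASK] through a process known as photosynthesis.")
```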
- The Gemma 3 model was proposed in the [Gemma 3 Technical Report](https://goo.gle/Gemma3Report) by Google. It is a vision-language model composed of a [SigLIP](siglip) vision encoder and a [Gemma 2](gemma_2) language decoder, linked by a multimodal linear projection. It cuts an image into a fixed number of tokens, in the same way as SigLIP, as long as the image does not exceed a certain aspect ratio. For images that exceed the given aspect ratio, it crops the image into multiple smaller patches and concatenates them with the base image embedding. One particularity is that the model uses bidirectional attention on all the image tokens. In addition, the model interleaves sliding window local attention with full causal attention in the language backbone, where every sixth layer is a full causal attention layer.
+ [Gemma 3](https://goo.gle/Gemma3Report) is a multimodal model with pretrained and instruction-tuned variants, available in 1B, 4B, 12B, and 27B parameters. The architecture is mostly the same as the previous Gemma versions. The key differences are alternating 5 local sliding window self-attention layers for every global self-attention layer, support for a longer context length of 128K tokens, and a [SigLIP](./siglip) encoder that can "pan & scan" high-resolution images to prevent information in images from disappearing.
- This model was contributed by [Ryan Mullins](https://huggingface.co/RyanMullins), [Raushan Turganbay](https://huggingface.co/RaushanTurganbay), [Arthur Zucker](https://huggingface.co/ArthurZ), and [Pedro Cuenca](https://huggingface.co/pcuenq).
+ The instruction-tuned Gemma 3 model was post-trained with knowledge distillation and reinforcement learning.
+ You can find all the original Gemma 3 checkpoints under the [Gemma 3](https://huggingface.co/collections/meta-llama/llama-2-family-661da1f90a9d678b6f55773b) release.
- ## Usage tips
+ > [!TIP]
+ > Click on the Gemma 3 models in the right sidebar for more examples of how to apply Gemma to different vision and language tasks.
+ The example below demonstrates how to generate text based on an image with [`Pipeline`] or the [`AutoModel`] class.
- - For image+text and image-only inputs use `Gemma3ForConditionalGeneration`.
- - For text-only inputs use `Gemma3ForCausalLM` for generation to avoid loading the vision tower.
- - Each sample can contain multiple images, and the number of images can vary between samples. However, make sure to pass correctly batched images to the processor, where each batch is a list of one or more images.
- - The text passed to the processor should have a `<start_of_image>` token wherever an image should be inserted.
- - The processor has its own `apply_chat_template` method to convert chat messages to model inputs. See the examples below for more details on how to use it.
+ <hfoptions id="usage">
+ <hfoption id="Pipeline">
+ ```py
+ import torch
+ from transformers import pipeline
- ### Image cropping for high resolution images
- The model supports cropping images into smaller patches when the image aspect ratio exceeds a certain value. By default the images are not cropped and only the base image is forwarded to the model. Users can set `do_pan_and_scan=True` to obtain several crops per image along with the base image to improve the quality in DocVQA or similar tasks requiring higher resolution images.
+     text="<start_of_image> What is shown in this image?"
+ )
+ ```
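A minimal end-to-end sketch of the [`Pipeline`] path; the checkpoint name and image URL are illustrative assumptions, not values taken from the snippet above.

```py
import torch
from transformers import pipeline

# Assumed checkpoint; any Gemma 3 vision-language variant should behave the same way.
pipe = pipeline(
    task="image-text-to-text",
    model="google/gemma-3-4b-it",
    device=0,
    torch_dtype=torch.bfloat16,
)

# Placeholder image URL; replace with any reachable image.
pipe(
    images="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg",
    text="<start_of_image> What is shown in this image?",
)
```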
- Pan and scan is an inference time optimization to handle images with skewed aspect ratios. When enabled, it improves performance on tasks related to document understanding, infographics, OCR, etc.
+ </hfoption>
+ <hfoption id="AutoModel">
- ```python
+ ```py
+ import torch
+ from transformers import AutoProcessor, Gemma3ForConditionalGeneration
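A minimal sketch of how the [`AutoModel`] path typically continues from these imports; the checkpoint name, image URL, and generation settings are illustrative assumptions.

```py
import torch
from transformers import AutoProcessor, Gemma3ForConditionalGeneration

model = Gemma3ForConditionalGeneration.from_pretrained(
    "google/gemma-3-4b-it",  # assumed checkpoint
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained("google/gemma-3-4b-it")

messages = [
    {
        "role": "user",
        "content": [
            # Placeholder image URL
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
            {"type": "text", "text": "What is shown in this image?"},
        ],
    }
]
inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
    add_generation_prompt=True,
).to(model.device, dtype=torch.bfloat16)

with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=50)
print(processor.decode(output[0], skip_special_tokens=True))
```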
Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends.
- ### Single-image Inference
+ The example below uses [torchao](../quantization/torchao) to only quantize the weights to int4.
- ```python
- from transformers import AutoProcessor, Gemma3ForConditionalGeneration
+ ```py
+ # pip install torchao
+ import torch
+ from transformers import TorchAoConfig, Gemma3ForConditionalGeneration, AutoProcessor
- model_id = "google/gemma-3-4b-it"
- model = Gemma3ForConditionalGeneration.from_pretrained(model_id, device_map="auto")
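A minimal sketch of loading Gemma 3 with int4 weight-only quantization through torchao; the checkpoint and group size below are illustrative choices rather than values from this diff.

```py
# pip install torchao
import torch
from transformers import TorchAoConfig, Gemma3ForConditionalGeneration, AutoProcessor

# int4 weight-only quantization; group_size=128 is a common choice, not a requirement.
quantization_config = TorchAoConfig("int4_weight_only", group_size=128)
model = Gemma3ForConditionalGeneration.from_pretrained(
    "google/gemma-3-27b-it",  # assumed checkpoint
    torch_dtype=torch.bfloat16,
    device_map="auto",
    quantization_config=quantization_config,
)
processor = AutoProcessor.from_pretrained("google/gemma-3-27b-it")
```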
- You can use the VLMs for text-only generation by omitting images in your input. However, you can also load the models in text-only mode as shown below. This will skip loading the vision tower and will save resources when you just need the LLM capabilities.
- ```python
- from transformers import AutoTokenizer, Gemma3ForCausalLM
+ - Use [`Gemma3ForConditionalGeneration`] for image-and-text and image-only inputs.
+ - Gemma 3 supports multiple input images, but make sure the images are correctly batched before passing them to the processor. Each batch should be a list of one or more images.
+       {"type": "text", "text": "You are a helpful assistant."}
+     ]
+   },
+   {
+     "role": "user",
+     "content": [
+       {"type": "image", "url": url_cow},
+       {"type": "image", "url": url_cat},
+       {"type": "text", "text": "Which image is cuter?"},
+     ]
+   },
+ ]
+ ```
+ - Text passed to the processor should have a `<start_of_image>` token wherever an image should be inserted.
+ - The processor has its own [`~ProcessorMixin.apply_chat_template`] method to convert chat messages to model inputs.
+ - By default, the images aren't cropped and only the base image is forwarded to the model. In high resolution images or images with non-square aspect ratios, artifacts can result because the vision encoder uses a fixed resolution of 896x896. To prevent these artifacts and improve performance during inference, set `do_pan_and_scan=True` to crop the image into multiple smaller patches and concatenate them with the base image embedding. You can disable pan and scan for faster inference.
+
+ ```diff
+ inputs = processor.apply_chat_template(
+     messages,
+     tokenize=True,
+     return_dict=True,
+     return_tensors="pt",
+     add_generation_prompt=True,
+ +    do_pan_and_scan=True,
+ ).to("cuda")
+ ```
+ - For text-only inputs, use [`AutoModelForCausalLM`] instead to skip loading the vision components and save resources.
+
+ ```py
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained(
+     "google/gemma-3-1b-pt",
+ )
+ model = AutoModelForCausalLM.from_pretrained(
+     "google/gemma-3-1b-pt",
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+     attn_implementation="sdpa"
+ )
+ input_ids = tokenizer("Plants create energy through a process known as", return_tensors="pt").to("cuda")
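Continuing from the `input_ids` built in the text-only snippet above, a plausible way to run generation; the settings here are illustrative assumptions, not taken from the diff.

```py
# Generation settings are assumptions for illustration.
with torch.inference_mode():
    output = model.generate(**input_ids, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```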
docs/source/en/model_doc/llama2.md (+2 −2)
@@ -28,7 +28,7 @@ rendered properly in your Markdown viewer.
Llama 2-Chat is trained with supervised fine-tuning (SFT), and reinforcement learning with human feedback (RLHF) - rejection sampling and proximal policy optimization (PPO) - is applied to the fine-tuned model to align the chat model with human preferences.

- You can find all the original Llama 2 checkpoints under the [Llama 2 Family collection](https://huggingface.co/collections/meta-llama/llama-2-family-661da1f90a9d678b6f55773b).
+ You can find all the original Llama 2 checkpoints under the [Llama 2 Family](https://huggingface.co/collections/meta-llama/llama-2-family-661da1f90a9d678b6f55773b) collection.

> [!TIP]
> Click on the Llama 2 models in the right sidebar for more examples of how to apply Llama to different language tasks.