UnboundLocalError: cannot access local variable 'images_list' when using Gemma 3 AutoProcessor with use_fast=True #36739
Comments
@Zebz13 oops, that is a typo. Supposed to be …
@zucchini-nlp Can open up a PR if it makes stuff easier.

transformers/src/transformers/models/gemma3/image_processing_gemma3_fast.py
Lines 280 to 288 in 6f3e0b6

And like you've said, it's due to testing only for …
@Zebz13 yeah, as long as the naming is consistent, it is not a big deal which one we use. Will be happy to review the PR 🤗 BTW, can you also add a small test in …?
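For reference, a minimal regression test along the lines requested above might look like the sketch below. The target test file was clipped in the comment, and the model id, fixtures, and assertion here are assumptions rather than the actual transformers test code.

```python
# Hypothetical regression-test sketch for the use_fast=True / no pan-and-scan path.
# The real test file, decorators, and fixtures in transformers may differ.
import numpy as np
import pytest
from PIL import Image

from transformers import AutoProcessor


@pytest.mark.slow  # downloads the (gated) Gemma 3 checkpoint
def test_gemma3_fast_image_processor_without_pan_and_scan():
    processor = AutoProcessor.from_pretrained("google/gemma-3-4b-it", use_fast=True)
    image = Image.fromarray(np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8))

    # Regression check: this used to raise
    # UnboundLocalError: cannot access local variable 'images_list'
    # whenever do_pan_and_scan was left disabled.
    outputs = processor.image_processor(images=image, return_tensors="pt")
    assert "pixel_values" in outputs
```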
Not sure but I think this fix should be added to v4.49.0-Gemma-3 (I don't see the PR under the tag).

@neuromechanist we are preparing a release soon, prob tomorrow. So it will be in the release.
System Info
`transformers` version: 4.50.0.dev0

Who can help?
@ArthurZucker
Terribly sorry if it's the wrong person! I hope this passes as a Text Models issue.
Information
Tasks
An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)

Reproduction
Used the base code given on HF for Gemma 3 (modified to a local path): https://huggingface.co/google/gemma-3-4b-it#running-the-model-on-a-singlemulti-gpu
Added `use_fast=True` to the `AutoProcessor` arguments.
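For concreteness, a minimal sketch of that reproduction is below; the model id, dummy image, and prompt placeholder token are assumptions standing in for the model-card code and the local path used in the report.

```python
# Minimal reproduction sketch, assuming access to the Gemma 3 checkpoint
# (the original report used a locally downloaded copy of google/gemma-3-4b-it).
import numpy as np
from PIL import Image
from transformers import AutoProcessor

model_id = "google/gemma-3-4b-it"  # or a local path, as in the report
processor = AutoProcessor.from_pretrained(model_id, use_fast=True)

# Any image input reaches the fast image processor; a random dummy image is enough.
image = Image.fromarray(np.random.randint(0, 255, (896, 896, 3), dtype=np.uint8))

# With use_fast=True and pan-and-scan left at its default (disabled), this call
# raised: UnboundLocalError: cannot access local variable 'images_list'
inputs = processor(
    images=image,
    text="<start_of_image> Describe this image.",  # placeholder prompt; the image token name is an assumption
    return_tensors="pt",
)
```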
Reason (I think): the `images_list` variable is declared under `if do_pan_and_scan:`. If `do_pan_and_scan` is not enabled, `images_list` is never assigned, and hence it errors out. The same variable is then used for `group_images_by_shape` in line 294.

Lines:
transformers/src/transformers/models/gemma3/image_processing_gemma3_fast.py
Lines 280 to 294 in 6f3e0b6

Since the images variable is passed in as `List[List["torch.Tensor"]]`, setting `images_list = image_list` in the `else` case can fix the issue. There might be a better way of fixing this; the local fix I'm using (got the idea from got_ocr2):
transformers/src/transformers/models/got_ocr2/image_processing_got_ocr2_fast.py
Line 183 in 6f3e0b6
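To make the failure mode concrete, here is a simplified, self-contained sketch of the pattern described above; the helper names and types are placeholders, not the actual transformers code.

```python
# Simplified sketch of the bug and the reporter's local fix: `images_list` is
# only bound inside the pan-and-scan branch, so any later use raises
# UnboundLocalError when do_pan_and_scan is disabled.
from typing import List


def _pan_and_scan_sketch(images: List[str]) -> List[str]:
    # Placeholder for the real pan-and-scan cropping logic.
    return images


def preprocess_sketch(image_list: List[List[str]], do_pan_and_scan: bool = False) -> List[List[str]]:
    if do_pan_and_scan:
        images_list = [_pan_and_scan_sketch(images) for images in image_list]
    else:
        # The suggested fix: fall back to the incoming images, mirroring what
        # image_processing_got_ocr2_fast.py does. Without this branch, the
        # return below raises:
        # UnboundLocalError: cannot access local variable 'images_list'
        images_list = image_list

    # The real code goes on to call group_images_by_shape(images_list) around line 294.
    return images_list


if __name__ == "__main__":
    print(preprocess_sketch([["img_a", "img_b"]], do_pan_and_scan=False))
```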
Expected behavior
Returns output without failure.
Sample output (clipped due to token limit):