Try working around the processor registration bugs #36184
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. |
Hello, could you check #36194? :) |
Thanks for pointing that out @JJJYmmm! I'll use your code as a test case for this PR |
Force-pushed from 2cb6999 to d6e1693
Force-pushed from 38c1265 to 41876c3
@JJJYmmm I think this should be ready at this point, can you update your code sample? The Qwen2_5_ImageProcessor has been refactored, so you might need to tweak it a bit before we can test this! |
@Rocketknight1 Yes, the refactor simply reuses Qwen2VLImageProcessor for simplicity. Below is the updated test code:

```python
from transformers import (
    Qwen2_5_VLProcessor,
    Qwen2VLImageProcessor,
    Qwen2_5_VLForConditionalGeneration,
    Qwen2_5_VLConfig,
)
from transformers import AutoModel, AutoImageProcessor, AutoProcessor


class NewConfig(Qwen2_5_VLConfig):
    model_type = "new_model"


class NewProcessor(Qwen2_5_VLProcessor):
    image_processor_class = "NewImageProcessor"

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)


class NewImageProcessor(Qwen2VLImageProcessor):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)


class NewModel(Qwen2_5_VLForConditionalGeneration):
    config_class = NewConfig

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)


AutoImageProcessor.register(NewModel.config_class, NewImageProcessor)
AutoProcessor.register(NewModel.config_class, NewProcessor)
AutoModel.register(NewModel.config_class, NewModel)

if __name__ == "__main__":
    processor = NewProcessor.from_pretrained("preprocessor_config.json")
```

The `preprocessor_config.json` is:

```json
{
  "min_pixels": 3136,
  "max_pixels": 1003520,
  "patch_size": 14,
  "temporal_patch_size": 2,
  "merge_size": 2,
  "image_mean": [
    0.48145466,
    0.4578275,
    0.40821073
  ],
  "image_std": [
    0.26862954,
    0.26130258,
    0.27577711
  ],
  "image_processor_type": "NewImageProcessor",
  "processor_class": "NewProcessor"
}
```

I just tested this and got the following errors:
File "/home/jjjymmm/anaconda3/envs/hf/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1861, in __getattr__
raise AttributeError(f"module {self.__name__} has no attribute {name}")
AttributeError: module transformers has no attribute NewImageProcessor
File "/home/jjjymmm/anaconda3/envs/hf/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1861, in __getattr__
raise AttributeError(f"module {self.__name__} has no attribute {name}")
AttributeError: module transformers has no attribute auto |
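The tracebacks above come from transformers' lazy module `__getattr__` being asked for a class name the library itself doesn't define. A minimal sketch of that failure mode, using the class name from the repro above (the sketch is illustrative, not transformers internals):

```python
# Sketch of the failure mode shown in the tracebacks above: the class name is
# resolved as an attribute of the transformers module, which only knows about
# classes defined inside the library, not user-registered custom classes.
import transformers

try:
    getattr(transformers, "NewImageProcessor")  # user-defined class from the repro above
except AttributeError as exc:
    print(exc)  # -> module transformers has no attribute NewImageProcessor
```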
thanks for the updated test code @JJJYmmm, I'll poke around with it! |
Force-pushed from ee7fe74 to 462def9
cc @ArthurZucker @Cyrilvallez this should be ready for core maintainer review! This PR fixes an issue where a custom code class (like a custom image processor) that has been registered with the Auto classes still can't be found at load time, because the class name is only looked up as an attribute of the transformers module. This causes loads of problems, and you can see workarounds for it in several custom code models on the Hub; Phi, for example, resorts to manually setting attributes as a workaround. The fix we implement in this PR is to also check the classes that have been registered with the Auto classes when resolving these names. |
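Roughly, the idea described above could look like the sketch below (illustrative only; the helper name and registry are hypothetical stand-ins, not the PR's actual code or transformers internals):

```python
# Illustrative sketch: the dict below is a hypothetical stand-in for the classes a
# user registers via AutoImageProcessor.register / AutoProcessor.register.
import transformers

REGISTERED_CUSTOM_CLASSES = {}  # e.g. {"NewImageProcessor": NewImageProcessor}

def resolve_class(name: str):
    """Resolve a class name, preferring user-registered custom classes."""
    if name in REGISTERED_CUSTOM_CLASSES:
        return REGISTERED_CUSTOM_CLASSES[name]
    # Previous behaviour: only this lookup, which raises AttributeError for
    # custom classes like "NewImageProcessor" (see the tracebacks above).
    return getattr(transformers, name)
```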
Hey, thanks @Rocketknight1! Looks sound to me, could you just make sure to rebase your branch before final approval? I see the tip is just after #35213 was merged, which actually causes all tests to be skipped.... It was corrected in the meantime, so if everything is green after rebase we'll be sure everything is working correctly! 🤗 |
Force-pushed from 2b13c8c to ac8fee0
Force-pushed from ac8fee0 to bb47788
cc @Cyrilvallez rebased and tests are passing! |
gentle ping @Cyrilvallez for the final approval! |
# Conflicts:
#   src/transformers/processing_utils.py
Ok, LGTM! Thanks a lot, should make the process of releasing custom code much easier for multimodality!!
Draft PR for now, will write a full desc when it's ready!
Update: Posted the full desc in a comment below
Fixes #34307