[vlm] fix loading of retrieval VLMs #39242
base: main
Conversation
run-slow: colpali, colqwen2
This comment contains run-slow, running the specified jobs: models: ['models/colpali', 'models/colqwen2']
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
I wanted to use
Looks like a reasonable fix to me (it seems to apply the same changes that were made to the VLMs).
@zucchini-nlp I observed that the two models' tests started failing after two different PRs, but maybe they share the same root cause, since their fix appears identical here?
For context:
colpali tests are failing after
[VLM] Add base model without head (#37033)
And for colqwen2, the tests fail after
[qwen] refactor attentions for vision/audio (#38930)
There is a fix in [qwen2-vl] fix vision attention scaling #39043, but that one doesn't fix colqwen (see the sketch below for the kind of change the refactor implies).
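To make the connection to the VLM refactor concrete, here is a rough sketch of the kind of checkpoint-key remapping that the base-model change implies for retrieval wrappers like ColPali/ColQwen2. This is only an illustration: the `vlm.*` prefixes and the target layout are assumptions, not the actual mapping in this PR.

```python
import re

# Illustrative mapping: after the VLM base-model refactor (#37033), the wrapped
# VLM's language/vision submodules live under an extra ".model" level, so
# old-format retrieval checkpoints need their keys renamed before loading.
# The exact prefixes below are assumptions, not the PR's real mapping.
KEY_MAPPING = {
    r"^vlm\.language_model\.model\.": "vlm.model.language_model.",
    r"^vlm\.vision_tower\.": "vlm.model.vision_tower.",
    r"^vlm\.multi_modal_projector\.": "vlm.model.multi_modal_projector.",
}


def remap_retrieval_keys(state_dict):
    """Rename old-format checkpoint keys to the post-refactor layout."""
    remapped = {}
    for key, value in state_dict.items():
        new_key = key
        for pattern, replacement in KEY_MAPPING.items():
            new_key = re.sub(pattern, replacement, new_key)
        remapped[new_key] = value
    return remapped
```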
Hmm, for me ColQwen wasn't failing in the sense that the weights matched when loading, but the tensors aren't close enough, even after the model was released. I can check on the runners and see what the issue is. ColQwen shouldn't have the same issue, since it was released after the major refactor.
Hi, sorry, I think my memory got messed up. So that issue was already fixed, but my brain wasn't yet.
[For maintainers] Suggested jobs to run (before merge): run-slow: colpali, colqwen2
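For reference, a minimal way to run those suggested slow jobs locally, assuming the standard transformers test layout and the usual `RUN_SLOW` opt-in for `@slow`-decorated tests:

```python
import os

import pytest

# Opt in to tests decorated with @slow (skipped by default in transformers).
os.environ["RUN_SLOW"] = "1"

# Run the two model test suites this PR touches, verbosely.
exit_code = pytest.main([
    "tests/models/colpali",
    "tests/models/colqwen2",
    "-v",
])
raise SystemExit(exit_code)
```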
What does this PR do?
As per the title: it was reported internally that slow tests are failing. We need to apply the same changes as in the VLMs to the models that use VLMs in their architecture.
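For illustration, a minimal sketch of the loading path the failing slow tests exercise. The checkpoint name and the `embeddings` output field follow the public ColPali documentation; whether the actual tests use this exact checkpoint is an assumption.

```python
import torch
from transformers import ColPaliForRetrieval, ColPaliProcessor

# Assumed public HF Hub conversion of ColPali; the slow tests may use a different one.
model_name = "vidore/colpali-v1.2-hf"

processor = ColPaliProcessor.from_pretrained(model_name)
model = ColPaliForRetrieval.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Encode a text query and inspect the multi-vector embeddings.
queries = ["What is shown in the figure?"]
batch = processor(text=queries, return_tensors="pt").to(model.device)
with torch.no_grad():
    embeddings = model(**batch).embeddings
print(embeddings.shape)
```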