Issues: Blaizzy/mlx-vlm
- #242: Gemma 3 models do not see the image when the prompt is too long (opened Mar 12, 2025 by asmeurer)
- #241: Add FastAPI server [enhancement, good first issue] (opened Mar 12, 2025 by Blaizzy)
- #240: Any speed reference comparing to candle or llama.cpp with Qwen2.5 VL 4B? (opened Mar 12, 2025 by MonolithFoundation)
- #230: Ensure backwards compatibility with transformers [good first issue] (opened Mar 6, 2025 by Blaizzy)
- #212: Add support for Ovis 2? [enhancement] (opened Feb 22, 2025 by alexgusevski)
- #210: Models should not need to be re-loaded between back-to-back prompts [bug] (opened Feb 21, 2025 by neilmehta24)
- #209: Unrecognized image processor in mlx-community/Qwen2.5-VL-7B-Instruct-4bit (opened Feb 21, 2025 by leoho0722)
- #187: Error in fine-tuning deepseek-vl-7b-chat-8bit [bug] (opened Jan 27, 2025 by sachinraja13)
- #178: llava-v1.6: unsupported operand type(s) for //: 'int' and 'NoneType' (opened Jan 10, 2025 by jrp2014)