Description
Is your feature request related to a problem? Please describe.
Currently I am using Qwen2-VL, which is the best VLM for my project, and I hope llama-cpp-python can support it. I tried to build a server with llama.cpp directly, but the llama.cpp server does not allow using an mmproj model.
Describe the solution you'd like
Support the Qwen2-VL model so that it can be used like the other VLM models.
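For context, this is how other VLMs are already used from llama-cpp-python: a chat handler is paired with a separate mmproj (projector) GGUF file, and Qwen2-VL would presumably need an analogous handler. A minimal sketch of the existing LLaVA 1.5 pattern, with hypothetical file paths, guarded so it degrades gracefully when the library or model files are absent:

```python
import os

# llama-cpp-python may not be installed; guard the import so the
# sketch can be inspected without the dependency.
try:
    from llama_cpp import Llama
    from llama_cpp.llama_chat_format import Llava15ChatHandler
    HAVE_LLAMA_CPP = True
except ImportError:
    HAVE_LLAMA_CPP = False

# Hypothetical local paths -- substitute your own downloads.
MODEL = "llava-v1.5-7b.Q4_K_M.gguf"
MMPROJ = "mmproj-model-f16.gguf"

if HAVE_LLAMA_CPP and os.path.exists(MODEL) and os.path.exists(MMPROJ):
    # The mmproj (CLIP projector) is passed to the chat handler,
    # not to the Llama constructor.
    chat_handler = Llava15ChatHandler(clip_model_path=MMPROJ)
    llm = Llama(model_path=MODEL, chat_handler=chat_handler, n_ctx=4096)
    out = llm.create_chat_completion(
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "image_url", "image_url": {"url": "file://./cat.png"}},
                    {"type": "text", "text": "Describe this image."},
                ],
            }
        ]
    )
    print(out["choices"][0]["message"]["content"])
else:
    print("llama-cpp-python available:", HAVE_LLAMA_CPP)
```

The request here would amount to a `Qwen2VLChatHandler`-style counterpart (name hypothetical) that accepts Qwen2-VL's own mmproj file the same way.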