LLAVA Configuration #737
Comments
You can see the URLs of the models used in the example and the unit tests in Llama.Unitest.csproj: https://huggingface.co/cjpais/llava-1.6-mistral-7b-gguf/resolve/main/llava-v1.6-mistral-7b.Q3_K_XS.gguf. Any vision model will come with both files.
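As a hedged illustration of what the file pair looks like (the first URL is taken from the comment above; the second file name is an assumption based on common naming in GGUF vision repositories, so check the repository's file listing for the actual mmproj name):

```cs
// Quantized language model (URL from the comment above):
const string ModelUrl =
    "https://huggingface.co/cjpais/llava-1.6-mistral-7b-gguf/resolve/main/llava-v1.6-mistral-7b.Q3_K_XS.gguf";

// Projection (mmproj) file from the same repository; the file name below is
// hypothetical, based on common naming in GGUF vision repos:
const string MmprojUrl =
    "https://huggingface.co/cjpais/llava-1.6-mistral-7b-gguf/resolve/main/mmproj-model-f16.gguf";
```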
Ah, thank you. So both models can be found on Hugging Face. That's completely new to me; usually I use just a single model ^^'
Yes, you should download both files for the model you choose to use. Normally there will be several quantized models and one projection model: LLaVA uses CLIP with a multimodal projection (mmproj). You can find the details in this paper:
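A minimal sketch of how the two files are used in LLamaSharp, assuming the LLavaWeights API from the LLava example; the mmproj file name and local paths are hypothetical:

```cs
using LLama;
using LLama.Common;

// Quantized language model: repositories usually offer several quantization
// levels (Q3_K_XS, Q4_K_M, ...); download one of them.
var modelPath = "models/llava-v1.6-mistral-7b.Q3_K_XS.gguf";

// Projection model (mmproj): the CLIP + multimodal projection weights.
// The file name here is hypothetical; check the repository listing.
var clipModelPath = "models/mmproj-model-f16.gguf";

var parameters = new ModelParams(modelPath);
using var model = LLamaWeights.LoadFromFile(parameters);
using var clipModel = LLavaWeights.LoadFromFile(clipModelPath);
```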
Maybe some documentation is necessary. :D
Description
I'm having difficulty figuring out how to correctly configure the LLava example.
First I initialized the backend with the paths to libllama.dll and llava_shared.dll:
NativeLibraryConfig.Instance.WithLibrary(llamaPath, llavaPath);
Then I tried to implement something like what is shown in this example. I don't understand where to find the suitable models I need: modelPath, I believe, is the model I can download here. But what is a clipModel, and where can I get it?
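For context, a hedged end-to-end sketch combining the steps discussed in this thread; the paths and the mmproj file name are placeholders, and the executor wiring follows the pattern of the referenced LLava example:

```cs
using LLama;
using LLama.Common;
using LLama.Native;

// 1. Point the backend at the native libraries before any other LLamaSharp
//    call. Paths are placeholders.
NativeLibraryConfig.Instance.WithLibrary(@"runtimes\libllama.dll", @"runtimes\llava_shared.dll");

// 2. modelPath is the quantized language model (.gguf) from Hugging Face.
var parameters = new ModelParams("models/llava-v1.6-mistral-7b.Q3_K_XS.gguf");
using var model = LLamaWeights.LoadFromFile(parameters);
using var context = model.CreateContext(parameters);

// 3. clipModel is the projection (mmproj) file downloaded from the same
//    repository as the main model; the file name here is hypothetical.
using var clipModel = LLavaWeights.LoadFromFile("models/mmproj-model-f16.gguf");

// 4. Pass both to the executor, as in the LLava example.
var executor = new InteractiveExecutor(context, clipModel);
```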