Currently, the tokenizer_config is the same as the Llama 3 model, which isn't instructive as to how to pass in images.
Adding a very short snippet of code outlining how to load the model and run inference with it would be a great addition. The same goes for the video repos.
Ideally, inference could be done with either AutoModelForCausalLM or a LlavaLlama model (although I guess that would have to be created, since the LLaVA NeXT Llama 3 model differs?).
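For reference, here is a rough sketch of the kind of snippet being requested, using the `LlavaNextProcessor` / `LlavaNextForConditionalGeneration` classes from transformers. The checkpoint name below is a placeholder assumption, not a confirmed repo id, and the exact chat/prompt format may differ for the Llama 3 backbone:

```python
def build_conversation(question: str) -> list:
    """Build a single-turn conversation with one image, in the structure
    accepted by processor.apply_chat_template for LLaVA-NeXT models."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image"},
                {"type": "text", "text": question},
            ],
        }
    ]


def run_inference(image_path: str, question: str) -> str:
    """Load the model and answer a question about one image.

    NOTE: MODEL_ID is an assumed placeholder for the LLaVA NeXT Llama 3
    checkpoint; substitute the actual repo name.
    """
    # Heavy imports kept local so the helper above stays lightweight.
    import torch
    from PIL import Image
    from transformers import (
        LlavaNextForConditionalGeneration,
        LlavaNextProcessor,
    )

    MODEL_ID = "lmms-lab/llama3-llava-next-8b"  # placeholder / assumption

    processor = LlavaNextProcessor.from_pretrained(MODEL_ID)
    model = LlavaNextForConditionalGeneration.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16, device_map="auto"
    )

    image = Image.open(image_path)
    prompt = processor.apply_chat_template(
        build_conversation(question), add_generation_prompt=True
    )
    inputs = processor(images=image, text=prompt, return_tensors="pt").to(
        model.device
    )
    output = model.generate(**inputs, max_new_tokens=100)
    return processor.decode(output[0], skip_special_tokens=True)
```

Something this short in the model card README would make it much clearer how images are expected to be passed in.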