
Add prompt format and sample inference code to HF model repos #18

Open
RonanKMcGovern opened this issue May 16, 2024 · 0 comments

@RonanKMcGovern

Currently, the tokenizer_config is the same as the Llama 3 model's, which isn't instructive as to how images should be passed in.

Adding a very short code snippet showing how to load the model and run inference would be a great addition. The same goes for the video repos.

Ideally, inference could be done with either AutoModelForCausalLM or a LlavaLlama model (although I guess that class has to be created, since the LLaVA-NeXT Llama 3 model differs?).
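
For reference, here is a minimal sketch of what such a snippet might look like, assuming the checkpoint works with the transformers LLaVA-NeXT classes. The repo id and the Llama 3 style prompt format below are assumptions, not confirmed by the model card; documenting the correct values is exactly what this issue is asking for:

```python
# Minimal sketch, not an official example. Assumes the checkpoint is
# compatible with the transformers LLaVA-NeXT classes.
import requests
import torch
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

model_id = "lmms-lab/llama3-llava-next-8b"  # placeholder repo id, may differ

processor = LlavaNextProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16
).to("cuda")

# Load an example image.
url = "https://llava-vl.github.io/static/images/view.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Assumed Llama 3 chat format with an <image> placeholder; this is
# precisely the detail the model card should document.
prompt = (
    "<|start_header_id|>user<|end_header_id|>\n\n"
    "<image>\nWhat is shown in this image?<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)

inputs = processor(text=prompt, images=image, return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```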
