Issues: haotian-liu/LLaVA
[Feature request] Compatibility between zero3 and pretrain_mm_mlp_adapter (#1878, opened May 13, 2025 by ZarkPanda)
[Usage] CUDA out-of-memory when running plain LLaVA inference on RTX 4090 (#1877, opened May 12, 2025 by omerbgu1)
"There was a problem with multiple GPU inference in last year's LLaVA 1.6 — any updates?"
#1870
opened Apr 15, 2025 by
fabio1shot
[Question] What is the Hugging Face address for the LLaVA-v1.5-LLaMA3-8B model? (#1869, opened Apr 14, 2025 by qm-intel)
[Usage] ImportError: cannot import name 'KeywordsStoppingCriteria' from 'llava.model.utils' (#1861, opened Mar 25, 2025 by kky677)
Proposal: Integrating Sparse Autoencoders (SAEs) for LLaVA Interpretability (#1852, opened Mar 16, 2025 by jmanhype)
[Usage] from .model.language_model.llava_llama import LlavaLlamaForCausalLM does not work (#1849, opened Mar 10, 2025 by GoodStarLink)