A Benchmark for VQA prompt sensitivity
Contains code and documentation for our VANE-Bench paper.
🔥 Official Benchmark Toolkits for "Visual Haystacks: Answering Harder Questions About Sets of Images"
"Worse than Random? An Embarrassingly Simple Probing Evaluation of Large Multimodal Models in Medical VQA"
This repo contains evaluation code for the paper "MileBench: Benchmarking MLLMs in Long Context"
The official repo for “TextCoT: Zoom In for Enhanced Multimodal Text-Rich Image Understanding”.
A bug-free and improved implementation of LLaVA-UHD, based on the code from the official repo
Evaluation framework for paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?"
[ECCV 2024] ShareGPT4V: Improving Large Multi-modal Models with Better Captions
A minimal codebase for finetuning large multimodal models, supporting llava-1.5, qwen-vl, llava-interleave, llava-next-video, phi3-v etc.
[ECCV 2024] BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models
This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models?"
An open-source implementation for training LLaVA-NeXT.
Embed arbitrary modalities (images, audio, documents, etc.) into large language models.
The official evaluation suite and dynamic data release for MixEval.
Open Platform for Embodied Agents
[CVPR 2024 Highlight] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allocation
This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI"
A Framework of Small-scale Large Multimodal Models
LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills