A unified codebase for finetuning (full, lora) large multimodal models, supporting llava-1.5, qwen-vl, llava-interleave, llava-next-video, etc.
Open Platform for Embodied Agents
AI-first process automation with Large Language (LLMs), Action (LAMs), Multimodal (LMMs), and Visual Language (VLMs) Models
The official evaluation suite and dynamic data release for MixEval.
A Framework of Small-scale Large Multimodal Models
A Benchmark for VQA prompt sensitivity
This repo contains evaluation code for the paper "MileBench: Benchmarking MLLMs in Long Context"
An official implementation of ShareGPT4Video: Improving Video Understanding and Generation with Better Captions
A bug-free and improved implementation of LLaVA-UHD, based on the code from the official repo
[ECCV 2024] BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models
[ECCV 2024] ShareGPT4V: Improving Large Multi-modal Models with Better Captions
This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI"
"Worse than Random? An Embarrassingly Simple Probing Evaluation of Large Multimodal Models in Medical VQA"
The official repo for “TextCoT: Zoom In for Enhanced Multimodal Text-Rich Image Understanding”.
Contains code and documentation for our VANE-Bench paper.
[CVPR 2024 Highlight] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allocation
An open-source implementation of LLaVA-NeXT.
Evaluation framework for paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?"
This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models"
Embed arbitrary modalities (images, audio, documents, etc.) into large language models.