
# Awesome-Multimodal-Large-Language-Models

🔥🔥🔥 A curated list of Multimodal Large Language Models (MLLMs), covering datasets, multimodal instruction tuning, multimodal in-context learning, multimodal chain-of-thought, LLM-aided visual reasoning, foundation models, and more.

🔥🔥🔥 This list is updated in real time.

🔥🔥🔥 A survey paper on MLLMs is in preparation and will be released soon!

Welcome to join our WeChat group for MLLM discussion! (The QR code is expected to expire on 6.12.)


## Table of Contents

- [Awesome Papers](#awesome-papers)
  - [Multimodal Instruction Tuning](#multimodal-instruction-tuning)
  - [Multimodal In-Context Learning](#multimodal-in-context-learning)
  - [Multimodal Chain-of-Thought](#multimodal-chain-of-thought)
  - [LLM-Aided Visual Reasoning](#llm-aided-visual-reasoning)
  - [Foundation Models](#foundation-models)
  - [Others](#others)
- [Awesome Datasets](#awesome-datasets)
  - [Datasets of Pre-Training for Alignment](#datasets-of-pre-training-for-alignment)
  - [Datasets of Multimodal Instruction Tuning](#datasets-of-multimodal-instruction-tuning)
  - [Datasets of In-Context Learning](#datasets-of-in-context-learning)
  - [Datasets of Multimodal Chain-of-Thought](#datasets-of-multimodal-chain-of-thought)

## Awesome Papers

### Multimodal Instruction Tuning

| Title | Venue | Date | Code | Demo |
| --- | --- | --- | --- | --- |
| Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models | arXiv | 2023-06-08 | GitHub | Demo |
| MIMIC-IT: Multi-Modal In-Context Instruction Tuning | arXiv | 2023-06-08 | GitHub | Demo |
| M3IT: A Large-Scale Dataset towards Multi-Modal Multilingual Instruction Tuning | arXiv | 2023-06-07 | - | - |
| Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding | arXiv | 2023-06-05 | GitHub | Demo |
| LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day | arXiv | 2023-06-01 | GitHub | - |
| GPT4Tools: Teaching Large Language Model to Use Tools via Self-instruction | arXiv | 2023-05-30 | GitHub | Demo |
| PandaGPT: One Model To Instruction-Follow Them All | arXiv | 2023-05-25 | GitHub | Demo |
| ChatBridge: Bridging Modalities with Large Language Model as a Language Catalyst | arXiv | 2023-05-25 | GitHub | - |
| Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models | arXiv | 2023-05-24 | GitHub | Local Demo |
| DetGPT: Detect What You Need via Reasoning | arXiv | 2023-05-23 | GitHub | Demo |
| VisionLLM: Large Language Model is also an Open-Ended Decoder for Vision-Centric Tasks | arXiv | 2023-05-18 | GitHub | Demo |
| VisualGLM-6B | - | 2023-05-17 | GitHub | Local Demo |
| PMC-VQA: Visual Instruction Tuning for Medical Visual Question Answering | arXiv | 2023-05-17 | GitHub | - |
| InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning | arXiv | 2023-05-11 | GitHub | Local Demo |
| VideoChat: Chat-Centric Video Understanding | arXiv | 2023-05-10 | GitHub | Demo |
| MultiModal-GPT: A Vision and Language Model for Dialogue with Humans | arXiv | 2023-05-08 | GitHub | Demo |
| X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages | arXiv | 2023-05-07 | GitHub | - |
| LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model | arXiv | 2023-04-28 | GitHub | Demo |
| mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality | arXiv | 2023-04-27 | GitHub | Demo |
| MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models | arXiv | 2023-04-20 | GitHub | - |
| Visual Instruction Tuning | arXiv | 2023-04-17 | GitHub | Demo |
| LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention | arXiv | 2023-03-28 | GitHub | Demo |
| MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning | arXiv | 2022-12-21 | - | - |
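
Most of the papers above share one architectural pattern: a (usually frozen) vision encoder, a small trainable projection module, and a large language model, trained on instruction-formatted multimodal data. The sketch below illustrates that general pattern in PyTorch; it is not any single paper's implementation, and the module names and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VisualInstructionModel(nn.Module):
    """Minimal sketch of the common MLLM pattern: vision encoder ->
    projection -> LLM. Names and dimensions are illustrative."""

    def __init__(self, vision_encoder, llm, vision_dim=1024, llm_dim=4096):
        super().__init__()
        self.vision_encoder = vision_encoder             # e.g. a frozen CLIP ViT
        self.projector = nn.Linear(vision_dim, llm_dim)  # trainable bridge
        self.llm = llm                                   # e.g. a frozen LLaMA

        # Freeze the backbones; only the projector trains (stage-1 style).
        for p in self.vision_encoder.parameters():
            p.requires_grad = False
        for p in self.llm.parameters():
            p.requires_grad = False

    def forward(self, pixel_values, text_embeds):
        # Encode the image into a sequence of patch features.
        patch_feats = self.vision_encoder(pixel_values)  # (B, N, vision_dim)
        visual_tokens = self.projector(patch_feats)      # (B, N, llm_dim)
        # Prepend visual tokens to the text embeddings so the LLM
        # attends over the joint sequence.
        inputs = torch.cat([visual_tokens, text_embeds], dim=1)
        return self.llm(inputs_embeds=inputs)
```

The papers differ mainly in the bridge module (a linear layer in LLaVA, a Q-Former in InstructBLIP, adapters in LLaMA-Adapter) and in which components are unfrozen during instruction tuning.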

### Multimodal In-Context Learning

| Title | Venue | Date | Code | Demo |
| --- | --- | --- | --- | --- |
| MIMIC-IT: Multi-Modal In-Context Instruction Tuning | arXiv | 2023-06-08 | GitHub | Demo |
| Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models | arXiv | 2023-04-19 | GitHub | Demo |
| HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace | arXiv | 2023-03-30 | GitHub | Demo |
| MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action | arXiv | 2023-03-20 | GitHub | Demo |
| Prompting Large Language Models with Answer Heuristics for Knowledge-based Visual Question Answering | CVPR | 2023-03-03 | GitHub | - |
| Visual Programming: Compositional visual reasoning without training | CVPR | 2022-11-18 | GitHub | Local Demo |
| An Empirical Study of GPT-3 for Few-Shot Knowledge-Based VQA | AAAI | 2022-06-28 | GitHub | - |
| Flamingo: a Visual Language Model for Few-Shot Learning | NeurIPS | 2022-04-29 | GitHub | Demo |
| Multimodal Few-Shot Learning with Frozen Language Models | NeurIPS | 2021-06-25 | - | - |
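
Several of these works (Frozen, Flamingo) condition a frozen model on a handful of interleaved image-text demonstrations placed before the query. A minimal sketch of assembling such an interleaved prompt; the `<image>` placeholder convention is an illustrative assumption, since each model defines its own special tokens.

```python
def build_interleaved_prompt(demos, query_image, query_text):
    """Assemble a Flamingo-style interleaved few-shot prompt.

    demos: list of (image, answer_text) demonstration pairs. Each
    '<image>' marks where that image's features are spliced into the
    token stream; images are returned separately, in splice order.
    """
    images, pieces = [], []
    for image, answer in demos:
        images.append(image)
        pieces.append(f"<image>{answer}")
    images.append(query_image)
    pieces.append(f"<image>{query_text}")  # the query comes last
    return "".join(pieces), images

prompt, images = build_interleaved_prompt(
    [("cat.jpg", " Output: a cat sleeping on a sofa."),
     ("dog.jpg", " Output: a dog chasing a ball.")],
    "bird.jpg",
    " Output:",
)
print(prompt)
```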

### Multimodal Chain-of-Thought

| Title | Venue | Date | Code | Demo |
| --- | --- | --- | --- | --- |
| EmbodiedGPT: Vision-Language Pre-Training via Embodied Chain of Thought | arXiv | 2023-05-24 | GitHub | - |
| Let's Think Frame by Frame: Evaluating Video Chain of Thought with Video Infilling and Prediction | arXiv | 2023-05-23 | - | - |
| Caption Anything: Interactive Image Description with Diverse Multimodal Controls | arXiv | 2023-05-04 | GitHub | Demo |
| Visual Chain of Thought: Bridging Logical Gaps with Multimodal Infillings | arXiv | 2023-05-03 | Coming soon | - |
| Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models | arXiv | 2023-04-19 | GitHub | Demo |
| Chain of Thought Prompt Tuning in Vision Language Models | arXiv | 2023-04-16 | Coming soon | - |
| MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action | arXiv | 2023-03-20 | GitHub | Demo |
| Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models | arXiv | 2023-03-08 | GitHub | Demo |
| Multimodal Chain-of-Thought Reasoning in Language Models | arXiv | 2023-02-02 | GitHub | - |
| Visual Programming: Compositional visual reasoning without training | CVPR | 2022-11-18 | GitHub | Local Demo |
| Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering | NeurIPS | 2022-09-20 | GitHub | - |
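
A representative recipe here is the two-stage framework of "Multimodal Chain-of-Thought Reasoning in Language Models": first generate a rationale from the multimodal input, then infer the answer conditioned on that rationale. A schematic sketch, where `model.generate` is a placeholder for any vision-language generation call, not a specific library API:

```python
def multimodal_cot(model, image, question, context=""):
    """Two-stage multimodal CoT: rationale generation, then answer
    inference conditioned on the generated rationale."""
    # Stage 1: produce an intermediate rationale from image + question.
    rationale = model.generate(
        image=image,
        text=f"{context}\nQuestion: {question}\nRationale:",
    )
    # Stage 2: answer, with the rationale appended to the input.
    answer = model.generate(
        image=image,
        text=f"{context}\nQuestion: {question}\n"
             f"Rationale: {rationale}\nAnswer:",
    )
    return rationale, answer
```

Separating the stages lets the answer model attend to a fully formed rationale instead of committing to an answer mid-generation.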

### LLM-Aided Visual Reasoning

| Title | Venue | Date | Code | Demo |
| --- | --- | --- | --- | --- |
| GPT4Tools: Teaching Large Language Model to Use Tools via Self-instruction | arXiv | 2023-05-30 | GitHub | Demo |
| LayoutGPT: Compositional Visual Planning and Generation with Large Language Models | arXiv | 2023-05-24 | GitHub | - |
| IdealGPT: Iteratively Decomposing Vision and Language Reasoning via Large Language Models | arXiv | 2023-05-24 | GitHub | Local Demo |
| Caption Anything: Interactive Image Description with Diverse Multimodal Controls | arXiv | 2023-05-04 | GitHub | Demo |
| Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models | arXiv | 2023-04-19 | GitHub | Demo |
| HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace | arXiv | 2023-03-30 | GitHub | Demo |
| MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action | arXiv | 2023-03-20 | GitHub | Demo |
| ViperGPT: Visual Inference via Python Execution for Reasoning | arXiv | 2023-03-14 | GitHub | Local Demo |
| ChatGPT Asks, BLIP-2 Answers: Automatic Questioning Towards Enriched Visual Descriptions | arXiv | 2023-03-12 | GitHub | Local Demo |
| Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models | arXiv | 2023-03-08 | GitHub | Demo |
| Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners | CVPR | 2023-03-03 | GitHub | - |
| PointCLIP V2: Adapting CLIP for Powerful 3D Open-world Learning | CVPR | 2022-11-21 | GitHub | - |
| Visual Programming: Compositional visual reasoning without training | CVPR | 2022-11-18 | GitHub | Local Demo |
| Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language | arXiv | 2022-04-01 | GitHub | - |
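
A recurring pattern in this section (e.g., ViperGPT and Visual Programming) is to have the LLM emit a short program over a library of vision primitives, then execute it to obtain the answer. A minimal sketch of that loop; `llm_complete` and the tool implementations are hypothetical stand-ins, not any paper's actual API:

```python
VISION_API_DOC = """
Available functions (hypothetical signatures):
  detect(image, label) -> list of bounding boxes for `label`
  caption(image) -> str description of the image
"""

def llm_aided_reasoning(llm_complete, tools, image, query):
    """Ask the LLM to write a program over the vision tools, then run it.

    llm_complete: text-completion callable (prompt -> code string).
    tools: dict mapping names like 'detect'/'caption' to implementations.
    """
    program = llm_complete(
        f"{VISION_API_DOC}\n"
        f"# Write Python that sets a variable `answer` for: {query}\n"
    )
    namespace = {"image": image, **tools}
    # Execute the generated code; real systems sandbox this step.
    exec(program, namespace)
    return namespace.get("answer")
```

The LLM never sees pixels; it only composes tools, which is why a text-only model can drive visual reasoning.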

### Foundation Models

| Title | Venue | Date | Code | Demo |
| --- | --- | --- | --- | --- |
| Transfer Visual Prompt Generator across LLMs | arXiv | 2023-05-02 | GitHub | Demo |
| GPT-4 Technical Report | arXiv | 2023-03-15 | - | - |
| PaLM-E: An Embodied Multimodal Language Model | arXiv | 2023-03-06 | - | Demo |
| Prismer: A Vision-Language Model with An Ensemble of Experts | arXiv | 2023-03-04 | GitHub | Demo |
| Language Is Not All You Need: Aligning Perception with Language Models | arXiv | 2023-02-27 | GitHub | - |
| BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models | arXiv | 2023-01-30 | GitHub | Demo |
| VIMA: General Robot Manipulation with Multimodal Prompts | ICML | 2022-10-06 | GitHub | Local Demo |
| MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge | NeurIPS | 2022-06-17 | GitHub | - |
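
Several of these models can be run directly from released checkpoints. As one example, BLIP-2 checkpoints are published on the HuggingFace Hub; the sketch below shows image captioning through the Transformers API (assuming transformers >= 4.27 and network access to the Salesforce checkpoint):

```python
# Minimal BLIP-2 captioning sketch via HuggingFace Transformers.
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

# A COCO validation image, used here purely as sample input.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```

The same processor-plus-generate pattern also covers visual question answering by additionally passing a text prompt to the processor.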

### Others

| Title | Venue | Date | Code | Demo |
| --- | --- | --- | --- | --- |
| Can Large Pre-trained Models Help Vision Models on Perception Tasks? | arXiv | 2023-06-01 | Coming soon | - |
| Contextual Object Detection with Multimodal Large Language Models | arXiv | 2023-05-29 | GitHub | Demo |
| Generating Images with Multimodal Language Models | arXiv | 2023-05-26 | GitHub | - |
| On Evaluating Adversarial Robustness of Large Vision-Language Models | arXiv | 2023-05-26 | GitHub | - |
| Evaluating Object Hallucination in Large Vision-Language Models | arXiv | 2023-05-17 | GitHub | - |
| Grounding Language Models to Images for Multimodal Inputs and Outputs | ICML | 2023-01-31 | GitHub | Demo |

## Awesome Datasets

### Datasets of Pre-Training for Alignment

| Name | Paper | Type | Modalities |
| --- | --- | --- | --- |
| MS-COCO | Microsoft COCO: Common Objects in Context | Caption | Image-Text |
| SBU Captions | Im2Text: Describing Images Using 1 Million Captioned Photographs | Caption | Image-Text |
| Conceptual Captions | Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning | Caption | Image-Text |
| LAION-400M | LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs | Caption | Image-Text |
| VG Captions | Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations | Caption | Image-Text |
| Flickr30k | Flickr30k Entities: Collecting Region-to-Phrase Correspondences for Richer Image-to-Sentence Models | Caption | Image-Text |
| AI-Caps | AI Challenger: A Large-scale Dataset for Going Deeper in Image Understanding | Caption | Image-Text |
| Wukong Captions | Wukong: A 100 Million Large-scale Chinese Cross-modal Pre-training Benchmark | Caption | Image-Text |
| Youku-mPLUG | Youku-mPLUG: A 10 Million Large-scale Chinese Video-Language Dataset for Pre-training and Benchmarks | Caption | Video-Text |
| MSR-VTT | MSR-VTT: A Large Video Description Dataset for Bridging Video and Language | Caption | Video-Text |
| Webvid10M | Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval | Caption | Video-Text |
| WavCaps | WavCaps: A ChatGPT-Assisted Weakly-Labelled Audio Captioning Dataset for Audio-Language Multimodal Research | Caption | Audio-Text |
| AISHELL-1 | AISHELL-1: An open-source Mandarin speech corpus and a speech recognition baseline | ASR | Audio-Text |
| AISHELL-2 | AISHELL-2: Transforming Mandarin ASR Research Into Industrial Scale | ASR | Audio-Text |
| VSDial-CN | X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages | ASR | Image-Audio-Text |
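
The caption-style corpora above are typically consumed with an image-text alignment objective, most commonly the symmetric contrastive (InfoNCE) loss used by CLIP and, as one of its training objectives, BLIP-2. A minimal sketch of that loss; the paired embeddings are assumed to come from your own encoders:

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired image/text embeddings.

    image_emb, text_emb: (B, D) tensors from paired (image, caption) data.
    Matched pairs sit on the diagonal of the similarity matrix; the loss
    pulls them together and pushes all other in-batch pairs apart.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature  # (B, B) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    # Contrast in both directions: image-to-text and text-to-image.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```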

### Datasets of Multimodal Instruction Tuning

| Name | Paper | Link | Notes |
| --- | --- | --- | --- |
| Video-ChatGPT | Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models | Link | A dataset of 100K high-quality video instruction pairs |
| MIMIC-IT | MIMIC-IT: Multi-Modal In-Context Instruction Tuning | Coming soon | Multimodal in-context instruction-tuning dataset |
| M3IT | M3IT: A Large-Scale Dataset towards Multi-Modal Multilingual Instruction Tuning | Link | Large-scale, broad-coverage multimodal instruction-tuning dataset |
| LLaVA-Med | LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day | Coming soon | A large-scale, broad-coverage biomedical instruction-following dataset |
| GPT4Tools | GPT4Tools: Teaching Large Language Model to Use Tools via Self-instruction | Link | Tool-related instruction datasets |
| MULTIS | ChatBridge: Bridging Modalities with Large Language Model as a Language Catalyst | Coming soon | Multimodal instruction-tuning dataset covering 16 multimodal tasks |
| DetGPT | DetGPT: Detect What You Need via Reasoning | Link | Instruction-tuning dataset with 5,000 images and around 30,000 query-answer pairs |
| PMC-VQA | PMC-VQA: Visual Instruction Tuning for Medical Visual Question Answering | Coming soon | Large-scale medical visual question-answering dataset |
| VideoChat | VideoChat: Chat-Centric Video Understanding | Link | Video-centric multimodal instruction dataset |
| X-LLM | X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages | Link | Chinese multimodal instruction dataset |
| OwlEval | mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality | Link | Dataset for evaluation on multiple capabilities |
| cc-sbu-align | MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models | Link | Multimodal aligned dataset for improving the model's usability and generation fluency |
| LLaVA-Instruct-150K | Visual Instruction Tuning | Link | Multimodal instruction-following data generated by GPT |
| MultiInstruct | MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning | - | The first multimodal instruction-tuning benchmark dataset |
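
Most of these instruction-tuning sets use a conversation-style JSON schema. The sketch below mirrors the record layout of the public LLaVA-Instruct-150K release (the example values are invented for illustration), with a local loading snippet; the file name matches the release, but verify it against your actual download.

```python
import json

# Hypothetical record in the LLaVA-Instruct-150K layout; the field names
# ("id", "image", "conversations", "from", "value") follow the public
# release, while the content below is made up.
sample = {
    "id": "000000000001",
    "image": "coco/train2017/000000000001.jpg",
    "conversations": [
        {"from": "human", "value": "<image>\nWhat is the man holding?"},
        {"from": "gpt", "value": "The man is holding a red umbrella."},
    ],
}

# Loading sketch, assuming the released JSON file was downloaded locally
# under its published name.
with open("llava_instruct_150k.json") as f:
    records = json.load(f)
print(records[0]["conversations"][0]["value"])
```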

### Datasets of In-Context Learning

| Name | Paper | Link | Notes |
| --- | --- | --- | --- |
| MIMIC-IT | MIMIC-IT: Multi-Modal In-Context Instruction Tuning | Coming soon | Multimodal in-context instruction dataset |

### Datasets of Multimodal Chain-of-Thought

| Name | Paper | Link | Notes |
| --- | --- | --- | --- |
| EgoCOT | EmbodiedGPT: Vision-Language Pre-Training via Embodied Chain of Thought | Coming soon | Large-scale embodied planning dataset |
| VIP | Let's Think Frame by Frame: Evaluating Video Chain of Thought with Video Infilling and Prediction | Coming soon | An inference-time dataset that can be used to evaluate VideoCOT |
| ScienceQA | Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering | Link | Large-scale multiple-choice dataset featuring multimodal science questions across diverse domains |
