👀 Browse and Concentrate: Comprehending Multimodal Content via prior-LLM Context Fusion

🌐 Homepage | 📖 arXiv | 🤗 Models

This repo includes code and examples for the paper Browse and Concentrate: Comprehending Multimodal Content via Prior-LLM Context Fusion.

Activities

  1. [2024-05-16] Our paper is accepted to ACL 2024 (main conference). Information about our training data has been updated.
  2. [2024-04-18] Code and cases for data generation released; the generated data are used for pretraining.
  3. [2024-03-18] Brote-IM-XXL model released; please download it from this link.
  4. [2024-02-26] Project released.

Framework

We propose Browse and Concentrate (Brote), a paradigm that incorporates multimodal context before feeding features into the LLM, together with two approaches implementing it: Brote-EX and Brote-IM. The model structures are shown in the figure below.

[Figure: model structures of Brote-EX and Brote-IM]
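To make the paradigm concrete, here is a minimal, hypothetical sketch in PyTorch. It only illustrates the core idea of building a global cross-image context while browsing and fusing it into per-image visual features before the LLM; the function names and the additive fusion are placeholders, not the actual Brote implementation (see the model files and test.py for the real code).

import torch

def browse(per_image_feats):
    # Browsing: skim all images to form one global, cross-image context vector.
    # per_image_feats: list of (num_tokens, dim) visual-token tensors.
    return torch.stack([f.mean(dim=0) for f in per_image_feats]).mean(dim=0)

def concentrate(per_image_feats, context):
    # Concentrating: condition each image's tokens on the global context
    # (a toy additive fusion here; Brote-EX/IM use learned fusion instead).
    return [f + context for f in per_image_feats]

# Toy usage: 3 images, 32 visual tokens each, hidden size 1408.
feats = [torch.randn(32, 1408) for _ in range(3)]
fused = concentrate(feats, browse(feats))  # these tokens would go to the LLM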

Instructions for Training and Inference

Data

Please refer to the data format described in MIC.
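For orientation, the snippet below shows an illustrative interleaved multi-image sample in the spirit of the MIC format. The field names here are placeholders we chose for illustration; consult the MIC repo for the exact schema.

# Illustrative sample only; field names are hypothetical, see MIC for the
# authoritative data format.
sample = {
    "input_text": "image 0 is <image0>. image 1 is <image1>. "
                  "Question: what changed between the two images?",
    "output_text": "The cat moved from the chair to the windowsill.",
    "input_images": ["images/0001_a.jpg", "images/0001_b.jpg"],
}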

  1. Data for pretraining.

We construct a dataset of 56k few-shot samples, yielding 191k training instances (one image per instance). These instances are designed to carry question-aware and cross-image information. The data construction pipeline is illustrated in the figure below.

[Figure: pretraining data construction pipeline]
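One plausible reading of the 56k-to-191k expansion (a hypothetical sketch, not the released pipeline) is that each multi-image few-shot sample is unrolled into one single-image training instance per image, reusing the sample's text:

def unroll(sample):
    # Hypothetical unrolling: one single-image instance per image in the
    # few-shot sample; field names follow the illustrative sample above.
    for image in sample["input_images"]:
        yield {
            "image": image,
            "input_text": sample["input_text"],
            "output_text": sample["output_text"],
        }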

If you would like to try our constructed pretraining data, please create an issue here, and we will contact you as soon as possible.

  2. Data for finetuning.

We sample about 500k instances from MIC for model finetuning.

Environment

pip install -r requirements.txt

Training

Coming soon.

Inference

Please refer to test.py. The files under the model directory are currently for testing only and will be updated for training soon.

To run the test script (ensure the required libraries are properly installed):

export CUDAID=0   # set your CUDA device id
export TASKID=all # a case id from 1 to 5, or the string "all" (lowercase)
CUDA_VISIBLE_DEVICES=$CUDAID python test.py $TASKID

Example

[Figure: example case]

(The 🐱 in this figure is a 6-year-old cat named Alan.)

Models

Please download our model from 🤗 Models.
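If you prefer to fetch checkpoints programmatically, a minimal sketch with huggingface_hub follows. The repo id below is a placeholder; substitute the actual id from the 🤗 Models link.

from huggingface_hub import snapshot_download

# Placeholder repo id -- replace it with the actual id from the 🤗 Models link.
local_dir = snapshot_download(repo_id="THUNLP-MT/Brote-IM-XXL")
print("checkpoint downloaded to", local_dir)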

Reference

📑 If you find our project helpful to your research, please consider citing:

@inproceedings{wang2024browse,
  title={Browse and Concentrate: Comprehending Multimodal Content via Prior-{LLM} Context Fusion},
  author={Wang, Ziyue and Chen, Chi and Zhu, Yiqi and Luo, Fuwen and Li, Peng and Yan, Ming and Zhang, Ji and Huang, Fei and Sun, Maosong and Liu, Yang},
  booktitle={Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics},
  year={2024}
}

Acknowledgement

Our models are built upon MMICL and InstructBLIP.
