
Where do Large Vision-Language Models Look at when Answering Questions?

The official repo for "Where do Large Vision-Language Models Look at when Answering Questions?" A PyTorch implementation of a saliency heatmap visualization method that interprets the open-ended responses of LVLMs conditioned on an image.

Installation

First clone this repository and navigate to the folder.
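For example (the repository path is taken from the project page):

$ git clone https://github.com/bytedance/LVLM_Interpretation.git
$ cd LVLM_Interpretation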

The environment installation mainly follows LLaVA. You can upgrade pip and install the dependencies with:

$ pip install --upgrade pip
$ bash install.sh

Model Preparation

For Mini-Gemini models, please follow the instructions in MGM to download the models and place them in the folders according to the Structure section there.

Quick Start

To generate the saliency heatmap of an LVLM while it produces free-form responses, run a command like the following, with the hyperparameters passed as arguments:

$ python3 main.py --method iGOS+ --model llava --dataset <dataset name> --data_path <path/to/questions> --image_folder <path/to/images> --output_dir <path/to/output> --size 32 --L1 1.0 --L2 0.1 --L3 10.0 --ig_iter 10 --gamma 1.0 --iterations 5 --momentum 5

The explanation of each argument can be found in args.py.
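As a rough illustration only (not the actual contents of args.py), the flags used in the example command above could be declared with argparse along these lines; the defaults mirror the example command and the help strings are assumptions:

import argparse

def build_parser():
    # Hypothetical sketch of an argument parser mirroring the example command.
    parser = argparse.ArgumentParser(description="Saliency heatmap visualization for LVLM responses")
    parser.add_argument("--method", type=str, default="iGOS+", help="visualization/attribution method")
    parser.add_argument("--model", type=str, default="llava", help="LVLM to interpret")
    parser.add_argument("--dataset", type=str, help="dataset name")
    parser.add_argument("--data_path", type=str, help="path to the question file")
    parser.add_argument("--image_folder", type=str, help="path to the images")
    parser.add_argument("--output_dir", type=str, help="directory for the generated heatmaps")
    parser.add_argument("--size", type=int, default=32, help="mask resolution (assumed)")
    parser.add_argument("--L1", type=float, default=1.0, help="loss weight (assumed)")
    parser.add_argument("--L2", type=float, default=0.1, help="loss weight (assumed)")
    parser.add_argument("--L3", type=float, default=10.0, help="loss weight (assumed)")
    parser.add_argument("--ig_iter", type=int, default=10, help="integration steps (assumed)")
    parser.add_argument("--gamma", type=float, default=1.0, help="scaling factor (assumed)")
    parser.add_argument("--iterations", type=int, default=5, help="optimization iterations (assumed)")
    parser.add_argument("--momentum", type=int, default=5, help="momentum parameter (assumed)")
    return parser

if __name__ == "__main__":
    print(build_parser().parse_args())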

Datasets

The datasets used to reproduce the results in the paper are available at https://huggingface.co/datasets/xiaoying0505/LVLM_Interpretation.
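One convenient way to download them locally is via the huggingface_hub library (a minimal sketch, assuming huggingface_hub is installed; the folder layout inside the dataset repository is not described here):

from huggingface_hub import snapshot_download

# Download the full dataset repository into the local Hugging Face cache
# and print the resulting path.
local_path = snapshot_download(
    repo_id="xiaoying0505/LVLM_Interpretation",
    repo_type="dataset",
)
print("Dataset downloaded to:", local_path)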

Acknowledgement

Parts of the code are built upon IGOS_pp, and we use the open-source LVLMs LLaVA-1.5, LLaVA-OneVision, Cambrian, and Mini-Gemini in this project. We thank the authors for their excellent work.
