
ChartMimic: Evaluating LMM's Cross-Modal Reasoning Capability via Chart-to-Code Generation

Data License | Code License | Python 3.9+

🌐 Website | 🏆 Leaderboard | 📚 Data | 📃 Paper

🎉 What's New

  • [2024.06.13] 📣 ChartMimic is released.

🎏 Introduction

ChartMimic assesses the visually grounded code-generation capabilities of large multimodal models (LMMs). It takes information-intensive visual charts and textual instructions as inputs and requires LMMs to generate the corresponding chart-rendering code.

ChartMimic includes 1,000 human-curated (figure, instruction, code) triplets representing authentic chart use cases found in scientific papers across various domains (e.g., Physics, Computer Science, Economics). These charts span 18 regular types and 4 advanced types, further divided into 191 subcategories. We also propose multi-level evaluation metrics that provide an automatic and thorough assessment of both the output code and the rendered charts. Unlike existing code-generation benchmarks, ChartMimic emphasizes evaluating LMMs' capacity to harmonize a blend of cognitive capabilities: visual understanding, code generation, and cross-modal reasoning.


🚀 Quick Start

Here we provide a quick-start guide for evaluating LMMs on ChartMimic.

Setup Environment

conda env create -f environment.yaml
conda activate chartmimic
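
To confirm the environment is active, a quick version check (the project requires Python 3.9+):

python --version  # should report Python 3.9 or newer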

Set up the environment variables in the .env file:

PROJECT_PATH=${YOUR_PROJECT_PATH}
OPENAI_BASE_URL=${YOUR_OPEN_AI_BASE_URL}
OPENAI_API_KEY=${YOUR_OPENAI_API_KEY}
ANTHROPIC_API_KEY=${YOUR_ANTHROPIC_API_KEY}
GOOGLE_API_KEY=${YOUR_GOOGLE_API_KEY}
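
Because the file holds plain KEY=VALUE lines, a quick sanity check is to source it and confirm a variable resolves (a minimal sketch; the evaluation scripts are expected to load .env themselves):

set -a; source .env; set +a  # export every KEY=VALUE pair from .env into the current shell
echo "${OPENAI_API_KEY:+OPENAI_API_KEY is set}"  # prints nothing if the key is missing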

Download Data

You can download the full evaluation dataset by running the following commands:

cd ChartMimic # cd to the root directory of this repository
mkdir dataset
wget https://huggingface.co/datasets/ChartMimic/ChartMimic/resolve/main/test.tar.gz
tar -xzvf test.tar.gz -C dataset
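
After extraction, the dataset directory should match the layout described in the Data section below:

ls dataset  # expected: customized_500  ori_500  test.jsonl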

Evaluate Models

Task 1: Direct Mimic

Example script for gpt-4-vision-preview on the Direct Mimic task:

export PROJECT_PATH=${YOUR_PROJECT_PATH}

# Step 1: Get Model Response
bash scripts/direct_mimic/run_generation.sh

# Step 2: Run the Code in the Response
bash scripts/direct_mimic/run_code.sh

# Step 3: Get Low-level Score
bash scripts/direct_mimic/run_evaluation_lowlevel.sh

# Step 4: Get High-level Score
bash scripts/direct_mimic/run_evaluation_highlevel.sh
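
The steps build on each other (the code is run from the generated responses, then scored), so the pipeline can also be chained into one unattended run:

bash scripts/direct_mimic/run_generation.sh && \
bash scripts/direct_mimic/run_code.sh && \
bash scripts/direct_mimic/run_evaluation_lowlevel.sh && \
bash scripts/direct_mimic/run_evaluation_highlevel.sh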

Task 2: Customized Mimic

Example script for gpt-4-vision-preview on the Customized Mimic task:

export PROJECT_PATH=${YOUR_PROJECT_PATH}

# Step 1: Get Model Response
bash scripts/customized_mimic/run_generation.sh

# Step 2: Run the Code in the Response
bash scripts/customized_mimic/run_code.sh

# Step 3: Get Low-level Score
bash scripts/customized_mimic/run_evaluation_lowlevel.sh

# Step 4: Get High-level Score
bash scripts/customized_mimic/run_evaluation_highlevel.sh

Different LMMs

We currently provide configurations for 14 state-of-the-art LMMs: gpt-4-vision-preview, claude-3-opus-20240229, gemini-pro-vision, Phi-3-vision-128k-instruct, MiniCPM-Llama3-V-2_5, InternVL-Chat-V1-5, cogvlm2-llama3-chat-19B, deepseekvl, llava-v1.6-mistral-7b-hf, llava-v1.6-34b-hf, idefics2-8b, llava-v1.6-vicuna-13b-hf, llava-v1.6-vicuna-7b-hf, and qwenvl.
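
The quick-start scripts above default to gpt-4-vision-preview; evaluating a different model presumably means changing the model identifier set inside the corresponding run script. A quick way to locate that setting (the grep pattern is only a heuristic):

grep -in "model" scripts/direct_mimic/run_generation.sh  # find the line that selects the model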

📚 Data

You can download the full evaluation dataset by running the following commands:

cd ChartMimic # cd to the root directory of this repository
mkdir dataset
wget https://huggingface.co/datasets/ChartMimic/ChartMimic/resolve/main/test.tar.gz
tar -xzvf test.tar.gz -C dataset

To help researchers quickly explore the evaluation data, we provide a Dataset Viewer on the Hugging Face Hub: 🤗 ChartMimic.

The file structure of evaluation data is as follows:

.
├── customized_500/ # Data for Customized Mimic
├── ori_500/  # Data for Direct Mimic
└── test.jsonl  # Data for both tasks
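
Since test.jsonl stores one JSON record per line (standard JSON Lines), the shell is enough for a first look; this prints the record count and pretty-prints the first example without assuming any particular field names:

wc -l dataset/test.jsonl  # number of evaluation records
head -n 1 dataset/test.jsonl | python -m json.tool  # pretty-print the first record's fields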

💬 Citation

If you find this repository useful, please consider giving it a star and citing our paper:

@article{shi2024chartmimic,
      title={ChartMimic: Evaluating LMM's Cross-Modal Reasoning Capability via Chart-to-Code Generation},
      author={Chufan Shi and Cheng Yang and Yaxin Liu and Bo Shui and Junjie Wang and Mohan Jing and Linran Xu and Xinyu Zhu and Siheng Li and Yuxiang Zhang and Gongye Liu and Xiaomei Nie and Deng Cai and Yujiu Yang},
      year={2024},
      journal={arXiv preprint arXiv:2406.09961}
}

📌 License

Apache-2.0 license

The ChartMimic data and codebase are licensed under the Apache-2.0 License.

πŸŽ™οΈ Acknowledgements

We would like to express our gratitude to agentboard for their project codebase.
