Multimodal Consistent Chain-of-Thought (MC-CoT): Boosting the Power of Small Multimodal Reasoning Models to Match Larger Models with Self-Consistency Training

This repository contains the code for the paper "Boosting the Power of Small Multimodal Reasoning Models to Match Larger Models with Self-Consistency Training". Our work focuses on enhancing the capabilities of smaller multimodal reasoning models to achieve performance comparable to larger models.

Abstract

Multimodal reasoning is a challenging task that requires models to reason across multiple modalities to answer questions. Existing approaches have made progress by incorporating language and visual modalities into a two-stage reasoning framework, separating rationale generation from answer inference. However, these approaches often fall short due to the inadequate quality of the generated rationales. In this work, we delve into the importance of rationales in model reasoning. We observe that when rationales are completely accurate, the model's accuracy significantly improves, highlighting the need for high-quality rationale generation. Motivated by this, we propose MC-CoT, a self-consistency training strategy that generates multiple rationales and answers, subsequently selecting the most accurate through a voting process. This approach not only enhances the quality of generated rationales but also leads to more accurate and robust answers. Through extensive experiments, we demonstrate that our approach significantly improves model performance across various benchmarks. Remarkably, we show that even smaller base models, when equipped with our proposed approach, can achieve results comparable to those of larger models, illustrating the potential of our approach in harnessing the power of rationales for improved multimodal reasoning.

A schematic comparison of different Chain-of-Thought (CoT) prompt-based reasoning methods, including:

  • Basic input-output prompt.

  • Chain-of-Thought with intermediate chain-like reasoning.

  • Chain-of-Thought Self-Consistency (CoT-SC), utilizing multiple independent thought chains.

  • Multimodal-CoT, inferring rationale using text and image inputs.

  • MC-CoT, which derives high-quality rationales through word-level voting (a minimal sketch of this voting step follows the figure caption below).

Figure: Framework Comparison of the methods listed above.
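
To make the word-level voting step concrete, here is a minimal Python sketch (an illustration only, not the repository's implementation) that keeps the most frequent token at each position across several sampled rationales:

from collections import Counter
from typing import List

def vote_rationale(rationales: List[List[str]]) -> List[str]:
    # For each token position, keep the token that most candidates agree on.
    length = min(len(r) for r in rationales)
    voted = []
    for pos in range(length):
        tokens_at_pos = [r[pos] for r in rationales]
        most_common_token, _ = Counter(tokens_at_pos).most_common(1)[0]
        voted.append(most_common_token)
    return voted

# Three rationales sampled for the same question (hypothetical example).
candidates = [
    "the magnet attracts the iron nail".split(),
    "the magnet attracts the steel nail".split(),
    "the magnet attracts the iron nail".split(),
]
print(" ".join(vote_rationale(candidates)))  # -> the magnet attracts the iron nail

In the actual training strategy, this kind of aggregation is applied to both the generated rationales and the final answers, as described in the abstract above; see the paper for the full procedure.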

Datasets

The models are trained and evaluated on two open-source datasets; the instructions below use ScienceQA.

The processed vision features for ScienceQA are available at huggingface vision features. all-MiniLM-L6-v2 and unifiedqa-t5-base can be downloaded from huggingface sentence-transformers and huggingface unifiedqa-t5-base, respectively.

The pretrained base model on ScienceQA is available at mc-cot/release/pretrained-base-model-on-scienceqa.

The folder with all related files looks like:

mc-cot
├── assets
├── results
│   ├── base_pretrained_scienceqa
│   │   ├── answer
│   │   │   ├── ...
│   │   ├── rationale
│   │   │   ├── ...
├── models
│   ├── all-MiniLM-L6-v2
│   ├── unifiedqa-t5-base
├── data
│   ├── vision_features
│   ├── scienceqa
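
As a quick sanity check that the downloads above are in place, the snippet below loads the two auxiliary models from the local models/ directory (a minimal sketch assuming the sentence-transformers and transformers packages are installed; the exact loading code used by the training scripts may differ):

from sentence_transformers import SentenceTransformer
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Paths follow the folder layout shown above.
sentence_encoder = SentenceTransformer("models/all-MiniLM-L6-v2")
tokenizer = T5Tokenizer.from_pretrained("models/unifiedqa-t5-base")
base_model = T5ForConditionalGeneration.from_pretrained("models/unifiedqa-t5-base")

print(type(sentence_encoder).__name__, base_model.config.model_type)  # SentenceTransformer t5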

Usage

To run inference with our pretrained weights (results/base_pretrained_scienceqa/), run run_eval_scienceqa.sh.

To train the model yourself, run run_train_scienceqa.sh.

Acknowledgements

We sincerely thank the authors of "Multimodal Chain-of-Thought Reasoning in Language Models": paper, code.

Reference

@article{tan2023boosting,
  title={Boosting the Power of Small Multimodal Reasoning Models to Match Larger Models with Self-Consistency Training},
  author={Tan, Cheng and Wei, Jingxuan and Gao, Zhangyang and Sun, Linzhuang and Li, Siyuan and Yang, Xihong and Li, Stan Z},
  journal={arXiv preprint arXiv:2311.14109},
  year={2023}
}
