CausalVLR: A Toolbox and Benchmark for Visual-Linguistic Causal Reasoning (an open-source framework for visual-linguistic causal reasoning)



📘Documentation | 🛠️Installation | 👀Model Zoo | 🆕Update News | 🚀Ongoing Projects | 🤔Reporting Issues


CausalVLR is a Python open-source framework, based on PyTorch, for causal relation discovery and causal inference. It implements state-of-the-art causal learning algorithms for various visual-linguistic reasoning tasks, such as VQA, image/video captioning, model generalization and robustness, and medical report generation; see the Documentation for details.

Framework Overview


Major features
  • Modular Design

    We decompose the causal framework of visual-linguistic tasks into different components, so a customized causal-reasoning framework can be easily constructed by combining different modules (see the illustrative sketch after this list).

  • Support for multiple tasks

    The toolbox directly supports multiple visual-linguistic reasoning tasks such as VQA, image/video captioning, medical report generation, model generalization and robustness, and so on.

  • State of the art

    The toolbox stems from the codebase developed by the HCPLab team, which is dedicated to solving a variety of complex logic tasks through causal reasoning, and we keep pushing it forward.
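
The following is a purely conceptual sketch of this modular design; the class and component names are hypothetical illustrations, not the actual CausalVLR API.

# Conceptual sketch only: the names below are hypothetical and do not
# correspond to the real CausalVLR API.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class CausalPipeline:
    """Composes independent components into one causal-reasoning pipeline."""
    encoder: Callable[[Any], Any]       # e.g. a visual-linguistic feature extractor
    intervention: Callable[[Any], Any]  # e.g. a causal intervention module
    head: Callable[[Any], Any]          # e.g. a task head (VQA, report generation, ...)

    def __call__(self, inputs: Any) -> Any:
        features = self.encoder(inputs)
        debiased = self.intervention(features)
        return self.head(debiased)

# Swapping any single component yields a new task-specific framework.
pipeline = CausalPipeline(encoder=lambda x: x,
                          intervention=lambda f: f,
                          head=lambda f: f)
print(pipeline("example input"))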

Note: The framework is actively being developed. Feedback (issues, suggestions, etc.) is highly encouraged.

🔥 2024.04.07.

🔥 2023.12.12.

🔥 2023.8.19.

  • v0.0.2 was released on 8/19/2023
  • Support for CaCo-CoT for the faithful reasoning task in LLMs

🔥 2023.6.29.



CaCo-CoT - Towards CausalGPT: A Multi-Agent Approach for Faithful Knowledge Reasoning via Promoting Causal Consistency in LLMs


| Method | ScienceQA | Com2Sense | BoolQ |
| --- | --- | --- | --- |
| GPT-3.5-turbo | 79.3 | 70.1 | 71.7 |
| CoT | 78.4 | 63.6 | 71.1 |
| SC-CoT | 84.0 | 66.0 | 71.4 |
| C-CoT | 82.5 | 68.8 | 70.5 |
| CaCo-CoT | 86.5 (+2.5) | 73.5 (+3.4) | 73.5 (+1.8) |

VLCI - Visual Causal Intervention for Radiology Report Generation


| Dataset | B@1 | B@2 | B@3 | B@4 | METEOR | ROUGE-L | CIDEr |
| --- | --- | --- | --- | --- | --- | --- | --- |
| IU-Xray | 50.5 | 33.4 | 24.5 | 18.9 | 20.4 | 39.7 | 45.6 |
| MIMIC-CXR | 40.0 | 24.5 | 16.5 | 11.9 | 15.0 | 28.0 | 19.0 |

CMCIR - Cross-modal Causal Intervention for Event-level Video Question Answering


| Method | Basic | Attribution | Introspection | Counterfactual | Forecasting | Reverse | All |
| --- | --- | --- | --- | --- | --- | --- | --- |
| VQAC | 34.02 | 49.43 | 34.44 | 39.74 | 38.55 | 49.73 | 36.00 |
| MASN | 33.83 | 50.86 | 34.23 | 41.06 | 41.57 | 50.80 | 36.03 |
| DualVGR | 33.91 | 50.57 | 33.40 | 41.39 | 41.57 | 50.62 | 36.07 |
| HCRN | 34.17 | 50.29 | 33.40 | 40.73 | 44.58 | 50.09 | 36.26 |
| CMCIR | 36.10 (+1.93) | 52.59 (+1.73) | 38.38 (+3.94) | 46.03 (+4.64) | 48.80 (+4.22) | 52.21 (+1.41) | 38.58 (+1.53) |

Please see Overview for a general introduction to CausalVLR.

For detailed user guides and advanced guides, please refer to our documentation; the code structure of the toolbox is shown in the figure below.

[Figure: code structure of the CausalVLR toolbox]

Installation

Please refer to Installation for installation instructions in documentation.

Briefly, CausalVLR can be installed from source with pip:

git clone https://github.com/HCPLab-SYSU/CausalVLR.git
cd CausalVLR
pip install -e .

or install from PyPI:

pip install hcpcvlr
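
After installing either way, a quick sanity check is to import the package. The top-level package name hcpcvlr comes from the PyPI command above; whether it exposes a version attribute is an assumption, so it is read defensively:

# Quick post-install check. The package name hcpcvlr matches the PyPI install
# command above; exposing __version__ is assumed, hence the getattr fallback.
import hcpcvlr

print(getattr(hcpcvlr, "__version__", "hcpcvlr imported (no __version__ attribute)"))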

Running examples

For causal discovery, there are various running examples in the test directory.

For the implemented modules, we provide unit tests to make it convenient to develop your own methods.
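
For example, assuming the unit tests follow a standard pytest layout under the test directory (an assumption; adjust the path to match the repository), they can be launched programmatically:

# Run the unit tests in the test directory. A pytest-style layout is assumed;
# this is equivalent to running `pytest test -v` from the repository root.
import sys

import pytest

sys.exit(pytest.main(["test", "-v"]))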

👀 Model Zoo 🔝

Please feel free to let us know if you have any recommendations for high-quality datasets. We are grateful for any effort that benefits the development of the causality community.

| Task | Model | Benchmark |
| --- | --- | --- |
| Medical Report Generation | VLCI | IU-Xray, MIMIC-CXR |
| VQA | CMCIR | SUTD-TrafficQA, TGIF-QA, MSVD-QA, MSRVTT-QA |
| Visual Causal Scene Discovery | VCSR | NExT-QA, Causal-VidQA, MSRVTT-QA |
| Model Generalization and Robustness | Robust Fine-tuning | ImageNet-V2, ImageNet-R, ImageNet-Sketch, ObjectNet, ImageNet-A |
| Causality-Aware Medical Diagnosis | CAMDA | MuZhi, DingXiang |
| Faithful Reasoning in LLMs | CaCo-CoT | ScienceQA, Com2Sense, BoolQ |

This project is released under the Apache 2.0 license.

If you find this project useful in your research, please consider citing:

@misc{liu2023causalvlr,
      title={CausalVLR: A Toolbox and Benchmark for Visual-Linguistic Causal Reasoning}, 
      author={Yang Liu and Weixing Chen and Guanbin Li and Liang Lin},
      year={2023},
      eprint={2306.17462},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Please feel free to open an issue if you find anything unexpected. We are always aiming to make our community better!

CausalVLR is an open-source project, and we appreciate all contributors who implement their methods or add new features, as well as users who give valuable feedback. We hope the toolbox and benchmark can serve the growing research community by providing a flexible toolkit for reimplementing existing methods and developing new models.

🪐 The following review paper may provide some help:

Causal Reasoning Meets Visual Representation Learning: A Prospective Study

Machine Intelligence Research (MIR), 2022
A review paper on causal reasoning and visual representation learning

@article{liu2022causal,
  title={Causal Reasoning Meets Visual Representation Learning: A Prospective Study},
  author={Liu, Yang and Wei, Yu-Shen and Yan, Hong and Li, Guan-Bin and Lin, Liang},
  journal={Machine Intelligence Research},
  pages={1--27},
  year={2022},
  publisher={Springer}
}
  • HCP-Diffusion is a toolbox for Stable Diffusion models based on 🤗 Diffusers. It provides flexible configurations and component support for training, in comparison with webui and sd-scripts.
