XLIP: Cross-modal Attention Masked Modelling for Medical Language-Image Pre-Training

Authors: Biao Wu*, Yutong Xie*, Zeyu Zhang, Minh Hieu Phan, Qi Chen, Ling Chen, Qi Wu⁺

*Equal contribution. ⁺Corresponding author: qi.wu01@adelaide.edu.au.

[Paper Link] [Papers With Code]

News

[07/30/2024] 🎉🎉 Our paper has been promoted by CVer!

Abstract

Vision-and-language pretraining (VLP) in the medical field uses contrastive learning on image-text pairs to achieve effective transfer across tasks. Yet current VLP approaches with a masked modelling strategy face two challenges in the medical domain. First, current models struggle to accurately reconstruct key pathological features due to the scarcity of medical data. Second, most methods adopt only paired image-text data or image-only data, failing to exploit the combination of paired and unpaired data. To this end, this paper proposes XLIP (Masked modelling for medical Language-Image Pre-training), a framework that enhances pathological learning and feature learning via unpaired data. First, we introduce the attention-masked image modelling (AttMIM) module and the entity-driven masked language modelling (EntMLM) module, which learn to reconstruct pathological visual and textual tokens via multi-modal feature interaction, thus improving medically enhanced features. The AttMIM module masks the portion of the image features that responds most strongly to the textual features, allowing XLIP to reconstruct highly similar medical image data more efficiently. Second, XLIP capitalizes on unpaired data to enhance multimodal learning by introducing disease-kind prompts. Experimental results show that XLIP achieves state-of-the-art zero-shot and fine-tuned classification performance on five datasets.
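
To make the AttMIM idea above concrete, here is a minimal PyTorch sketch of cross-modal attention masking: the image patches whose attention response to the text is strongest are the ones selected for masking before reconstruction. The tensor shapes, masking ratio, and function name are illustrative assumptions, not the code in this repository.

# Illustrative sketch of attention-masked image modelling (AttMIM):
# patches most responsive to the text are masked. Shapes and names are
# assumptions for illustration only.
import torch

def attmim_mask(image_tokens, text_tokens, mask_ratio=0.5):
    """image_tokens: (B, N, D) patch embeddings; text_tokens: (B, M, D).
    Returns masked patch embeddings and a boolean mask of shape (B, N)."""
    # Cross-modal attention scores: how strongly each patch attends to the text.
    scores = torch.einsum("bnd,bmd->bnm", image_tokens, text_tokens)
    response = scores.softmax(dim=-1).max(dim=-1).values          # (B, N)

    # Mark the top-k most text-responsive patches for masking.
    k = int(mask_ratio * image_tokens.size(1))
    topk = response.topk(k, dim=1).indices                        # (B, k)
    mask = torch.zeros_like(response, dtype=torch.bool)
    mask.scatter_(1, topk, torch.ones_like(topk, dtype=torch.bool))

    # Replace masked patches (zeros stand in for a learnable mask token).
    masked = image_tokens.masked_fill(mask.unsqueeze(-1), 0.0)
    return masked, mask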

Introduction

XLIP (X-ray Language-Image Pre-training) is a multimodal model designed to bridge the gap between medical text and X-ray images. Inspired by OpenAI's CLIP, XLIP aims to provide a unified feature space for both text and images, specifically focusing on the medical domain.
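Since the Introduction frames XLIP as CLIP-inspired, the standard CLIP-style contrastive objective sketched below may help picture the "unified feature space". The encoder outputs and temperature are placeholders, not XLIP's actual modules or hyperparameters.

# Minimal sketch of a CLIP-style symmetric contrastive loss over a batch of
# paired image/text embeddings. Placeholder values only.
import torch
import torch.nn.functional as F

def contrastive_loss(image_features, text_features, temperature=0.07):
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)

    # Pairwise cosine similarities scaled by temperature: (B, B).
    logits = image_features @ text_features.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Matched pairs lie on the diagonal; pull them together from both directions.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2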

Citation

@article{wu2024xlip,
  title={XLIP: Cross-modal Attention Masked Modelling for Medical Language-Image Pre-Training},
  author={Wu, Biao and Xie, Yutong and Zhang, Zeyu and Phan, Minh Hieu and Chen, Qi and Chen, Ling and Wu, Qi},
  journal={arXiv preprint arXiv:2407.19546},
  year={2024}
}

Features

  • Multimodal Understanding: XLIP is trained to understand both text and X-ray images, facilitating various downstream medical tasks.
  • Domain-Specific: Unlike general-purpose models, XLIP is trained on a specialized dataset of medical text and X-ray images.
  • Easy to Use: With a simple API and clear documentation, integrating XLIP into your workflow is seamless.

Requirements

  • Python >= 3.7
  • PyTorch >= 1.8
  • CUDA-compatible GPU (optional but recommended)

Installation

To install XLIP, clone this repository and install the required packages:

git clone https://github.com/White65534/XLIP.git
cd XLIP
pip install -r requirements.txt

Quick Start

To encode an X-ray image and a medical text snippet into the same feature space, you can use the following code:

from xlip import XLIPModel

# Initialize model
model = XLIPModel()

# Sample X-ray image and text
xray_image = "path/to/xray/image.jpg"
medical_text = "This X-ray shows signs of pneumonia."

# Encode into feature space
image_features, text_features = model.encode(xray_image, medical_text)
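
Building on the snippet above, a common next step is zero-shot classification against a set of disease prompts (cf. the disease-kind prompts mentioned in the abstract). The sketch below assumes model.encode returns PyTorch tensors and scores one prompt at a time; the prompt wording and class list are illustrative, so verify against the actual API before relying on it.

# Hedged sketch of zero-shot disease classification on top of the encode API
# shown above. Assumes model.encode returns PyTorch tensors.
import torch
import torch.nn.functional as F

from xlip import XLIPModel

model = XLIPModel()
xray_image = "path/to/xray/image.jpg"

# One disease-kind prompt per candidate class (illustrative).
prompts = [
    "This X-ray shows signs of pneumonia.",
    "This X-ray shows signs of cardiomegaly.",
    "This X-ray shows no abnormal findings.",
]

scores = []
for prompt in prompts:
    image_features, text_features = model.encode(xray_image, prompt)
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)
    # Cosine similarity between the image and this prompt.
    scores.append((image_features * text_features).sum(dim=-1))

probs = torch.softmax(torch.stack(scores, dim=-1), dim=-1)
print("Predicted class:", prompts[probs.argmax(dim=-1).item()])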

Contributing

We welcome contributions to XLIP! If you have a feature request, bug report, or want to contribute code, please open an issue or pull request.

License

This project is licensed under the MIT License - see the LICENSE file for details.

