ICON: Improving Inter-Report Consistency of Radiology Report Generation via Lesion-aware Mix-up Augmentation

This repository contains the implementation of ICON: Improving Inter-Report Consistency of Radiology Report Generation via Lesion-aware Mix-up Augmentation. Before running the code, please install the prerequisite libraries and follow our instructions to replicate the experiments. Code and model checkpoints are coming soon.

Overview

Previous research on radiology report generation has made significant progress in increasing the clinical accuracy of generated reports. In this paper, we emphasize another crucial quality that generated reports should possess: inter-report consistency, i.e., the ability to generate consistent reports for semantically equivalent radiographs. This quality is even more important than overall report accuracy for ensuring the system's credibility, since a system prone to producing conflicting results would severely erode users' trust. Regrettably, existing approaches struggle to maintain inter-report consistency, exhibiting biases towards common patterns and susceptibility to lesion variants. To address this issue, we propose ICON, which improves the inter-report consistency of radiology report generation. To enhance the system's ability to capture the similarities between semantically equivalent lesions, our approach first extracts lesions from the input images and examines their characteristics. We then introduce a lesion-aware mix-up augmentation technique that aligns the representations of semantically equivalent lesions with the same attributes by linearly interpolating them during training. Extensive experiments on three publicly available chest X-ray datasets verify the effectiveness of our approach in improving both the consistency and the accuracy of the generated reports.
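At the core of the approach is a simple linear interpolation between the representations of semantically equivalent lesions. The sketch below illustrates that idea only; it assumes PyTorch tensors and a Beta-sampled mixing coefficient, and the function and variable names (lesion_mixup, feat_a, feat_b, alpha) are hypothetical rather than the repository's actual API.

import torch

# Illustrative lesion-aware mix-up: linearly interpolate two representations of
# semantically equivalent lesions during training (names are hypothetical).
def lesion_mixup(lesion_a: torch.Tensor, lesion_b: torch.Tensor, alpha: float = 0.4) -> torch.Tensor:
    lam = torch.distributions.Beta(alpha, alpha).sample()  # mixing coefficient in (0, 1)
    return lam * lesion_a + (1.0 - lam) * lesion_b

feat_a = torch.randn(1, 512)  # representation of a lesion from one radiograph
feat_b = torch.randn(1, 512)  # representation of the matching lesion from an equivalent radiograph
mixed = lesion_mixup(feat_a, feat_b)  # interpolated representation used during training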

Requirements

  • python>=3.9.0
  • torch==2.1.0
  • transformers==4.36.2

Please install the dependencies with the following commands:

conda env create -f environment.yml # Untested
conda activate icon
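After activating the environment, a quick check (purely illustrative) confirms that the interpreter and the pinned packages match the requirements above:

import sys
import torch
import transformers

print(sys.version)               # expected >= 3.9
print(torch.__version__)         # expected 2.1.0
print(transformers.__version__)  # expected 4.36.2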

Data Preparation and Preprocessing

Please download the three datasets: IU X-ray, MIMIC-ABN and MIMIC-CXR, and put the annotation files into the data folder.

  • For observation preprocessing, we use CheXbert to extract relevant observation information. Please follow its instructions to extract the observation tags.
  • For CE evaluation, please clone CheXbert into this folder and download the checkpoint chexbert.pth into the CheXbert directory:
git clone https://github.com/stanfordmlgroup/CheXbert.git
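CheXbert labels each report with a set of observations. The snippet below is a hedged sketch of turning a labeled CSV into per-report observation tags; the file name (labeled_reports.csv), the "Report Impression" column, and the 1 / 0 / -1 / blank label convention are assumptions about CheXbert's output format, so adjust them to the files it actually produces.

import pandas as pd

# Hypothetical post-processing of a CheXbert-labeled CSV into observation tags.
labels = pd.read_csv("labeled_reports.csv")  # assumed output of CheXbert labeling
observation_cols = [c for c in labels.columns if c != "Report Impression"]

def to_tags(row):
    tags = []
    for obs in observation_cols:
        if row[obs] == 1:
            tags.append(f"{obs}:positive")
        elif row[obs] == 0:
            tags.append(f"{obs}:negative")
        elif row[obs] == -1:
            tags.append(f"{obs}:uncertain")
    return tags  # blank entries (unmentioned observations) are skipped

labels["tags"] = labels.apply(to_tags, axis=1)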

Model Checkpoints

Model checkpoints for two of the datasets are available at:

Citation

If you use ICon, please cite our paper:

@inproceedings{hou-etal-2024-icon,
    title = "{ICON}: Improving Inter-Report Consistency of Radiology Report Generation via Lesion-aware Mix-up Augmentation",
    author = "Hou, Wenjun and Cheng, Yi and Xu, Kaishuai and Hu, Yan and Li, Wenjie and Liu, Jiang",
}
