[CVPRW 2023] Official implementation of "Benchmarking Robustness to Text-Guided Corruptions".

Benchmarking Robustness to Text-Guided Corruptions

Mohammadreza Mofayezi and Yasamin Medghalchi

arXiv · Poster

Abstract

This study investigates the robustness of image classifiers to text-guided corruptions. We use diffusion models to edit images into different domains. Unlike prior work that benchmarks on synthetic or hand-picked data, we rely on diffusion models because, as generative models, they can learn to edit images while preserving their semantic content. The resulting corruptions are therefore more realistic and the comparison more informative. Moreover, no manual labeling is needed, so large-scale benchmarks can be created with less effort. We define a prompt hierarchy based on the original ImageNet hierarchy to apply edits across different domains. Besides introducing a new benchmark, we investigate the robustness of different vision models. The results demonstrate that the performance of image classifiers decreases significantly under different language-based corruptions and edit domains. We also observe that convolutional models are more robust than transformer architectures, and that common data augmentation techniques improve performance on both the original data and the edited images. These findings can help improve the design of image classifiers and contribute to the development of more robust machine learning systems.
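To illustrate the prompt-hierarchy idea mentioned above, here is a minimal sketch. The domain names and prompt templates below are invented for illustration; the repository's actual hierarchy is derived from the ImageNet class tree.

```python
# Hypothetical prompt hierarchy: each edit domain maps to a list of
# prompt templates that get instantiated per class name.
# These domains/templates are illustrative assumptions, not the repo's.
prompt_hierarchy = {
    "weather": ["a photo of a {cls} in the snow",
                "a photo of a {cls} in the fog"],
    "style":   ["a painting of a {cls}",
                "a sketch of a {cls}"],
}

def prompts_for(cls_name):
    """Expand every template in the hierarchy for one class name."""
    return [t.format(cls=cls_name)
            for group in prompt_hierarchy.values()
            for t in group]

print(prompts_for("goldfish")[0])  # → a photo of a goldfish in the snow
```

Each edit prompt is then handed to the diffusion model to produce a semantically preserved but domain-shifted version of the source image.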


Getting started

Requirements

The code requires Python 3.8 or later. The file requirements.txt contains the full list of required Python modules.

pip install -r requirements.txt

Resources

The code was tested on a GeForce RTX 3090 (24 GB) but should work on other cards with at least 12 GB of VRAM.

Generating the Data

You can generate the text-guided benchmark using the command below:

python generate_data.py --dataset_path /imagenet/val --output_path data-100-10 --num_classes 100 --num_images 10 --sub_class all --seed 10

Note that you need to specify the path to the ImageNet validation set with the --dataset_path argument.
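For reference, the flags in the command above could be wired up with argparse roughly as follows. This is a sketch based only on the command shown; the defaults and help strings are assumptions, not the repository's actual values.

```python
import argparse

def build_parser():
    """Parser mirroring the flags in the generate_data.py command above.
    Defaults and help text are illustrative assumptions."""
    p = argparse.ArgumentParser(description="Generate the text-guided benchmark")
    p.add_argument("--dataset_path", required=True,
                   help="Path to the ImageNet validation set")
    p.add_argument("--output_path", default="data-100-10",
                   help="Where to write the edited images")
    p.add_argument("--num_classes", type=int, default=100,
                   help="Number of ImageNet classes to sample")
    p.add_argument("--num_images", type=int, default=10,
                   help="Number of images per class")
    p.add_argument("--sub_class", default="all",
                   help="Restrict edits to one prompt sub-domain, or 'all'")
    p.add_argument("--seed", type=int, default=10,
                   help="Random seed for reproducible sampling")
    return p

# Parse the same flags as the example command.
args = build_parser().parse_args(
    "--dataset_path /imagenet/val --num_classes 100 --num_images 10".split()
)
print(args.num_classes, args.sub_class)  # → 100 all
```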

Evaluating Models

You can run the evaluation code using the command below:

python evaluate.py --data_path ./data-100-10/ --output_path data-100-10
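The core of such an evaluation is aggregating top-1 accuracy per edit domain. A minimal sketch of that aggregation is below; it assumes predictions have already been collected as (domain, is_correct) pairs, and the repo's evaluate.py may aggregate results differently.

```python
from collections import defaultdict

def per_domain_accuracy(records):
    """Compute top-1 accuracy per edit domain.

    records: iterable of (edit_domain, is_correct) pairs, e.g. gathered
    while running a classifier over the edited images. Illustrative only.
    """
    hits = defaultdict(int)    # correct predictions per domain
    totals = defaultdict(int)  # total images per domain
    for domain, correct in records:
        totals[domain] += 1
        hits[domain] += int(correct)
    return {d: hits[d] / totals[d] for d in totals}

# Hypothetical results for two edit domains.
preds = [("painting", True), ("painting", False),
         ("snowy", True), ("snowy", True)]
print(per_domain_accuracy(preds))  # → {'painting': 0.5, 'snowy': 1.0}
```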

Acknowledgments

The overall code framework was adapted from robustness. The code for making image edits was borrowed from prompt-to-prompt.

Citation

@InProceedings{Mofayezi_2023_CVPR,
    author    = {Mofayezi, Mohammadreza and Medghalchi, Yasamin},
    title     = {Benchmarking Robustness to Text-Guided Corruptions},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2023},
    pages     = {779-786}
}
