
Bridging the Gap between Model Explanations in Partially Annotated Multi-label Classification (CVPR 2023) | Paper

Project page | Poster | Slide | YouTube

Youngwook Kim1, Jae Myung Kim2, Jieun Jeong1,3, Cordelia Schmid4, Zeynep Akata2,5, and Jungwoo Lee1,3

1 Seoul National University 2 University of Tübingen 3 HodooAI Lab 4 Inria, École normale supérieure, CNRS, PSL Research University 5 MPI for Intelligent Systems

Primary contact : ywkim@cml.snu.ac.kr (Homepage)

Figures: Observation on Model Explanation | Proposed Method (BoostLU)

Abstract

Due to the high cost of collecting labels for multi-label classification datasets, partially annotated multi-label classification has become an emerging field in computer vision. One baseline approach to this task is to treat unobserved labels as negative labels, but this assumption induces label noise in the form of false negatives. To understand the negative impact caused by false negative labels, we study how these labels affect the model's explanation. We observe that the explanations of two models, trained with full and partial labels respectively, highlight similar regions but with different scaling, where the latter tends to have lower attribution scores. Based on these findings, we propose to boost the attribution scores of the model trained with partial labels to make its explanation resemble that of the model trained with full labels. Despite the conceptual simplicity of this approach, multi-label classification performance improves by a large margin on three datasets in the single positive label setting and on one dataset in the large-scale partial label setting.
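For intuition, below is a minimal PyTorch sketch of the boosting step, assuming the element-wise form BoostLU(x) = max(x, αx) with α > 1 applied to the class activation map (CAM); α corresponds to the --alpha training flag below. This is an illustration of the idea, not the exact implementation in this repository.

import torch

def boost_lu(cam: torch.Tensor, alpha: float = 5.0) -> torch.Tensor:
    # Assumed form: BoostLU(x) = max(x, alpha * x).
    # Positive attribution scores are scaled up by alpha (> 1);
    # non-positive scores are left unchanged.
    return torch.maximum(cam, alpha * cam)

# Example: boost a dummy 7x7 class activation map.
cam = torch.randn(7, 7)
boosted = boost_lu(cam, alpha=5.0)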

Dataset Preparation

See the README.md file in the data directory for instructions on downloading and setting up the datasets.

Model Training & Evaluation

You can train and evaluate the models by running:

python main.py --dataset [dataset] \
               --largelossmod_scheme [scheme] \
               --lr 1e-5 --num_epochs 10 --alpha 5

where [dataset] is one of {pascal, coco, nuswide, cub} and [scheme] is one of {LL-R, LL-Ct, LL-Cp}.
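For example, to train on COCO with the LL-R scheme using the hyperparameters above:

python main.py --dataset coco \
               --largelossmod_scheme LL-R \
               --lr 1e-5 --num_epochs 10 --alpha 5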

For now, we only support training code for the single positive label datasets. Support for OpenImages will be added soon.

Quantitative Results

Results tables: Single positive label | Large-scale partial label (OpenImages V3)

Qualitative Results

How to cite

If you find our work helpful, please consider citing our paper.

@InProceedings{Kim_2023_CVPR,
    author    = {Kim, Youngwook and Kim, Jae Myung and Jeong, Jieun and Schmid, Cordelia and Akata, Zeynep and Lee, Jungwoo},
    title     = {Bridging the Gap Between Model Explanations in Partially Annotated Multi-Label Classification},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {3408-3417}
}

Acknowledgements

Our code is heavily built upon Multi-Label Learning from Single Positive Labels and Large Loss Matters in Weakly Supervised Multi-Label Classification.
