
Unsupervised Object Localization with Representer Point Selection

Yeonghwan Song, Seokwoo Jang, Dina Katabi, Jeany Son
Gwangju Institute of Science and Technology, Massachusetts Institute of Technology
In ICCV 2023.

Paper | arXiv

Overview

This paper introduces a novel unsupervised object localization method that leverages self-supervised pre-trained models without additional fine-tuning. Traditional methods often rely on class-agnostic activation maps or self-similarity maps from a pre-trained model, but such maps are limited in their ability to explain the model's predictions.

This work instead proposes an unsupervised object localization technique based on representer point selection: the model's predictions are expressed as a linear combination of representer values of training points. By selecting the representer points, i.e., the training examples most influential on a prediction, the model can explain its prediction process by showing relevant examples and their importance. The proposed method surpasses state-of-the-art unsupervised and self-supervised object localization methods on various datasets, and even outperforms recent weakly supervised and few-shot methods.
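At its core, the method applies a form of the representer theorem: the prediction for a query is decomposed over the training set. In schematic form (the notation here is ours and simplified, not the paper's exact formulation):

$$\hat{y}(x) = \sum_{i=1}^{n} \alpha_i \, k(x, x_i)$$

where x_i are training points, k is a similarity computed in the self-supervised feature space, and alpha_i are the representer values. The training points contributing most to this sum are the representer points, which both drive the localization and serve as interpretable evidence for the prediction.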

If you find this repository useful for your publications, please consider citing our paper.

@InProceedings{Song_2023_ICCV,
    author    = {Song, Yeonghwan and Jang, Seokwoo and Katabi, Dina and Son, Jeany},
    title     = {Unsupervised Object Localization with Representer Point Selection},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {6534-6544}
}

Dependencies

- pytorch >= 1.10.0
- torchvision >= 0.11.0
- efficientnet-pytorch >= 0.7.1
- tqdm >= 4.65.0
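
Assuming a standard pip setup (note that the PyPI package for PyTorch is named torch, not pytorch), the dependencies can be installed with, for example:

pip install "torch>=1.10.0" "torchvision>=0.11.0" "efficientnet-pytorch>=0.7.1" "tqdm>=4.65.0"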

Dataset

To evaluate our model on each dataset, download the images into your data root directory and structure the data directory referring to this repository.

Inference

- Download the pre-trained weights: Google Drive

- Run the following commands to reproduce our results:

ImageNet-1K / OpenImages30K

python main.py --dataset EVALUATION_DATASET --loc_network EVALUATION_NETWORK --data_dir YOUR_DATAROOT

CUB-200-2011 / Stanford Cars / FGVC-Aircraft / Stanford Dogs

python main.py --dataset EVALUATION_DATASET --loc_network EVALUATION_NETWORK --data_dir YOUR_DATAROOT --image_size 480 --crop_size 448 --resnet_downscale 32 

Segmentation on CUB-200-2011

python main.py --dataset CUBSEG --loc_network EVALUATION_NETWORK --data_dir YOUR_DATAROOT --image_size 480 --crop_size 448 --resnet_downscale 32 
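
As a concrete illustration, a hypothetical ImageNet-1K run might look as follows (the values ImageNet and ViT for --dataset and --loc_network are illustrative placeholders; check main.py for the keys the code actually accepts):

# hypothetical argument values; see main.py for the accepted keys
python main.py --dataset ImageNet --loc_network ViT --data_dir /path/to/your/data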

Other Arguments

The commands above accept a few additional flags (a combined example follows below):

- Employing class-specific parameters: --classwise
- Setting the sampling ratio, e.g. --sampling_ratio 0.1
- Zero-shot transfer, e.g. --base_dataset CIFAR10
- Evaluating with a classifier, e.g. --cls_network ResNet50
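
These flags can be combined with the commands above. For instance, a hypothetical CUB-200-2011 run with class-specific parameters and a 10% sampling ratio (again, the --dataset and --loc_network values are illustrative):

# hypothetical argument values; see main.py for the accepted keys
python main.py --dataset CUB --loc_network ViT --data_dir /path/to/your/data --image_size 480 --crop_size 448 --resnet_downscale 32 --classwise --sampling_ratio 0.1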
