ConR on IMDB-WIKI-DIR

This repository contains the implementation of ConR on IMDB-WIKI-DIR dataset.

The imbalanced regression framework and LDS+FDS are based on the public repository of Gong et al., ICML 2022.

Installation

Prerequisites

  1. Download and extract IMDB faces and WIKI faces respectively using
python download_imdb_wiki.py
  2. We use the standard train/val/test split file (imdb_wiki.csv in folder ./data) provided by Yang et al. (ICML 2021), which sets up the balanced val/test sets. To reproduce the results in the paper, please use this file directly. You can also generate it using
python data/create_imdb_wiki.py
python data/preprocess_imdb_wiki.py
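
As a quick sanity check after generating the meta file, you can count samples per split. A minimal sketch, using an in-memory stand-in for ./data/imdb_wiki.csv; the column name `split` is our assumption, not guaranteed by this repo:

```python
import csv
import io
from collections import Counter

# Tiny stand-in for ./data/imdb_wiki.csv; with the real file, pass an open
# file handle instead of this buffer. The 'split' column name is an assumption.
buf = io.StringIO(
    "path,age,split\n"
    "a.jpg,25,train\n"
    "b.jpg,70,val\n"
    "c.jpg,1,test\n"
    "d.jpg,30,train\n"
)

# Count how many rows fall into each partition.
counts = Counter(row["split"] for row in csv.DictReader(buf))
print(dict(counts))  # -> {'train': 2, 'val': 1, 'test': 1}
```

With the real meta file, the val and test counts should be much smaller than train and roughly uniform across the age range, since the split is designed to be balanced.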

Dependencies

  • PyTorch (>= 1.2, tested on 1.6)
  • numpy, pandas, scipy, tqdm, matplotlib, PIL, wget

Code Overview

Main Files

  • train.py: main training and evaluation script
  • create_imdb_wiki.py: create IMDB-WIKI raw meta data
  • preprocess_imdb_wiki.py: create IMDB-WIKI-DIR meta file imdb_wiki.csv with balanced val/test set

Main Arguments

  • --data_dir: data directory to place data and meta file
  • --reweight: cost-sensitive re-weighting scheme to use
  • --loss: training loss type
  • --conr: whether to use ConR or not
  • -w: distance threshold (default 1.0)
  • --beta: the scale of ConR loss (default 4.0)
  • -t: temperature (default 0.2)
  • -e: pushing power scale (default 0.01)
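
To give a feel for how the `-w`, `-t`, `--beta`, and `-e` knobs interact, here is a heavily simplified, illustrative sketch of a ConR-style contrastive term. This is our own NumPy approximation, not the repository's implementation: pairs whose label distance is below the threshold w are treated as positives, the rest as negatives, with temperature t scaling the similarities and e scaling the push on distant negatives.

```python
import numpy as np

def conr_style_loss(feats, labels, w=1.0, t=0.2, e=0.01):
    """Illustrative simplification of a ConR-style regularizer (NOT the
    official implementation). feats: (N, D) L2-normalized features;
    labels: (N,) regression targets."""
    n = len(labels)
    sim = feats @ feats.T / t                          # temperature-scaled similarity
    dist = np.abs(labels[:, None] - labels[None, :])   # pairwise label distance
    pos = (dist < w) & ~np.eye(n, dtype=bool)          # positives: labels closer than w
    neg = dist >= w                                    # negatives: labels at least w apart
    push = np.exp(e * dist)                            # pushing weight grows with label distance
    loss, count = 0.0, 0
    for i in range(n):
        if not pos[i].any():
            continue                                   # anchors without positives are skipped
        neg_term = (push[i][neg[i]] * np.exp(sim[i][neg[i]])).sum()
        for j in np.flatnonzero(pos[i]):
            # InfoNCE-style: pull positives together, push weighted negatives apart.
            loss -= np.log(np.exp(sim[i, j]) / (np.exp(sim[i, j]) + neg_term))
            count += 1
    return loss / max(count, 1)

# Toy usage: two samples close in age form a positive pair; the others are negatives.
rng = np.random.default_rng(0)
f = rng.normal(size=(4, 8))
f /= np.linalg.norm(f, axis=1, keepdims=True)
print(conr_style_loss(f, np.array([25.0, 25.4, 70.0, 1.0])))
```

In training, this term would be scaled by `--beta` and added to the regression loss; the real ConR objective differs in detail, so treat this only as an intuition aid for the arguments above.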

Getting Started

1. Train baselines

To train the vanilla (baseline) model:

python train.py --batch_size 64 --lr 2.5e-4

2. Train a model with ConR

With batch size 64 and learning rate 2.5e-4:
python train.py --batch_size 64 --lr 2.5e-4 --conr -w 1.0 --beta 4.0 -e 0.01

3. Evaluate and reproduce

If you do not wish to train the models yourself, you can evaluate them and reproduce our results directly using the pretrained weights from the anonymous links below.

python train.py --evaluate [...evaluation model arguments...] --resume <path_to_evaluation_ckpt>