End-to-End Efficient Representation Learning via Cascading Combinatorial Optimization

This repository contains the source code for the paper "End-to-End Efficient Representation Learning via Cascading Combinatorial Optimization" (CVPR 2019).

Citing this work

@inproceedings{jeongCVPR19,
    title={End-to-End Efficient Representation Learning via Cascading Combinatorial Optimization},
    author={Jeong, Yeonwoo and Kim, Yoonsung and Song, Hyun Oh},
    booktitle={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2019}
}

Installation

Prerequisites

  1. Create the experiment directories and set the path (=ROOT) in configs/path.py
ROOT = '(user enter path)'
cd (ROOT)
mkdir exp_results
mkdir cifar_processed
  2. Download and unzip the CIFAR-100 dataset
cd (ROOT)
wget https://www.cs.toronto.edu/~kriz/cifar-100-python.tar.gz
tar zxvf cifar-100-python.tar.gz

Processing Data

cd process
python cifar_process.py
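
process/cifar_process.py performs the preprocessing. For reference, the downloaded archive unpacks into Python pickles with the standard CIFAR-100 layout; the snippet below is only a minimal sketch of reading those pickles, not the repository's own preprocessing code (the load_split helper is an illustrative assumption).

import os
import pickle
import numpy as np

ROOT = '(user enter path)'  # same ROOT as in configs/path.py

def load_split(split):
    """Read one split ('train' or 'test') of the CIFAR-100 python version."""
    with open(os.path.join(ROOT, 'cifar-100-python', split), 'rb') as f:
        batch = pickle.load(f, encoding='bytes')
    # Each row of b'data' is a 3072-byte image (3 x 32 x 32, channel-first uint8).
    images = batch[b'data'].reshape(-1, 3, 32, 32).transpose(0, 2, 3, 1)
    labels = np.array(batch[b'fine_labels'])
    return images, labels

train_images, train_labels = load_split('train')  # shapes (50000, 32, 32, 3), (50000,)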

Training Procedure

  • This repository provides experiment code for CIFAR-100 (cifar_exps/); a usage sketch follows this list.
  1. Metric learning model (cifar_exps/metric/)
    • Run train_model in main.py to train the model with a specific parameter setting.
    • Run integrate_results_and_preprocess in main.py to integrate the results and preprocess them before running 'ours'.
  2. Our method proposed in the paper (cifar_exps/ours/)
    • Run train_model in main.py to train the model with a specific parameter setting.
    • Run integrate_results in main.py to integrate the results.
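
A minimal usage sketch for the metric learning step, assuming the functions named above can be imported from cifar_exps/metric/main.py; the README does not document their signatures, so calling them without arguments (and any required parameters) is an assumption.

# Hypothetical driver; run from inside cifar_exps/metric/.
# train_model and integrate_results_and_preprocess are named in this README,
# but calling them without arguments is an assumption.
from main import train_model, integrate_results_and_preprocess

train_model()                       # train the metric learning model for one parameter setting
integrate_results_and_preprocess()  # merge results and preprocess before running 'ours'

# The step in cifar_exps/ours/ is analogous: import train_model and
# integrate_results from its main.py and call them in the same order.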

Evaluation

  • The evaluation code is in utils/evaluation.py.
  • The hash table structure constructed by our method is evaluated with three different metrics (NMI, precision@k, SUF); a sketch of the first two is given below.
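
SUF is defined in the paper; NMI and precision@k follow their standard definitions. The snippet below is an illustrative, self-contained sketch of those two metrics with hypothetical array and function names, not the contents of utils/evaluation.py.

import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def nmi(bucket_assignments, class_labels):
    """NMI between hash-bucket assignments and ground-truth class labels."""
    return normalized_mutual_info_score(class_labels, bucket_assignments)

def precision_at_k(embeddings, labels, k=4):
    """Mean fraction of the k nearest neighbours sharing the query's label."""
    # Pairwise Euclidean distances; exclude self-matches on the diagonal.
    d = np.linalg.norm(embeddings[:, None, :] - embeddings[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    knn = np.argsort(d, axis=1)[:, :k]       # indices of the k nearest neighbours
    hits = labels[knn] == labels[:, None]    # label agreement per neighbour
    return hits.mean()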

License

MIT License
