
# Image Classifier


An overview of our approaches to tackling the image classification task on Tiny ImageNet.

## Models

This table contains the test accuracy of noteworthy training runs.

| Name | Test acc. [%] | Steps | Notes |
|------|---------------|-------|-------|
| `resnet003` | 57 [1] | ? | Configuration from here |
| `resnet_t01` | 49.11 | 131.3k | ResNet baseline (rewritten code) |
| `resnet_t34a_adam` | 53.53 | 48.4k | Deeper ResNet with the Adam optimizer |
| `resnet_crowdai_dr0.8` | 55.81 | 60.92k | Aggressive dropout |

[1] The measurement was taken on only part of the validation dataset (a 1k batch).

## Adversarial Logit Pairing

The code in the TF models GitHub repo contains checkpoints and training procedures for Tiny ImageNet classifiers. The paper can be found here.

Loading the model weights and fine-tuning them seems to be a reasonable approach.
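
A minimal TF 1.x sketch of what that could look like; it assumes the resnet_v2_50 graph (and a `loss` tensor) has already been built with the repo's model code, and the checkpoint path is a placeholder. Restricting the update to the logits/postnorm scopes mirrors the `--trainable_scopes` flag used below.

```python
import tensorflow as tf

# Assumes the resnet_v2_50 graph and a `loss` tensor were already built
# with the repo's model code.
train_vars = tf.get_collection(
    tf.GraphKeys.TRAINABLE_VARIABLES, scope="resnet_v2_50/logits")
train_vars += tf.get_collection(
    tf.GraphKeys.TRAINABLE_VARIABLES, scope="resnet_v2_50/postnorm")

saver = tf.train.Saver()  # restores every variable stored in the checkpoint
with tf.Session() as sess:
    saver.restore(sess, "./alp_checkpoints/model.ckpt")  # placeholder path
    # Fine-tune by updating only the selected scopes, e.g.:
    # train_op = tf.train.AdamOptimizer(1e-4).minimize(loss, var_list=train_vars)
```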

### Shell commands

The code is not Python 3.6 compatible; Python 2.7 works.

#### Training from scratch

```sh
python train.py \
  --model_name="resnet_v2_50" \
  --hparams="train_adv_method=clean" \
  --dataset="tiny_imagenet" \
  --dataset_image_size=64 \
  --output_dir="./tmp/adv_train" \
  --tiny_imagenet_data_dir="tiny-imagenet-tfrecord"
```

#### Evaluation

```sh
python eval.py \
  --train_dir=./tmp/adv_train \
  --trainable_scopes="resnet_v2_50/logits,resnet_v2_50/postnorm" \
  --dataset=tiny_imagenet \
  --dataset_image_size=64 \
  --adv_method=clean \
  --hparams="eval_batch_size=50" \
  --eval_once=True \
  --tiny_imagenet_data_dir="tiny-imagenet-tfrecord"
```

## Related Work

## ResNet with Trainable VQ-Layers

We have conducted several experiments with trainable vector quantization (VQ) layers added to the ResNet with ALP weights. Trainable VQ layers learn their embedding space by gradient descent on the loss surface induced by three components: the alpha, beta, and coulomb losses.

Intuitively, the three losses have the following effects (a sketch of plausible forms follows the list):

- alpha: moves embedding space vectors closer to the inputs that were projected onto them
- beta: moves all embedding space vectors towards the mass of the inputs
- coulomb: discourages embedding space vectors from being close to each other
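
A minimal NumPy sketch of plausible forms for the three terms; the exact definitions and the `vq_losses` helper are illustrative assumptions, not the project's code.

```python
import numpy as np

def vq_losses(inputs, emb, eps=1e-8):
    """inputs: (batch, d) activations; emb: (k, d) embedding vectors."""
    # Distances between every input and every embedding vector, shape (batch, k).
    dists = np.linalg.norm(inputs[:, None, :] - emb[None, :, :], axis=-1)

    # alpha: each input pulls its nearest embedding vector closer.
    alpha_loss = np.mean(np.min(dists, axis=1) ** 2)

    # beta: all embedding vectors are pulled towards the mass (mean) of the inputs.
    beta_loss = np.mean(np.linalg.norm(emb - inputs.mean(axis=0), axis=-1) ** 2)

    # coulomb: small pairwise distances between embedding vectors are penalized,
    # pushing them apart like charges of equal sign.
    pairwise = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    off_diag = pairwise[~np.eye(len(emb), dtype=bool)]
    coulomb_loss = np.mean(1.0 / (off_diag + eps))

    return alpha_loss, beta_loss, coulomb_loss
```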

The three loss weights were tuned with two goals: increasing the accuracy on validation data and increasing the minimal distance between each embedding vector and its closest neighbor (which we take as a proxy for the robustness of the model). Both measures (the accuracy scalar and a histogram of minimum distances) were logged to TensorBoard. Eventually, weights with the rough magnitudes alpha=1e-1, beta=2e-4, gamma=5e-4 (gamma weighing the coulomb loss) seemed appropriate. These values were found by looking at the effect that each part of the loss has on the total loss term; that way, alpha could be set to contribute a share of ~50% of the total, with beta and gamma contributing ~25% each.
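
Continuing the sketch above, a hypothetical way to combine the terms and check each one's share of the total loss (toy data; shapes and values are assumptions):

```python
rng = np.random.RandomState(0)
inputs, emb = rng.randn(256, 64), rng.randn(128, 64)  # toy activations/codebook

alpha, beta, gamma = 1e-1, 2e-4, 5e-4  # rough magnitudes from above
a, b, c = vq_losses(inputs, emb)
total = alpha * a + beta * b + gamma * c
# Shares of the total loss; the tuning aimed at roughly 50% / 25% / 25%.
shares = [alpha * a / total, beta * b / total, gamma * c / total]
```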

Two noteworthy submissions are `resnet-base-newcoulxn128x5em4x2repl` and `resnet-base-parallel3x16x64x128`.