
Class Project Leaderboard


Instructions

Use this wiki to publicize your results on the Dogs vs. Cats class project.

Every time you get a better result on the challenge, you can insert an entry into the list below (at the appropriate place, please, so that the list stays sorted by test error rate). Make sure to include a link to a blog post detailing how you achieved that result. The format for entries is as follows:

<test error rate> (<train error rate>, <valid error rate>): <short description> (<link to blog post>)
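For example, a purely hypothetical entry (the numbers and description below are invented only to illustrate the format) would read:

0.1234 (0.0567, 0.1189) : 128x128 inputs, 3 conv/pooling + 2 fully-connected, ReLU everywhere (blog post)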

Leaderboard

  • 0.0236 (0.00005, 0.0292) : 260x260 inputs, 6 conv/pooling with more capacity + 2 fully-connected with dropout on some layers. Random transformations aggregated with a new multi-view technique. (blog post 16 March)
  • 0.0451 (0.0187, 0.0447) : 260x260 inputs, 6 conv layers with pooling + 2 fully connected layers with dropout (0.5) and weight decay (1e-3). No data augmentation. (blog post)
  • 0.0556 (N/A, N/A) : 8-model ensemble combined with logistic regression, models trained with bagging, 6 conv layers + 2 dense layers, with flipping and some small rotations as data augmentation (blog post)
  • 0.0599 (0.0675 [no averaging], 0.0688 [no averaging]) : 260x260 inputs, 6 conv/pooling with more feature maps + 2 fully-connected with dropout. Flips and small rotations during training. Arithmetic average of the predictions at test time; a minimal sketch of this averaging appears after the list. (blog post 27 February)
  • 0.0624 (0.0269, 0.0612) : 514x514 inputs, 8 conv/pooling + 2 fully-connected. No augmentation, no ensembles. Could potentially train longer and yield better results. (blog post 17 March)
  • 0.0752 (0.0057, 0.0792) : 94x94 inputs, 6 conv/pooling + 2 fully-connected. Data augmentation. (blog post 27 March)
  • 0.0752 (0.01415, 0.074) : 260x260 inputs, 6 conv/pooling with more feature maps + 2 fully-connected with dropout. No data augmentation. (blog post 26 February)
  • 0.1072 (0.1104, 0.1020) : 5 convolutions + 2 fully-connected, ReLU (blog post)
  • 0.112 (0.0598, 0.1068) : 260x260 inputs, 6 conv/pooling + 2 fully-connected. No data augmentation, no dropout. (blog post 10 February)
  • 0.118 (0.0892, 0.1288) : 224x224 inputs, 4 conv/pooling + 3 fully-connected + softmax, ReLU everywhere, L2-norm (blog post)
  • 0.1212 (0.0144, 0.1068) : 112x112 inputs, 4 conv/pool + 2 fully-connected, ReLU everywhere, Nesterov momentum (blog post 13 March)
  • 0.1280 (0.0946, 0.1064) : 124x124 inputs, 3 conv/pooling + 3 fully-connected, ReLU everywhere, data augmentation (blog post).
  • 0.1292 (0.144, 0.1209) : 221x221 inputs, 5 conv/pooling + 2 fully-connected, ReLU everywhere (blog post).
  • 0.1532 (0.1318, 0.1312) : 124x124 inputs, 3 conv/pooling + 3 fully-connected, ReLU everywhere (blog post).
  • 0.1532 (0.10885, 0.1684) : 4 convolutions + 2 fully-connected + softmax, AdaDelta, hyperparameter tuning (blog post)
  • 0.1536 (0.1485, 0.1712) : 4 convolutions + 2 fully-connected, ReLU everywhere (blog post)
  • 0.1824 (0.1451, 0.1724) : 5 convolutions + 3 fully-connected, ReLU everywhere (blog post)
  • 0.1835 (0.1088, 0.1831) : 5 convolutions + softmax, ReLU everywhere (blog post)
  • 0.1992 (0.1695, 0.1828) : 5 convolutions + 3 fully-connected, ReLU everywhere (blog post)
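
Several entries above average the network's predictions over flipped and slightly rotated copies of each test image. Below is a minimal sketch of that arithmetic-averaging scheme in plain NumPy/SciPy; it is an illustration only, not code from any entry, and `predict_fn` and the ±10-degree angles are assumptions made for the example.

```python
import numpy as np
from scipy.ndimage import rotate

def augmented_views(image):
    """Simple test-time views of one H x W x C image: the original,
    its horizontal flip, and two small rotations. The +/-10-degree
    angles are illustrative, not taken from any entry."""
    views = [image, image[:, ::-1]]  # original + horizontal flip
    for angle in (-10, 10):
        # reshape=False keeps the rotated image at its original shape
        views.append(rotate(image, angle, reshape=False, mode="nearest"))
    return views

def predict_with_averaging(predict_fn, images):
    """Arithmetic mean of predicted class probabilities over all views.

    `predict_fn` is a hypothetical stand-in for the trained network:
    any callable mapping a batch of images (N, H, W, C) to class
    probabilities (N, K).
    """
    averaged = []
    for image in images:
        views = np.stack(augmented_views(image))
        averaged.append(predict_fn(views).mean(axis=0))
    return np.stack(averaged)
```

Note that the 0.0599 entry reports its train and validation errors without this averaging, which is why they are labeled [no averaging] above.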