ahangchen/keras-dogs

A baseline for the Baidu dog classification competition.
Fine-grained Dog Classification Competition held by Baidu

author: cweihang

Star this repository if you find it helpful. Thank you!

Language: English/简体中文

About

This project is a baseline for a dog classification competition held by Baidu. Competition URL: http://js.baidu.com/

Framework

  • Keras

Hardware

  • GeForce GTX 1060 6 GB
  • Intel® Core™ i7-6700 CPU
  • 8 GB RAM

Model

Structure of Xception

Implemented in Keras

  • Remove the final classification dense layer of Xception to obtain the deep feature
  • Feed in pairs of images whose labels may be the same or different
  • Train the model with a categorical loss on each image's class label
  • Train the model with a binary loss indicating whether the two images belong to the same class
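The two-branch, multi-loss setup above can be sketched in Keras as follows. This is a minimal illustration, not the repo's exact code: `NUM_CLASSES`, the input size, the random (non-pretrained) weights, and the concatenation-based binary head are all assumptions.

```python
import numpy as np
from tensorflow.keras import Input, Model
from tensorflow.keras.applications import Xception
from tensorflow.keras.layers import Concatenate, Dense

NUM_CLASSES = 100          # hypothetical; set to the competition's class count
INPUT_SHAPE = (71, 71, 3)  # small size for illustration; training would use larger inputs

# Shared Xception base with the classification top removed (include_top=False);
# global average pooling yields the deep feature vector. weights="imagenet"
# would load pretrained weights; None keeps this sketch download-free.
base = Xception(weights=None, include_top=False, pooling="avg",
                input_shape=INPUT_SHAPE)

img_a = Input(shape=INPUT_SHAPE)
img_b = Input(shape=INPUT_SHAPE)
feat_a = base(img_a)
feat_b = base(img_b)

# Categorical head on each branch's feature.
cls_a = Dense(NUM_CLASSES, activation="softmax", name="cls_a")(feat_a)
cls_b = Dense(NUM_CLASSES, activation="softmax", name="cls_b")(feat_b)

# Binary head on the pair: same class or not. Concatenating the two features
# is one simple choice; the repo's exact pairing layer may differ.
same = Dense(1, activation="sigmoid", name="same")(Concatenate()([feat_a, feat_b]))

model = Model(inputs=[img_a, img_b], outputs=[cls_a, cls_b, same])
model.compile(optimizer="adam",
              loss={"cls_a": "categorical_crossentropy",
                    "cls_b": "categorical_crossentropy",
                    "same": "binary_crossentropy"})
```

Each training step then optimizes the two categorical losses and the binary pair loss jointly.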

Data pre-process

The previous download link was provided by Baidu, and only Baidu has the right to distribute the data, so I can no longer provide it once Baidu cancels the shared link. You can refer to http://dianshi.baidu.com/gemstone/competitions/detail?raceId=17, which uses another newly published dataset to which you can apply the same algorithm. You can also apply this model to the Person ReID dataset Market1501, but you will need to do some extra processing on the data; refer to part of our CVPR paper code: rank-reid.

  • Place images of the same class into the same directory, so that ImageDataGenerator can be used.
  • Because the images are named in the format "typeid_randhash.jpg", I wrote img2keras.py to do the reorganization described above.
  • There are more details to handle. If you hit an error, check the Keras documentation first; if you still have questions, open an issue.
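The reorganization step can be sketched like this. The hypothetical `organize` helper below mirrors what img2keras.py is described as doing (it is not the repo's actual script): it parses the class id from each "typeid_randhash.jpg" filename and moves the file into a per-class subdirectory, the layout `ImageDataGenerator.flow_from_directory` expects.

```python
import os
import shutil

def organize(src_dir, dst_dir):
    """Move images named 'typeid_randhash.jpg' into per-class subdirectories."""
    for name in os.listdir(src_dir):
        if not name.endswith(".jpg"):
            continue  # skip anything that is not an image
        class_id = name.split("_", 1)[0]          # 'typeid' prefix before the first '_'
        class_dir = os.path.join(dst_dir, class_id)
        os.makedirs(class_dir, exist_ok=True)     # one folder per class
        shutil.move(os.path.join(src_dir, name), os.path.join(class_dir, name))
```

After running it, `dst_dir/<typeid>/` contains all images of that class, and the directory can be passed straight to `flow_from_directory`.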

Training

  • Use ImageDataGenerator for data augmentation
  • Because ImageDataGenerator shuffles the samples, it is hard to draw positive pairs for the binary objective, and scanning the whole dataset for positives is inefficient. Fortunately, each batch usually contains some samples of the same class, so we simply swap those samples within the batch to construct positive pairs.
  • Freeze the Xception CNN layers and train the fully connected layers for the categorical and binary objectives with Adam
  • Unfreeze the final two blocks of Xception (layer 105 to the end) and continue training with SGD
  • Remove data augmentation and retrain until convergence
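The in-batch swap for positive pairs can be sketched as follows. The `pair_batch` helper is hypothetical (a simplified stand-in for the repo's batch handling): each anchor defaults to a likely-negative partner, and whenever another sample with the same label exists in the batch, it is swapped in as a positive.

```python
import numpy as np

def pair_batch(x, y):
    """Build (anchor, partner) pairs from one shuffled batch.

    Partners default to the next sample in the batch (mostly negatives);
    when another sample shares the anchor's label, swap it in as a positive.
    Returns the anchors, their partners, and the binary same-class targets.
    """
    n = len(y)
    idx = np.roll(np.arange(n), -1)               # default partner: next sample
    for i in range(n):
        same = np.where((y == y[i]) & (np.arange(n) != i))[0]
        if len(same):
            idx[i] = same[0]                      # positive partner found in batch
    same_label = (y[idx] == y).astype(np.float32)  # 1.0 if pair shares a class
    return x, x[idx], same_label
```

This costs one pass over the batch instead of a search through the whole dataset, which is the efficiency point made above.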

Code

Result

  • InceptionV3, softmax loss: 0.2502
  • Xception, softmax loss: 0.2235
  • Xception, multi-loss: 0.211
  • Xception, multi-loss, fine-tuned without data augmentation: 0.2045

If you find a bug in this code, please open an issue or submit a pull request to fix it. Thanks!
