Transfer Learning for Anime Characters
Warning: This repository is quite large (approx. 100 MB), since it includes training and test images.
This repository is the continuation of Flag #15 - Image Recognition for Anime Characters.
In Flag #15, we can see that Transfer Learning works really well with 3 different anime characters: Nishikino Maki, Kotori Minami, and Ayase Eli.
In this experiment, we will try to push Transfer Learning further by using 3 anime characters who share a similar hair color: Nishikino Maki, Takimoto Hifumi, and Sakurauchi Riko.
This experiment has 3 main steps:
- Use `lbpcascade_animeface` to recognize the character face in each image
- Resize each image to 96 x 96 pixels
- Split images into training & test sets before creating the final model
The `raw` directory contains 36 images for each character (JPG & PNG formats). The first 30 images are used for training, while the last 6 are used for testing.
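The 30/6 split rule can be expressed as a small helper; a minimal sketch, assuming images are ordered sequentially within each character's directory (the function name is hypothetical, not part of the repository's scripts):

```python
def split_role(image_index, training_count=30):
    """Return the dataset split for a 0-based image index.

    The first `training_count` images go to training; the rest go to test.
    """
    return "training" if image_index < training_count else "test"

# With 36 images per character, indices 0-29 are training and 30-35 are test.
roles = [split_role(i) for i in range(36)]
```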
As an example, we got the following result after applying Step 1 (the `cropped` directory is shown on the right side):
`lbpcascade_animeface` can detect character faces with an accuracy of around 83%. Failed images are stored in `raw (unrecognized)` for future improvements.
Since we have 3 characters with 6 test images each, none of which are part of the training set, `resized_for_test` contains 18 images in total. Surprisingly, almost all characters are detected properly!
Update (Nov 13, 2017): See the `animeface-2009` section below, which pushes face detection accuracy to 93%.
- The following command is used to populate the `cropped` directory:
$ python bulk_convert.py raw/[character_name] cropped
- The following command is used to populate the `resized` directory:
$ python bulk_resize.py cropped/[character_name] resized
After running the steps above, you can decide how many images will be used in `resized_for_training` and how many images will be used in `resized_for_test`.
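One way to perform that split is to copy the first N resized images of each character into `resized_for_training` and the remainder into `resized_for_test`; a minimal sketch using only the standard library (this helper script is hypothetical, but the directory names and the 30/6 ratio mirror the setup described above):

```python
import shutil
from pathlib import Path

def split_dataset(resized_dir, out_dir, training_count=30):
    """Copy the first `training_count` images of each character into
    resized_for_training and the remainder into resized_for_test."""
    resized_dir, out_dir = Path(resized_dir), Path(out_dir)
    for character_dir in sorted(p for p in resized_dir.iterdir() if p.is_dir()):
        for i, image in enumerate(sorted(character_dir.iterdir())):
            split = "resized_for_training" if i < training_count else "resized_for_test"
            target = out_dir / split / character_dir.name
            target.mkdir(parents=True, exist_ok=True)
            shutil.copy2(image, target / image.name)
```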
- Re-train the Inception model using transfer learning:
$ bazel-bin/tensorflow/examples/image_retraining/retrain --image_dir ~/transfer-learning-anime/resized_for_training/
$ bazel build tensorflow/examples/image_retraining:label_image
- At this point, the model is ready to use. We can run the following command to get the classification result:
$ bazel-bin/tensorflow/examples/image_retraining/label_image --graph=/tmp/output_graph.pb --labels=/tmp/output_labels.txt --output_layer=final_result:0 --image=$HOME/transfer-learning-anime/resized_for_test/[character name]/[image name]
If everything works properly, you will get the classification result. See the TensorFlow documentation for more options.
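The classification result is printed as one line per label with a score; a small parser sketch, assuming the common `label (score = 0.98432)` output format of `label_image` (the exact format may vary between TensorFlow versions, and the scores below are just example data):

```python
import re

# Matches lines such as: "nishikino maki (score = 0.98432)"
SCORE_LINE = re.compile(r"^(?P<label>.+?) \(score = (?P<score>[0-9.]+)\)$")

def parse_scores(output):
    """Return {label: score} parsed from label_image's stdout."""
    scores = {}
    for line in output.splitlines():
        match = SCORE_LINE.match(line.strip())
        if match:
            scores[match.group("label")] = float(match.group("score"))
    return scores

example = """nishikino maki (score = 0.98432)
sakurauchi riko (score = 0.01311)
takimoto hifumi (score = 0.00257)"""
scores = parse_scores(example)
best = max(scores, key=scores.get)  # "nishikino maki"
```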
Optionally, a sample model can be downloaded by running the `download_model.sh` script inside the `models (example)` directory.
Initially, we run the experiment with 2 characters: Nishikino Maki and Takimoto Hifumi.
INFO:tensorflow:2017-11-10 08:50:36.151387: Step 3999: Train accuracy = 100.0%
INFO:tensorflow:2017-11-10 08:50:36.151592: Step 3999: Cross entropy = 0.002191
INFO:tensorflow:2017-11-10 08:50:36.210147: Step 3999: Validation accuracy = 100.0% (N=100)
INFO:tensorflow:Final test accuracy = 92.9% (N=14)
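Values such as the final test accuracy can be pulled out of these retrain logs programmatically; a minimal sketch assuming the `INFO:tensorflow:` log format shown above (the helper function is hypothetical):

```python
import re

LOG = ("INFO:tensorflow:2017-11-10 08:50:36.151387: Step 3999: Train accuracy = 100.0%\n"
       "INFO:tensorflow:2017-11-10 08:50:36.210147: Step 3999: Validation accuracy = 100.0% (N=100)\n"
       "INFO:tensorflow:Final test accuracy = 92.9% (N=14)")

def final_test_accuracy(log):
    """Extract the final test accuracy (percent) and sample count from retrain logs."""
    match = re.search(r"Final test accuracy = ([0-9.]+)% \(N=(\d+)\)", log)
    return (float(match.group(1)), int(match.group(2))) if match else None

accuracy, n = final_test_accuracy(LOG)  # (92.9, 14)
```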
The result is as follows:
From the result above, 10 out of 12 classifications have a confidence threshold > 0.95, while the lowest threshold is 0.63.
At this point, I decided to add Sakurauchi Riko, who is known for her similarity to Nishikino Maki.
INFO:tensorflow:2017-11-10 13:13:59.270717: Step 3999: Train accuracy = 100.0%
INFO:tensorflow:2017-11-10 13:13:59.270912: Step 3999: Cross entropy = 0.005526
INFO:tensorflow:2017-11-10 13:13:59.328139: Step 3999: Validation accuracy = 100.0% (N=100)
INFO:tensorflow:Final test accuracy = 80.0% (N=15)
With 3 similar characters, the result is as follows:
As you can see above, the similarity between Nishikino Maki and Sakurauchi Riko starts to lower the confidence level of the resulting model. Nevertheless, all classifications are still correct, with 4 out of 6 maintaining a threshold > 0.95.
Interestingly, the addition of the 3rd character increases the confidence level of several Takimoto Hifumi test cases (see the 1st and 4th results). Overall, this character can be easily differentiated from the other two.
From this experiment, it seems that the current bottleneck is located at Step 1 (face detection), which has an overall accuracy of around 83%.
nagadomi/animeface-2009 provides another method of face detection. 13 out of 21 previously unrecognized images are now recognized, as shown in the `cropped (unrecognized)` directory.
Currently found limitations: the script seems to require more memory and to run slower compared to `lbpcascade_animeface`.
Since this method gives better results in detecting anime character faces, and classification still works with almost the same result, the overall face detection accuracy is now around 93%.
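As a quick sanity check on that figure, a back-of-the-envelope computation, assuming 108 raw images in total (3 characters × 36 images) with 21 initially unrecognized, 13 of which animeface-2009 recovers:

```python
total_images = 3 * 36          # raw images across all characters (assumed)
initially_unrecognized = 21    # failed under lbpcascade_animeface
recovered = 13                 # additionally recognized by animeface-2009

still_unrecognized = initially_unrecognized - recovered   # 8 images remain
new_accuracy = (total_images - still_unrecognized) / total_images
# new_accuracy * 100 ≈ 92.6, i.e. around 93%
```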
The `lbpcascade_animeface` detector is created by nagadomi/lbpcascade_animeface.
Copyright for all images is owned by their respective creators.