NIMA: Neural IMage Assessment
The model was trained on the AVA (Aesthetic Visual Analysis) dataset, which contains roughly 255,500 images. You can get it from here. Note: the dataset may contain some corrupted images; remove them before you start training.
I split the dataset into 229,981 images for training, 12,691 images for validation, and 12,818 images for testing.
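A split like the one above can be reproduced with a seeded shuffle. This is only a sketch: the function name and seed are my own choices, not from this repo.

```python
import random

def split_indices(n, n_val=12691, n_test=12818, seed=42):
    """Shuffle image indices and carve out validation and test sets.
    Split sizes match the ones quoted above; the seed is arbitrary."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    val = idx[:n_val]
    test = idx[n_val:n_val + n_test]
    train = idx[n_val + n_test:]
    return train, val, test

train, val, test = split_indices(255490)  # 229,981 / 12,691 / 12,818
```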
I used a VGG16 pretrained on ImageNet as the base network of the model, which achieved an EMD loss of ~0.075 on the 12,691 validation images. I haven't tried the other two base networks from the paper (MobileNet and Inception-v2) yet. # TODO
The learning rate settings differ from the original paper: I couldn't get the model to converge with momentum SGD using the paper's learning rates of 3e-7 for the conv base and 3e-6 for the dense block. I also didn't do much hyper-parameter tuning, so you could probably get better results. All other settings are directly mirrored from the paper.
The code currently supports Python 3 only.
Run `python main.py` to start training. One epoch with `--batch_size=128` takes roughly 1 hour on a Titan Xp GPU. For evaluation, refer to
I found https://lera.ai/ a very handy tool for monitoring PyTorch training in real time; check it out for usage instructions. Remember to `pip install lera` first if you are inclined to use it.
Annotation CSV Files
- Below are the predicted mean scores for some images from the validation set. The ground truth is shown in parentheses.
- Also some failure cases...
- The predicted aesthetic ratings from training on the AVA dataset are sensitive to contrast adjustments. The images below, read left to right in row-major order, have progressively sharper contrast; the upper rightmost image is the original input.
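The mean scores quoted above are the expectation of the predicted 10-bin distribution, i.e. the sum of each score (1 through 10) weighted by its predicted probability. A minimal helper, assuming the model outputs a normalized distribution over bins 1..10:

```python
import torch

def mean_score(dist):
    """Expected score of a 10-bin distribution: sum_i i * p_i, i = 1..10."""
    scores = torch.arange(1, 11, dtype=dist.dtype)
    return (dist * scores).sum(dim=-1)
```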
- PyTorch 0.4.0+
- pandas (for reading the annotation CSV files)
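For the pandas dependency above, loading an AVA-style annotation file and turning the per-score rating counts into normalized target distributions might look like this. The column layout is an assumption (space-separated, with the ten rating counts for scores 1..10 in columns 2..11), not something this repo guarantees:

```python
import pandas as pd

def load_score_distributions(path):
    """Read an AVA-style space-separated annotation file and normalize
    the ten rating-count columns (assumed to be columns 2..11) into
    per-image score distributions."""
    df = pd.read_csv(path, sep=" ", header=None)
    counts = df.iloc[:, 2:12].to_numpy(dtype=float)
    return counts / counts.sum(axis=1, keepdims=True)
```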