
NIMA: Neural IMage Assessment

This is a PyTorch implementation of the paper NIMA: Neural IMage Assessment by Hossein Talebi and Peyman Milanfar. You can learn more from this post on the Google Research Blog.

Implementation Details

  • The model was trained on the AVA (Aesthetic Visual Analysis) dataset, which contains roughly 255,500 images. You can get it from here. Note: the dataset may contain some corrupted images; remove them before you start training.
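Since a handful of AVA files are truncated or unreadable, a quick pre-training sweep with Pillow can flag them. This is a sketch, not part of this repo; `find_corrupted` is a hypothetical helper and assumes the images are JPEGs:

```python
from pathlib import Path
from PIL import Image

def find_corrupted(image_dir):
    """Return paths of JPEGs that Pillow cannot fully decode."""
    bad = []
    for path in Path(image_dir).glob("*.jpg"):
        try:
            with Image.open(path) as im:
                im.verify()   # cheap structural check; invalidates the handle
            with Image.open(path) as im:
                im.load()     # reopen and force a full pixel decode
        except Exception:
            bad.append(path)
    return bad
```

Delete (or move aside) whatever this returns before kicking off training.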

  • I split the dataset into 229,981 images for training, 12,691 images for validation and 12,818 images for testing.
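That split can be reproduced as a shuffled slice over the image ids. A sketch (the actual seed and ordering used for this repo's split are not recorded here):

```python
import random

# Split sizes quoted above: 229,981 train / 12,691 val / 12,818 test
TRAIN, VAL, TEST = 229_981, 12_691, 12_818

def split_ava(image_ids, seed=0):
    """Shuffle AVA image ids deterministically and slice into splits."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)
    return (ids[:TRAIN],
            ids[TRAIN:TRAIN + VAL],
            ids[TRAIN + VAL:TRAIN + VAL + TEST])
```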

  • I used a VGG16 pretrained on ImageNet as the base network of the model, and got a ~0.075 EMD loss on the 12,691 validation images. I haven't tried the other two base networks from the paper (MobileNet and Inception-v2) yet. # TODO
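The EMD loss here is the paper's squared earth mover's distance (r = 2) between the predicted and ground-truth distributions over the ten score buckets. A minimal sketch:

```python
import torch

def emd_loss(p, q, r=2):
    """Earth mover's distance between batches of 10-bucket score
    distributions p and q, each of shape (batch, 10)."""
    cdf_p = torch.cumsum(p, dim=1)
    cdf_q = torch.cumsum(q, dim=1)
    # mean |CDF difference|^r over buckets, then the r-th root, per sample
    emd = torch.mean(torch.abs(cdf_p - cdf_q) ** r, dim=1) ** (1.0 / r)
    return emd.mean()  # average over the batch
```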

  • The learning-rate setting differs from the original paper: I can't seem to get the model to converge with momentum SGD using an lr of 3e-7 for the conv base and 3e-6 for the dense block. I also didn't do much hyper-parameter tuning, so you can probably get better results. All other settings mirror the paper.

  • The code currently supports Python 3 only.


  • Set --train=True and run the training script to start training. One epoch with --batch_size=128 takes roughly 1 hour on a Titan Xp GPU. For evaluation, see the evaluation script for usage.

  • lera is a very handy tool for monitoring PyTorch training in real time. Remember to run pip install lera first if you are inclined to use it.

Training Statistics

Training is done with early stopping on the validation loss; here I set patience=5. (Training loss curve figure.)
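The early stopping used here (patience=5 on the validation loss) amounts to a small counter. A hypothetical sketch; the repo's actual implementation may differ:

```python
class EarlyStopping:
    """Signal a stop once validation loss fails to improve for
    `patience` consecutive epochs."""

    def __init__(self, patience=5):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True to stop."""
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```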

Pretrained Model

Google Drive

Annotation CSV Files

Train / Validation / Test

Example Results

  • Below are the predicted mean scores for some images from the validation set. The ground truth is in parentheses.

  • Also some failure cases...

  • The predicted aesthetic ratings from training on the AVA dataset are sensitive to contrast adjustments. In the images below, contrast increases progressively from left to right in row-major order; the upper rightmost image is the original input.
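For reference, the reported mean score is just the expectation of the predicted 10-bucket distribution, Σ s·p_s for s = 1..10. A minimal sketch:

```python
import torch

def mean_score(dist):
    """Expected score of a (..., 10) distribution over ratings 1..10."""
    scores = torch.arange(1, 11, dtype=dist.dtype)
    return (dist * scores).sum(dim=-1)
```

A uniform distribution, for instance, yields the midpoint score 5.5.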


Requirements

  • PyTorch 0.4.0+
  • torchvision
  • numpy
  • Pillow
  • pandas (for reading the annotations csv file)