Image Recoloring Based on Object Color Distributions

Mahmoud Afifi¹, Brian Price², Scott Cohen², and Michael S. Brown¹

¹York University   ²Adobe Research

Project page

[Main figure]


We present a method to perform automatic image recoloring based on the distribution of colors associated with objects present in an image. For example, when recoloring an image containing a sky object, our method incorporates the observation that objects of class 'sky' have a color distribution with three dominant modes for blue (daytime), yellow/red (dusk/dawn), and dark (nighttime). Our work leverages recent deep-learning methods that can perform reasonably accurate object-level segmentation. By using the images in datasets used to train deep-learning object segmentation methods, we are able to model the color distribution of each object class in the dataset. Given a new input image and its associated semantic segmentation (i.e., object mask), we perform color transfer to map the input image color histogram to a set of target color histograms that were constructed based on the learned color distribution of the objects in the image. We show that our framework is able to produce compelling color variations that are often more interesting and unique than results produced by existing methods.
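The per-object color transfer idea can be illustrated with a minimal sketch. This is NOT the paper's pipeline (which maps the image histogram to targets built from learned object color distributions); it only shows the basic mechanism of recoloring a masked region by histogram matching, using the Image Processing Toolbox function imhistmatch and built-in MATLAB sample images. The mask here is a toy threshold, not a semantic segmentation.

```matlab
% Minimal sketch (not the authors' method): recolor one masked "object"
% by matching its per-channel histograms to a target palette image.
I      = im2double(imread('peppers.png'));   % stand-in input image
target = im2double(imread('autumn.tif'));    % stand-in target palette image
mask   = I(:,:,2) > 0.5;                     % toy mask (bright-green pixels)

out = I;
for c = 1:3                                  % match each channel inside the mask
    ch       = I(:,:,c);
    ch(mask) = imhistmatch(ch(mask), target(:,:,c));
    out(:,:,c) = ch;
end
imshow(out);
```

In the paper's framework, the mask comes from semantic segmentation and the target histograms come from the learned object color distributions rather than from a single reference image.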

Quick start

View Image recoloring without a target image on File Exchange

  1. Run install_p1
  2. Run install_p2
  3. Go to the demo directory and copy your input images to the input_images directory.
  4. Run demo_recoloring.
  5. The recolored images will be written to the recolored_images directory, and the generated semantic masks to the output_masks directory.
  6. Run demo_GUI for our interactive GUI version.
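The Quick start steps above can be entered at the MATLAB prompt as follows (directory and script names as given in this README; this assumes you start from the repository root):

```matlab
% Quick start, from the repository root:
install_p1
install_p2
cd demo            % copy your images into input_images first
demo_recoloring    % results -> recolored_images, masks -> output_masks
demo_GUI           % optional: interactive GUI version
```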

Manual installation

  1. Install RefineNet for semantic segmentation.
  2. Download the trained model for the ADE20k dataset. In our experiments, we used the ResNet-152 model.
  3. Create a directory and name it SS_CNN. This directory should contain the RefineNet directory after installing RefineNet and MatConvNet (a prerequisite for RefineNet). For example, the README file of RefineNet should be located at the following path: SS_CNN/RefineNet/
  4. Use the following MATLAB code to add our sub-directories to the path:
       current = pwd;
       addpath([current '/colour-transfer-master']);
       addpath([current '/cp']);
       addpath([current '/emd']);
       addpath([current '/getMask']);
       addpath([current '/recoloring']);
       addpath([current '/general']);
  5. Compile the MEX files for the Earth Mover's Distance (EMD) code located in the emd directory. Use the following MATLAB code:
        mex EMD1.cpp
        mex EMD2.cpp
        mex EMD3.cpp

Be sure that MinGW is selected for compiling C++ MEX files. To change the compiler, use the following MATLAB command:

        mex -setup C++
  6. Download the Scene Parsing dataset (we only use the training set, which includes the training images/semantic masks). The dataset should be located at the following path: ../ADEChallengeData2016 (assuming you are in the root directory of our source code). For example, you should be able to read the first training image ADE_train_00000001.jpg and its semantic mask with the following MATLAB code:
        I = imread(fullfile('..','ADEChallengeData2016','images','training','ADE_train_00000001.jpg'));
        M = imread(fullfile('..','ADEChallengeData2016','annotations','training','ADE_train_00000001.png'));
  7. Download our pre-computed data, which includes the distribution of object color distributions (DoD), from here (also available here). Make sure the DoD data is located at the following path: ../data/DoD_data (assuming you are in the root directory of our source code). For example, you should be able to load the cluster data from that directory.
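A hypothetical sketch for loading the cluster data is shown below. The actual .mat file names inside DoD_data are not stated in this README and may differ; the dir/load pattern is the assumption here, not the repository's API.

```matlab
% Hypothetical example -- actual DoD file names may differ.
dod_dir = fullfile('..', 'data', 'DoD_data');
files   = dir(fullfile(dod_dir, '*.mat'));   % list whatever data files exist
S = load(fullfile(dod_dir, files(1).name));  % load the first cluster's data
```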


Try our GUI version, which includes the following features:

  1. Semantic mask adjustment: You can adjust the semantic mask in an interactive way (semi-automated and manual adjustments are provided).
  2. Selecting primary object: You can select the primary object to get different results.

To test it, run demo_GUI from the demo directory.


If you use this code, please cite our paper:

Mahmoud Afifi, Brian Price, Scott Cohen, and Michael S. Brown, Image Recoloring Based on Object Color Distributions, Eurographics 2019 - Short Papers, 2019

@inproceedings {afifi2019imageRecoloring,
booktitle = {Eurographics 2019 - Short Papers},
title = {{Image Recoloring Based on Object Color Distributions}},
author = {Afifi, Mahmoud and Price, Brian and Cohen, Scott and Brown, Michael S.},
year = {2019},
publisher = {The Eurographics Association},
ISSN = {1017-4656},
DOI = {10.2312/egs.20191008}
}

Related Research Projects

  • sRGB Image White Balancing:
  • Raw Image White Balancing:
    • APAP Bias Correction: A locally adaptive bias correction technique for illuminant estimation (JOSA A 2019).
    • SIIE: A sensor-independent deep learning framework for illumination estimation (BMVC 2019).
    • C5: A self-calibration method for cross-camera illuminant estimation (arXiv 2020).
  • Image Enhancement:
    • CIE XYZ Net: Image linearization for low-level computer vision tasks; e.g., denoising, deblurring, and image enhancement (arXiv 2020).
    • Exposure Correction: A coarse-to-fine deep learning model with adversarial training to correct badly-exposed photographs (CVPR 2021).
  • Image Manipulation:
    • MPB: Image blending using a two-stage Poisson blending (CVM 2016).
    • Image Relighting: Relighting using a uniformly-lit white-balanced version of input images (Runner-Up Award overall tracks of AIM 2020 challenge for image relighting, ECCV Workshops 2020).
    • HistoGAN: Controlling colors of GAN-generated images based on features derived directly from color histograms (CVPR 2021).

