
A Multi-Hypothesis Approach to Color Constancy

Daniel Hernandez-Juarez, Sarah Parisot, Benjamin Busam, Ales Leonardis, Gregory Slabaugh and Steven McDonagh

CVPR, 2020

paper / poster / code / supplement / video / blog post


Contemporary approaches frame the color constancy problem as learning camera-specific illuminant mappings. While high accuracy can be achieved on camera-specific data, these models depend on camera spectral sensitivity and typically exhibit poor generalisation to new devices. Additionally, regression methods produce point estimates that do not explicitly account for potential ambiguities among plausible illuminant solutions, due to the ill-posed nature of the problem. We propose a Bayesian framework that naturally handles color constancy ambiguity via a multi-hypothesis strategy. Firstly, we select a set of candidate scene illuminants in a data-driven fashion and apply them to a target image to generate a set of corrected images. Secondly, we estimate, for each corrected image, the likelihood of the light source being achromatic using a camera-agnostic CNN. Finally, our method explicitly learns a final illumination estimate from the generated posterior probability distribution. Our likelihood estimator learns to answer a camera-agnostic question and thus enables effective multi-camera training by disentangling illuminant estimation from the supervised learning task. We extensively evaluate our proposed approach and additionally set a benchmark for novel sensor generalisation without re-training. Our method provides state-of-the-art accuracy on multiple public datasets, with up to an 11% improvement in median angular error, while maintaining real-time execution.
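The multi-hypothesis pipeline above (candidate illuminants → corrected images → achromaticity likelihoods → estimate from the posterior) can be sketched as follows. This is a simplified illustration, not the repository's implementation: `likelihood_fn` stands in for the paper's camera-agnostic CNN, and the final estimate here is a plain posterior-weighted mean, whereas the paper learns the final estimate from the posterior.

```python
import numpy as np

def estimate_illuminant(image, candidates, likelihood_fn):
    """Multi-hypothesis illuminant estimation (simplified sketch).

    image:         HxWx3 linear RGB image, float
    candidates:    Nx3 array of candidate illuminants (unit-normalised RGB)
    likelihood_fn: scores how achromatic a corrected image looks; in the
                   paper this is a learned camera-agnostic CNN
    """
    scores = []
    for ill in candidates:
        # von Kries style correction: divide out the hypothesised illuminant
        corrected = image / ill[None, None, :]
        scores.append(likelihood_fn(corrected))
    scores = np.asarray(scores, dtype=np.float64)

    # Softmax over hypotheses yields a posterior over candidate illuminants
    posterior = np.exp(scores - scores.max())
    posterior /= posterior.sum()

    # Posterior-weighted mean of the candidates, returned unit-normalised
    estimate = (posterior[:, None] * candidates).sum(axis=0)
    return estimate / np.linalg.norm(estimate)
```

For example, with a toy `likelihood_fn` that rewards corrected images whose mean channel values are equal (a gray-world style score), the posterior concentrates on the candidate closest to the true scene illuminant.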

A Multi-Hypothesis Approach to Color Constancy Video

Required hardware

We tested this on an NVIDIA Tesla V100 with 32 GB of memory. You can reduce the batch size in the JSON file of each experiment, but results may differ.

Dataset preprocessing

To make our work easier to reproduce, we provide scripts in the "cc_data" folder. Please check the script in each folder for instructions on what to download, then run the script to preprocess the dataset.

Install required packages

You can use the included "Dockerfile" to make sure all required packages are installed. Alternatively, we provide a requirements.txt for installing them with pip (pip install -r requirements.txt).

Reproducing paper experiments

(Table index matches arXiv paper version)

To run the paper experiments, use "bash ./experiments/":

| Table | Script |
| --- | --- |
| Table 1: Ours | |
| Table 1: Ours (pretrained) | |
| Table 2: Ours | |
| Table 2: Ours (pretrained) | |
| Table 3: OMPD: FFCC | |
| Table 3: MDT: FFCC | |
| Table 3: OMPD: Ours (pretrained) | |
| Table 3: MDT: Ours (pretrained) | |
| Table 4: Ours | |
| Table 4: Ours (pretrained) | |
| Table 7: all rows | |
| Table 8: Ours | |
| Table 8: all rows | |
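These experiments are evaluated with the angular error between estimated and ground-truth illuminants (the paper reports the median). As a reference, the standard recovery angular error metric can be computed as below; this is our own minimal sketch, not code from this repository.

```python
import numpy as np

def angular_error(est, gt):
    """Recovery angular error in degrees between two RGB illuminant vectors.

    The metric is scale-invariant: only the direction of each
    illuminant vector matters, so est and gt need not be normalised.
    """
    est = np.asarray(est, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    cos = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    # Clip guards against tiny floating-point excursions outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

For instance, two proportional illuminants give 0 degrees, while orthogonal ones give 90 degrees.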

If you want to run other experiments, here is how to use the cross-validation, hold-out, and inference scripts:

Cross-validation training

python3 EXPERIMENT.json DATASET.txt --outputfolder /PATH -gpu GPU_ID

Note that if you don't set -gpu, the CPU will be used.

Hold-out training


Inference for a single file list


Inference for a dataset (all folds)

python3 EXPERIMENT.json DATASET.txt ./PATH_CHECKPOINT --outputfolder /PATH -gpu GPU_ID
