
Towards Metamerism via Foveated Style Transfer

This repository contains the code to reproduce the metamers used in the paper (Deza, Jonnalagadda, Eckstein; ICLR 2019). Link to the paper and discussion on OpenReview: https://openreview.net/forum?id=BJzbG20cFQ

This code has been tested successfully on CUDA version 8.0 (Ubuntu 14.04 and 16.04) and CUDA version 10.0 (Ubuntu 18.04).

Our model is driven mainly by foveated feed-forward style transfer; see the paper for details.

What is a Metamer?

Metamers are a set of stimuli that are physically different but perceptually indistinguishable from each other. See below for an example.

[Animated GIF: Input vs. Metamer]

When maintaining center fixation on the orange dot, the two images that are flipped back and forth should be perceptually indistinguishable from each other even though they are physically different (the differences are strong in the periphery but not at the fovea).

Rendering Metamers by varying receptive field size

[Left: Reference vs. Synthesis metamers (V1). Right: Synthesis vs. Synthesis metamers (V2).]
Left: a metamer that is metameric to its reference image. The rate of growth of the receptive fields of the rendered metamer resembles the receptive field sizes of neurons in V1. Right: two images that are heavily distorted in the visual periphery; they are not metameric to the reference image, but are metameric to each other (perturbed with different noise samples). The rate of growth of these receptive fields corresponds to the sizes of V2 neurons, where it is hypothesized that the ventral stream is sensitive to texture.

As in our previous demo, the metameric effect will only work properly if one fixates on the orange dot at the center of the image. In the paper we provide more details on how we psychophysically tested this phenomenon, using an eye-tracker to control for center fixation, viewing distance, display time, and the visual angle of the stimuli. We tested our model on grayscale images, and have extended the model in this code release to color images.

Installation and prerequisites:

The code was developed with CUDA 8.0 on Ubuntu 16.04 and has been tested with both CUDA 8.0 and CUDA 10.1 on Ubuntu 18.04 (minor differences may arise between CUDA 8.0 and 10.1). You will need to install:

CUDA 10.1

cuDNN 7.5.1

Torch (Lua)
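
If Torch is not already set up, the standard installation steps from the upstream Torch distro are sketched below; these commands come from the Torch getting-started guide rather than from this repository, so check the upstream instructions for your system:

$ git clone https://github.com/torch/distro.git ~/torch --recursive
$ cd ~/torch; bash install-deps
$ ./install.sh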

Note: We are currently working on a PyTorch re-implementation of our metamer model. If you have one, please let us know so we can post a link to your repo here as well.

The full dataset, in both grayscale and color, is also available for future work; the images can be found in the Dataset/ folder.

To complete the installation, please run:

$ bash download_models_and_stimuli.sh

Example code:

Generate a V1 metamer for the 512x512 image 1_color.png with center fixation, where the rate of growth of the receptive fields is specified by s = 0.25. Note: the approximate rendering time for a metamer should be around one second.

$ th NeuroFovea.lua -image Dataset/1_color.png -scale 0.25 -refinement 1 -color 1

To create a V2 metamer, change the scale from 0.25 to 0.5. Scale is the ratio of a receptive field's size to its retinal eccentricity, so these values are only meaningful for a given stimulus size (here, 26 x 26 degrees of visual angle rendered at 512 x 512 pixels). To compute the reference image, set the reference flag to 1.
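
For example, the V2 analogue of the command above, plus a reference-image render (we assume here that the reference flag is spelled -reference, following the pattern of the other flags; that name is our guess, not confirmed by the repository):

$ th NeuroFovea.lua -image Dataset/1_color.png -scale 0.5 -refinement 1 -color 1
$ th NeuroFovea.lua -image Dataset/1_color.png -scale 0.5 -refinement 1 -color 1 -reference 1

To build intuition for what the scale parameter means at this viewing geometry, here is a minimal Lua sketch; the helper name rf_size_px and the example eccentricity are ours, not part of this repository:

-- Convert the scale parameter into an approximate pooling-region size
-- in pixels, assuming the paper's viewing geometry:
-- 26 degrees of visual angle rendered at 512 pixels.
local PX_PER_DEG = 512 / 26  -- ~19.7 pixels per degree

-- scale = receptive field size / eccentricity,
-- so size (degrees) = scale * eccentricity
local function rf_size_px(scale, eccentricity_deg)
  return scale * eccentricity_deg * PX_PER_DEG
end

print(rf_size_px(0.25, 10))  -- V1-like (s = 0.25): ~49 px at 10 deg
print(rf_size_px(0.5, 10))   -- V2-like (s = 0.5):  ~98 px at 10 deg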

Please read our paper to learn more about visual metamerism: https://openreview.net/forum?id=BJzbG20cFQ

We hope this code and our paper can help researchers, scientists and engineers improve the use and design of metamer models that have potentially exciting applications in both computer vision and visual neuroscience.

This code is free to use for research purposes; if you use or modify it in any way, please consider citing:

@inproceedings{
deza2018towards,
title={Towards Metamerism via Foveated Style Transfer},
author={Arturo Deza and Aditya Jonnalagadda and Miguel P. Eckstein},
booktitle={International Conference on Learning Representations},
year={2019},
url={https://openreview.net/forum?id=BJzbG20cFQ},
}

Other inquiries: arturo_deza@fas.harvard.edu
