What is a Texform?

Input Texform

Texforms are images that preserve the coarse shape and texture information of objects, generated using a modified version of the texture synthesis algorithm of Freeman & Simoncelli, 2011.

Long et al. conducted behavioral experiments to select texforms that are unrecognizable at the basic level (Long, Yu & Konkle, 2018), thus enabling one to test whether a given visual process depends on explicit recognition or can rely on more primitive mid-level features.

However, the current implementation and computational complexity of the model require approximately 4-24 hours per object to generate these images -- a significant hurdle for experiments that require a large number of stimuli.

This repository provides code to generate texforms in minutes. The algorithm is implementationally equivalent to that of Long et al., 2018 (in terms of the first- and second-order image statistics that are preserved), but it is faster and can generate higher-resolution images.

Download and install dependent packages [Linux/macOS]

This code depends on the following modules:

Freeman and Simoncelli Metamer model

Steerable Pyramid Toolbox

On a Linux/macOS system, you can install these directly by running the following script:

bash dowload_dependencies.sh

Run the code demo to generate a sample texform:

fast_texform.m 

Texform Variations

The algorithm used to generate texforms has a number of parameters that yield texform variations which may be of theoretical interest (e.g., by preserving more or less of the spatial information, which generally renders the stimuli more or less recognizable).

These involve:

1. Simulating how far out in the periphery the object is placed (i.e., varying the point of fixation).
2. Changing the rate of growth of the receptive fields (i.e., the log-polar pooling windows).

Note that these variations have similar consequences, and are depicted below.

Varying the Point of Fixation

Figure: Center fixation | Side fixation | Out-of-image fixation

Varying the Rate of Growth of Receptive Field Size (Scaling Factor)

Figure: Low scaling factor (s=0.3) | Medium scaling factor (s=0.5) | High scaling factor (s=0.7)
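The effect of the scaling factor can be sketched numerically. In the Freeman & Simoncelli model, pooling-window size grows roughly linearly with eccentricity from the point of fixation, with the scaling factor s acting as the slope. The snippet below is an illustrative Python sketch of that linear-growth relationship, not the repository's MATLAB implementation; the exact window parameterization in the toolbox may differ.

```python
# Illustrative sketch (assumption): in the Freeman & Simoncelli metamer
# model, pooling-window diameter grows roughly linearly with eccentricity,
# with the scaling factor s as the slope: diameter ~ s * eccentricity.

def pooling_window_diameter(eccentricity_deg, s):
    """Approximate pooling-window diameter (in deg) at a given eccentricity."""
    return s * eccentricity_deg

# Scaling factors from the figure above: larger s -> larger windows at the
# same eccentricity -> more spatial pooling -> less recognizable texforms.
for s in (0.3, 0.5, 0.7):
    sizes = [pooling_window_diameter(e, s) for e in (5, 10, 20)]
    print(f"s={s}: window diameters at 5/10/20 deg = {sizes}")
```

This is why a higher scaling factor and a more peripheral placement have similar consequences: both increase the pooling-window size over the object.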

How is the Fast-texform method different from Long et al., 2018?

In the original method, stimuli were placed at a small size within a gray image, and the whole image was synthesized assuming a central point of fixation. The fast method instead places the original image fully in the display and synthesizes it based on a point of fixation that lies off the image. As a result, the texform is not only higher resolution, but the algorithm is also faster, because there are far fewer pooling regions overall. Computationally, however, it is exactly the same algorithm. The original and current methods are depicted below.
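The speedup can be sketched with a back-of-the-envelope count. If each pooling window spans roughly s times its eccentricity, then the number of windows needed to tile a radial span of eccentricities shrinks as the span moves away from fixation, since each window is larger there. The eccentricity ranges below are hypothetical values chosen for illustration; only the log-polar growth assumption comes from the model.

```python
import math

def radial_window_count(e_near, e_far, s):
    """Approximate number of log-polar pooling windows tiling the
    eccentricity range [e_near, e_far], assuming each window spans
    s * eccentricity (illustrative, not the toolbox's exact tiling)."""
    return math.ceil(math.log(e_far / e_near) / math.log(1 + s))

# Original method: image centered near fixation, content at ~1-20 deg
# (hypothetical range) -> many small windows near the fovea.
near_fixation = radial_window_count(1, 20, 0.5)

# Fast method: fixation off the image, content at ~20-40 deg
# (hypothetical range) -> a few large windows cover the whole image.
off_fixation = radial_window_count(20, 40, 0.5)

print(near_fixation, off_fixation)  # the off-fixation count is much smaller
```

Since synthesis cost grows with the number of pooling regions, moving fixation off the image reduces compute while leaving the per-window statistics (and hence the algorithm) unchanged.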

The code for the previous texform generation model, as used in Long, Yu & Konkle, 2018, is available here. A more detailed explanation of what a texform is can also be accessed here.

Citation

@inproceedings{
deza2019accelerated,
title={Accelerated Texforms: Alternative Methods for Generating Unrecognizable Object Images with Preserved Mid-Level Features},
author={Arturo Deza and Yi-Chia Chen and Bria Long and Talia Konkle},
booktitle={Cognitive Computational Neuroscience (CCN) 2019},
year={2019},
}

About

Code database for Fast Texform generation as proposed in the work of Deza, Chen, Long and Konkle (CCN 2019).
