
To help you use ISETBio and other ISET* repositories, we include many tutorials (both m-files and live scripts). This page recommends basic m-file tutorials to help you learn about the key ISETBio objects: Scene, Optics, Cones, and Eye Movements.

The tutorials listed on this page were selected to guide you in the beginning. Explore the Contents.m files in the tutorials folders for quick descriptions of what these and many other tutorials cover. For each topic, the tutorial named t_(topic)Introduction is a good place to start.

Scene

t_sceneIntroduction

ISETBio is structured around a few key objects. Conceptually, the first two are the scene (spectral radiance) and the optical image (spectral irradiance at the retina). The next objects are the cone mosaic, the bipolar layer, and then the retinal ganglion cell layer. This script illustrates the ISETBio scene and describes how to program in the ISETBio style. Using ISETBio objects properly will make your code and analysis much easier to understand and maintain. For the scene and oi representations there are six fundamental operations:

1. Create
2. Set Parameters
3. Get Parameters
4. Compute
5. Plot
6. Window
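As a minimal sketch of these operations (the scene type, parameter names, and plot type below are standard ISET* calls, but treat the exact values as illustrative):

```matlab
% Minimal sketch of the six operations for a scene.
scene = sceneCreate('macbeth d65');          % Create: Macbeth chart under D65
scene = sceneSet(scene, 'fov', 2);           % Set: field of view (degrees)
fov   = sceneGet(scene, 'fov');              % Get: read a parameter back
% Compute: for the scene/oi pair this is oiCompute (see the Optics section)
scenePlot(scene, 'luminance hline', [1 10]); % Plot: luminance along a line
ieAddObject(scene); sceneWindow;             % Window: open the interactive GUI
```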

This tutorial demonstrates how to set up a simple ISET scene, which describes a spectral radiance field. For this introductory tutorial we work with a simple planar radiance image, such as the image on a display surface. Going through this tutorial will introduce you to some of the get, set, and plot commands of the scene object.

t_sceneRGB2Radiance

This very short tutorial demonstrates how to create an ISET scene for a given RGB image using a specified display. The approach is to simulate displaying the image on a monitor and then converting the display spectral radiance into an ISET scene object. For a more in-depth look at displays, see the next tutorial.
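The core call, sketched with an illustrative image and display calibration:

```matlab
% Sketch: render an RGB image through a calibrated display model.
% The image name and display calibration ('LCD-Apple') are examples;
% substitute files available on your path.
d     = displayCreate('LCD-Apple');
scene = sceneFromFile('eagle.jpg', 'rgb', [], d);
ieAddObject(scene); sceneWindow;
```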

t_displayIntroduction

This script demonstrates how to create a display object. It illustrates how to set, get, and plot display parameters such as the gamma table, the white point, or the display primaries. Lastly, we use the display and a JPG image to create an ISET scene object. This ISET scene models the scene radiance emitted from the display for a specific image.
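A hedged sketch of these calls (parameter names follow the standard display get/set vocabulary; the calibration file is an example):

```matlab
% Create a display and inspect a few of its properties.
d  = displayCreate('LCD-Apple');
g  = displayGet(d, 'gamma table');   % digital value -> linear intensity
wp = displayGet(d, 'white point');   % XYZ of the display white
displayPlot(d, 'spd');               % spectral power of the primaries
displayPlot(d, 'gamma');             % the gamma curves
```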

Optics

The ISET "optical image" or OI describes the irradiance field at the retinal surface. It is computed from the scene (radiance), and the optics. The optics object is specified in a slot within the optical image structure.

A note on ISETCam v. ISETBio

ISETBio is a variation of an older toolbox called ISETCam (historically called ISET). While ISETBio models the human eye from scene radiance to cone absorptions, ISETCam models cameras and imaging systems from scene radiance to raw sensor data and beyond. We note this here because, although ISETBio and ISETCam differ slightly throughout, they diverge significantly when modeling the optics.

Some of the optical image tutorials were originally from ISETCam. As a result, you may see references to camera lens terminology when browsing through the scripts in t_optics. These ISETCam tutorials are still relevant because of the overlap between human optics and camera optics. Here we list only human optics tutorials.

t_humanLineSpreadOI

This short tutorial calculates the retinal spectral irradiance of a line stimulus, using the estimated human optics model of Marimont and Wandell. The next tutorial uses more recent wavefront aberration data.
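A sketch of the calculation, assuming the standard line-scene type and default human optics (the plot location is illustrative):

```matlab
% Retinal irradiance of a thin line through Marimont & Wandell human optics.
scene = sceneCreate('line d65');          % line stimulus with a D65 spectrum
scene = sceneSet(scene, 'fov', 0.5);      % half-degree field of view
oi    = oiCompute(oiCreate('human'), scene);
oiPlot(oi, 'irradiance hline', [1 40]);   % wavelength-dependent line spread
```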

t_wvfZernikeSet

This tutorial calculates the retinal irradiance of a scene using wavefront aberration data. As shown in the script, the wavefront starts off as its own object, but is eventually "pushed" into the optical image. We also demonstrate how to adjust Zernike coefficients after creating the wavefront object. The effects of changing defocus and adding astigmatism are visible in the retinal irradiance.
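In outline (the defocus value is an arbitrary illustration):

```matlab
% Build a wavefront, adjust a Zernike coefficient, and push it into an oi.
wvf = wvfCreate;                             % default wavefront object
wvf = wvfSet(wvf, 'zcoeffs', 1, 'defocus');  % 1 micron of defocus (illustrative)
wvf = wvfComputePSF(wvf);                    % PSF from the pupil function
oi  = wvf2oi(wvf);                           % "push" the wavefront into an oi
```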

t_wvfThibosModel

Next, we go into more detail on wavefront aberration data. This tutorial shows you how to load aberration data from Larry Thibos' statistical model and plots several PSFs for one random individual generated from the model. In a manner similar to the previous tutorial, wavefront objects created with the Thibos data can also be converted into an optical image and used to calculate the retinal irradiance from a scene.
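A hedged sketch of drawing one virtual observer (the pupil size is one of the model's measurement conditions; the plotting parameters are illustrative):

```matlab
% One random individual from the Thibos statistical model.
pupilMM = 4.5;                                 % measurement pupil diameter (mm)
[sampleMean, sampleCov] = wvfLoadThibosVirtualEyes(pupilMM);
zCoeffs = ieMvnrnd(sampleMean, sampleCov, 1);  % draw one set of coefficients
wvf = wvfCreate('zcoeffs', zCoeffs, 'measured pupil', pupilMM);
wvf = wvfComputePSF(wvf);
wvfPlot(wvf, '2d psf space', 'um', 550);       % PSF at 550 nm
```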

Cones

Next, we show how to calculate the cone excitations corresponding to any optical image. ISETBio has two types of cone mosaics: a rectangular mosaic and a hex mosaic. The former has cones laid out in a rectangular grid with a fixed density for each cone type. The latter has cones that change aperture size, overall density, and relative density with eccentricity. The hex mosaic is physiologically more accurate, but is computationally expensive to generate.

t_conesMosaicBasic

This short tutorial script demonstrates how to create a cone mosaic object and compute cone isomerizations across a set of small eye movements. You can see the results of this calculation in the coneMosaic window. A drop-down menu at the top of the window lets you switch between the mosaic (colored red, green, and blue for the cone types), the mean number of absorptions over the current integration time, and a movie of the absorptions over the generated fixational eye movements.
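In outline (assuming an oi computed as in the Optics section; the field of view and number of eye-movement frames are illustrative):

```matlab
% Rectangular mosaic, small eye movements, isomerizations.
cm = coneMosaic;          % default rectangular mosaic
cm.setSizeToFOV(0.5);     % half-degree field of view
cm.emGenSequence(50);     % 50 frames of fixational eye movements
cm.compute(oi);           % isomerizations, given an optical image
cm.window;                % mosaic, mean absorptions, absorption movie
```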

t_conesMosaicHex

This tutorial demonstrates how to generate and use a hexagonal mosaic with eccentricity-based cone spacing, including an S-cone free region and a desired S-cone spacing. Because the hex mosaic is significantly more complex than the rectangular mosaic, we introduce a large number of parameters to set and customize the mosaic. You can find short definitions for all parameters by running edit coneMosaicHex.m. Near the end, the tutorial shows how to compute the mosaic's isomerizations in response to a simple stimulus.
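A sketch with a few of those parameters (values are illustrative, not the tutorial's exact settings):

```matlab
% Hexagonal mosaic with eccentricity-based density and an S-cone free region.
cmHex = coneMosaicHex(7, ...                 % resampling factor
    'fovDegs', 0.35, ...                     % field of view (degrees)
    'eccBasedConeDensity', true, ...         % density varies with eccentricity
    'sConeMinDistanceFactor', 3.0, ...       % enforce a minimum S-cone spacing
    'sConeFreeRadiusMicrons', 45);           % foveal S-cone free zone
cmHex.visualizeGrid;                         % inspect the mosaic layout
cmHex.compute(oi);                           % isomerizations to a stimulus
```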

Eye Movements

ISETBio can generate fixational eye movements that can be applied when calculating cone isomerizations. Eye movements are created in their own class (fixationalEM), and eye positions are generated using the class's compute function. If the sequence of positions was generated for a cone mosaic, it can be slotted into the emPositions parameter of the cone mosaic object. When the cone mosaic's compute function then runs with an optical image, the resulting absorptions have a temporal dimension corresponding to the sampled positions during the eye movement.
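A hedged sketch of that workflow (property and method names follow the fixationalEM class; the trial counts are illustrative):

```matlab
% Generate a fixational eye-movement path and apply it to a cone mosaic.
fixEMobj = fixationalEM;                       % eye-movement object
fixEMobj.computeForConeMosaic(cm, 100, ...     % 100 positions per trial
    'nTrials', 1, 'rSeed', 1);
cm.emPositions = squeeze(fixEMobj.emPos(1, :, :));  % slot the path into the mosaic
absorptions = cm.compute(oi);                  % time dimension follows the path
```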

The following tutorials explore eye movements in more detail.

t_fixationalEM

This tutorial demonstrates how to generate fixational eye movements. It first generates 50 different eye-movement sequences, each over the specified duration and temporal sampling. These 50 eye movements are displayed in a figure. Next, the eye movements are re-generated and re-plotted, but this time spatially sampled on a cone mosaic. Lastly, one of the eye-movement sequences is chosen and used to generate a movie of cone excitations from a simple example scene.

t_fixationalEyeMovementsTypes

This analysis script shows the difference between eye movement paths generated using different 'microSaccadeType' parameters. The fixation maps are slightly different for microsaccades that are generated using the 'heatmap/fixation based' strategy vs. the 'stats based' strategy. The heatmap/fixation strategy results in fixation maps that are a bit wider along the horizontal and vertical axes.
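A sketch of setting the two strategies (the compute arguments are duration, sample time, number of trials, and a compute-velocity flag):

```matlab
% Compare eye-movement paths under the two microsaccade strategies.
emStats = fixationalEM;  emStats.microSaccadeType = 'stats based';
emHeat  = fixationalEM;  emHeat.microSaccadeType  = 'heatmap/fixation based';
emStats.compute(0.5, 1/1000, 1, false);   % 0.5 s path, 1 ms steps, 1 trial
emHeat.compute(0.5, 1/1000, 1, false);
```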

t_fixationalEyeMovementsCharacterize

This analysis script computes key characteristics of emPaths and explores how these differ across the microsaccade strategies of the @fixationalEM object. This function can take a while to run. You can set useparfor to true to speed up the calculation, if your machine is set up to run in that mode; for that to work, your startup file must execute without requiring user input. The examined characteristics of the emPaths are:

  • velocity
  • fixation span
  • power spectral density
  • displacement analysis

oiSequences

t_oiSequence

This script demonstrates the oiSequence class. An oiSequence describes a dynamic retinal image, essentially a retinal image video. The oiSequence is not a general video: it applies to the case in which there is one basic stimulus that is either mixed with a background or whose contrast is scaled over time. Eye movements are also included. This simplification lets us compute many psychophysical stimuli efficiently, but it is not completely general.
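A sketch of the idea: a fixed background oi and a modulated oi, blended over time by a weight vector (parameter values are illustrative):

```matlab
% Build an oiSequence from a zero-contrast background and a modulated stimulus.
params = harmonicP;  params.contrast = 0;    % background: zero-contrast harmonic
sceneB = sceneCreate('harmonic', params);
params.contrast = 0.5;                       % modulated version, same geometry
sceneM = sceneCreate('harmonic', params);
oiB = oiCompute(oiCreate('human'), sceneB);
oiM = oiCompute(oiCreate('human'), sceneM);
tAxis   = (0:49) * 0.001;                                    % 50 ms at 1 ms steps
weights = ieScale(fspecial('gaussian', [1, 50], 10), 0, 1);  % temporal window
ois = oiSequence(oiB, oiM, tAxis, weights, 'composition', 'blend');
ois.visualize('movie illuminance');
```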

t_oisCreate

The oisCreate function produces some classic psychophysical stimuli as oiSequences. This script shows how to use that function. The script creates Gaussian-windowed harmonic (Gabor) oiSequences in both monochrome and color, Vernier stimuli, and a flash. Working directly with the oiSequence class is illustrated in the script t_oiSequence.m.
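A sketch of a Gabor call (parameter values are illustrative; the tutorial sets its own):

```matlab
% A harmonic-in-Gaussian (Gabor) oiSequence via oisCreate.
clear hparams
hparams(2) = harmonicP;  hparams(2).freq = 6;  hparams(2).GaborFlag = 0.2;
hparams(1) = hparams(2); hparams(1).contrast = 0;  % matched background
sparams.fov = 0.5;                                 % scene field of view (deg)
weights = [0 0 0.5 1 0.5 0 0];                     % stimulus time course
ois = oisCreate('harmonic', 'blend', weights, ...
    'testParameters', hparams, 'sceneParameters', sparams);
ois.visualize('movie illuminance');
```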

outerSegment

t_linearFilters

Computes L-, M-, and S-cone outer segment photocurrent responses to luminance step stimuli of fixed height presented on different backgrounds. Visualizes isomerization responses, outer segment impulse responses, and outer segment photocurrent responses. This tutorial mainly shows how the background luminance (the adapting stimulus) affects the cone outer segment linear impulse response (luminance adaptation) and, thus, the ensuing cone photocurrent responses, and how this adaptation depends on cone type.
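In outline (a minimal sketch; the linear outer-segment model stands in for the tutorial's full set of backgrounds):

```matlab
% Add a linear outer-segment model and compute photocurrent from absorptions.
cm    = coneMosaic;
cm.os = osLinear;       % linear outer-segment (photocurrent) model
cm.compute(oi);         % isomerizations first
cm.computeCurrent;      % photocurrent via the os linear filters
cm.window;              % inspect absorptions and current together
```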

t_osLinearize

Illustrates foveal and peripheral cone outer segment temporal impulse responses using the osBioPhys object at different mean intensities. This has some overlap with t_osFoveaPeriphery. The difference is that this calls through the os linearFilters methods, whereas t_osFoveaPeriphery exposes the underlying calls to the osBioPhys object. In t_osFoveaPeriphery the raw response to an increment in the dark is shown. Here the impulse response starts at zero and shows the differential response to a stimulus on a steady background. Note the different current scales for foveal and peripheral responses. The foveal response is larger and slower.

Wavefront

t_wvfHuman

Models the retinal image from a flat grid scene using both the wavefront methods and the Marimont and Wandell model.

t_wvfPlot

Illustrates a number of ways to create plots of the wavefront structure using the wvfPlot call.

t_wvfZernike

This detailed, teaching tutorial explains a method of representing the wavefront aberration function using a set of functions known as Zernike polynomials. The wavefront aberration function models the effect of the human cornea, lens, and pupil on the optical wavefront propagating through them. Absorption is modeled by an amplitude < 1, and phase aberrations are modeled by a complex phasor of the form exp(i * 2 * pi * [summation of Zernike polynomials] / wavelength).

From Fourier optics, the eye's point spread function (PSF) can be computed from the wavefront aberration function, or pupil function, by taking the Fourier transform: PSF = |fft2(pupil function)|^2. We tend to do this through the function PsfToOtf, so that we can keep all the fftshift and ifftshift information consistent across routines. See also OtfToPsf.

The Zernike polynomials form an orthogonal basis set over the unit disk. They are useful because they isolate aberrations into separate components, each of which is given a weight and can potentially be corrected. For example, rather than considering the entire aberrated wavefront, we can look at the amount of astigmatism in the 45-degree direction alone and, from its measured Zernike coefficient, see how it contributes to the PSF on its own. The tutorial:

  • Introduces the concept of Zernike polynomials
  • Shows the pupil function and how it is formed using Zernike polynomials
  • Shows associated point-spread functions for given pupil functions
  • Demonstrates and explains longitudinal chromatic aberration
  • Demonstrates and explains Stiles-Crawford effect
  • Looks at measured human data and shows how eyeglasses can correct only certain wavefront aberrations

The fact that, for an aberrated eye, the best optical quality does not occur when the nominal defocus wavelength matches the calculated wavelength is not considered here, but it can be quite important when thinking about real optical quality. An interesting extension of this tutorial would be to use a figure of merit for optical quality (e.g., the Strehl ratio) and show how it varies as a function of defocus.
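As a toy illustration of the pupil-function-to-PSF computation described above (plain MATLAB rather than the wvf* pipeline; the defocus amount and sampling are arbitrary):

```matlab
% Toy PSF from a defocused pupil function, following PSF = |fft2(pupil)|^2.
n = 256;                                % samples across the pupil plane
[x, y] = meshgrid(linspace(-1, 1, n));
rho = sqrt(x.^2 + y.^2);
inPupil = rho <= 1;                     % unit-disk aperture (amplitude 1 inside)
defocus = sqrt(3) * (2*rho.^2 - 1);     % Zernike Z(2,0), defocus
wvMicrons = 0.2 * defocus;              % 0.2 um of defocus (arbitrary amount)
lambda = 0.55;                          % wavelength in microns
pupil = inPupil .* exp(1i * 2 * pi * wvMicrons / lambda);
psf = abs(fftshift(fft2(ifftshift(pupil)))).^2;
psf = psf / sum(psf(:));                % normalize to unit volume
imagesc(psf); axis image; title('Toy PSF of a defocused pupil');
```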