1 change: 1 addition & 0 deletions .gitignore
@@ -2,3 +2,4 @@ _site/
.sass-cache/
.lycheecache
**/.DS_Store
docs/screenshots/
87 changes: 87 additions & 0 deletions _data/research.yml
@@ -0,0 +1,87 @@
# Research projects, rendered by research.html as a card grid.
#
# Each entry:
# title: string (required)
# image: string (optional, repo-relative path to the
# primary thumbnail shown atop the card)
# caption: string (optional, caption text for the
# thumbnail; HTML/markdown allowed)
# summary: string (required, one-sentence pitch shown on
# the closed card; markdown links OK)
# body: string (required, markdown source for the
# full description revealed via the
# "Read more" disclosure; supports
# inline links and <figure>/<img> tags)
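#
# A minimal hypothetical entry (illustrative only, not a real project),
# showing the required fields and the block-scalar styles used below:
#
# - title: Example project
#   summary: >-
#     One-sentence pitch, with [markdown links](https://example.com) allowed.
#   body: |
#     Full markdown description, revealed by the "Read more" disclosure.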

- title: Real-time analysis of neural data
image: /images/colorFish.png
caption: >-
A whole-brain activity map of the zebrafish, showing the distribution
of motion-sensitive neurons, color-coded by preferred motion
direction.
summary: >-
A software platform ([improv](https://github.com/project-improv/improv)) for designing and orchestrating
adaptive experiments — analyzing neural data in real time to measure,
model, and manipulate brain activity, with
[Eva Naumann's lab](https://www.neuro.duke.edu/research/faculty-labs/naumann-lab).
body: |
Together with [Eva Naumann's lab](https://www.neuro.duke.edu/research/faculty-labs/naumann-lab), we've developed *[improv](https://github.com/project-improv/improv)* ([paper](https://www.biorxiv.org/content/10.1101/2021.02.22.432006v1)), a software platform for designing and orchestrating adaptive experiments. By analyzing data in real time, we can measure, model, and manipulate neural activity as new data arrive. We've shown how these tools, in conjunction with holographic photostimulation, could in principle map the functional connectivity of large circuits in a few hours ([paper](https://proceedings.nips.cc/paper/2020/file/531d29a813ef9471aad0a5558d449a73-Paper.pdf), [expanded version](https://arxiv.org/abs/2007.13911)). More recently, we've worked on methods for fast dimensionality reduction and modeling of neural populations in real time ([paper](https://arxiv.org/abs/2108.13941)).

<figure>
<img src="/images/pipelineNewpng3.png" class="img-responsive" alt="Closed-loop pipeline concept">
<figcaption>
Concept for the closed-loop pipeline. Neural data from the zebrafish are collected in the form of images, preprocessed, and analyzed in real time. Targets for optical stimulation are then chosen based on the results of this analysis, creating adaptive experiments that test causal hypotheses.
</figcaption>
</figure>

- title: Animal vocalizations
image: /images/vae_finch.png
caption: >-
a. Syllable VAE: Segmented syllables from adult zebra finch song are
projected to a low-dimensional space, then reconstructed from that
space using a VAE. b. Shotgun VAE: The VAE is trained on 20 ms
segments of adult zebra finch song to model variability on
millisecond timescales. Shown are songs projected into the latent
space using these shorter segments.
summary: >-
Variational autoencoders for dimensionality reduction of
vocalizations in mice, finches, and marmosets — a unified,
species-agnostic view of vocal variability and learning, with
[Richard Mooney's lab](https://www.neuro.duke.edu/mooney-lab).
body: |
Vocalization is a complex behavior that underlies vocal communication and vocal learning, and it informs the study of the competencies underlying human language and musicality. Despite their prominence across a wide range of disciplines, vocalizations are often quantified in ad-hoc, species-specific ways. Fortunately, recent advances in machine learning have produced techniques for compressing high-dimensional data in a data-dependent manner, yielding low-dimensional encodings that minimize information loss. We use one such method, the variational autoencoder (VAE), to perform dimensionality reduction on the vocalizations and vocal learning behavior of several model organisms: laboratory mice, zebra finches, and marmosets. Together with [Richard Mooney's lab](https://www.neuro.duke.edu/mooney-lab), we use latent representations of these species' vocal behavior to reproduce and extend existing results in a species-agnostic manner, offering a unified view of vocal variability and learning on timescales ranging from individual syllables lasting milliseconds to changes across days ([paper](https://elifesciences.org/articles/67855), [paper](https://elifesciences.org/articles/63493)).

- title: Efficient coding in the retina
summary: >-
Using efficient-coding theory to explain how retinal ganglion cells
trade information against metabolic cost — and what changes when the
number of available neurons varies, with
[Greg Field's lab](https://www.neuro.duke.edu/research/faculty-labs/field-lab).
body: |
How does the retina, which receives roughly one gigabit per second of visual information, compress that into something small enough to transmit down an optic nerve with a capacity of one megabit per second &mdash; three orders of magnitude lower? One answer, proposed by Horace Barlow half a century ago, is that the nervous system attempts to minimize redundancy, maximizing mutual information between the world and the brain's representation of it while minimizing metabolic costs. This theory makes a number of testable predictions, including the well-known finding that retinal ganglion cells should respond to either increases or decreases (but not both) in light levels within small regions of visual space &mdash; their receptive fields.

Working together with [Greg Field's lab](https://www.neuro.duke.edu/research/faculty-labs/field-lab), we've shown that patterns of alignment between different collections of receptive fields can also be explained by efficient coding theory. This work builds on experimental findings from the Field lab ([paper](https://www.nature.com/articles/s41586-021-03317-5)), which led to surprising further theoretical results ([paper](https://www.nature.com/articles/s41586-021-03317-5)). In short, the most information-efficient receptive field arrangements are determined both by the levels of noise in the system and by the statistics of natural images.

Most recently, we've looked at what happens to mosaics as the number of neurons available for coding changes. There, [we found](https://www.biorxiv.org/content/10.1101/2022.08.29.505726v2) that greater numbers of available neurons lead to greater diversity in functionally defined cell types, starting with small, temporally smoothing receptive fields and progressing toward larger, temporally "differentiating" receptive fields.

- title: Autoencoding whole-brain dynamics
image: /images/website_VAEGAM_fig.png
caption: >-
A) VAE-GAM Model Schematic: brain volumes containing the signal of
interest are compressed to a lower-dimensional representation by an
encoder network. Sampled latents are then fed through the decoder
network to yield a base map and separate spatial effect maps. Each
effect map is scaled by a potentially non-linear gain modeled using a
Gaussian Process. Variance is modeled separately on a per-voxel basis.
B) Sample Effect Maps for VAE-GAM and GLM: effect maps for a visual
stimulation task dataset analyzed using the proposed VAE-GAM approach
(top) vs. the traditional (GLM) approach.
summary: >-
Variational autoencoders nested inside a Generalized Additive Model
framework for fMRI — better accounting for the spatial dependencies
of brain volumes than the traditional voxel-wise GLM, with
[Kevin LaBar's lab](http://www.labarlab.com).
body: |
Functional magnetic resonance imaging (fMRI) is one of the most popular modalities in human and clinical neuroscience, as it allows researchers to investigate relationships between high-level cognitive functions, brain activity patterns, and experimental variables of interest. Traditional fMRI analyses take a mass-univariate approach, wherein a General Linear Model (GLM) is fit to each small volume pixel ("voxel") independently and researchers correct for an inflated false positive rate post hoc. This method has been widely adopted for its simplicity and its ability to produce separate spatial brain maps capturing the inferred effects of experimental variables on brain-wide activity. However, it fails to account for the rich spatial and temporal structure inherent to this modality.

In recent work, we've explored the idea of using variational autoencoder (VAE) methods nested inside a Generalized Additive Modeling (GAM) framework to model entire brain volumes together ([paper](https://static1.squarespace.com/static/59d5ac1780bd5ef9c396eda6/t/61080b1bcadb042a79974faf)). This approach better accounts for the spatial dependencies of fMRI data and generates separate, interpretable spatial maps capturing the inferred effects of experimental variables on whole-brain dynamics. In collaboration with [Kevin LaBar's lab](http://www.labarlab.com), we're working to expand on this work with the goal of characterizing brain spatio-temporal dynamics underlying transitions between emotional states in health and in disease.
125 changes: 125 additions & 0 deletions css/research.css
@@ -0,0 +1,125 @@
/* Research page card grid. Each project is a col-md-6 card; pairs of
cards sit side-by-side at desktop and collapse to one-per-row at
smaller widths via Bootstrap 3's responsive grid. Uses flexbox on the
row so paired cards match height even when bodies are different
lengths (Bootstrap 3's float-based grid stair-steps otherwise). */

.container {
max-width: 1100px;
}

.research-grid {
display: flex;
flex-wrap: wrap;
align-items: stretch;
}

.research-card {
/* Bootstrap's .col-md-6 has padding: 0 15px (the grid gutter).
Bump to 30px so the visible inter-card whitespace is ~60px
instead of the default ~30px. */
padding-left: 30px;
padding-right: 30px;
margin-bottom: 2.5em;
}

/* Zero out horizontal padding on the card's direct children so the
content (including the thumbnail) spans the full column width
inside the wider gutter. */
.research-card > * {
padding-left: 0;
padding-right: 0;
}

.research-thumb {
margin: 0 0 1em;
padding: 0;
}

.research-thumb img {
display: block;
width: 100%;
height: auto;
}

.research-thumb figcaption {
font-size: 0.85em;
color: #666;
padding: 0.5em 0.5em 0;
font-style: italic;
}

.research-card h2 {
margin-top: 0;
}

.research-summary {
margin-bottom: 0.75em;
}

.research-details {
margin-top: 0;
}

.research-details > summary {
display: inline-block;
cursor: pointer;
color: #337ab7;
font-size: 0.9em;
list-style: none;
user-select: none;
}

.research-details > summary::-webkit-details-marker {
display: none;
}

.research-details > summary::before {
content: "\25B8"; /* ▸ closed indicator */
margin-right: 0.4em;
}

.research-details[open] > summary::before {
content: "\25BE"; /* ▾ open indicator */
}

/* Two-label pattern: keep both literal labels in the DOM so the
browser/screen reader has consistent button text, and use the
details[open] attribute to swap which one is visible. */
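/* The markup these rules expect (as emitted by research.html):
<summary><span class="show-closed">Read more</span><span class="show-open">Read less</span></summary> */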
.research-details .show-open { display: none; }
.research-details[open] .show-closed { display: none; }
.research-details[open] .show-open { display: inline; }

.research-details > summary:hover,
.research-details > summary:focus {
color: #23527c;
text-decoration: underline;
}

.research-body {
font-size: 0.95em;
margin-top: 0.75em;
}

.research-body p:last-child {
margin-bottom: 0;
}

/* Body figures (the secondary images embedded inline in some projects)
should stay constrained to card width. */
.research-body figure {
margin: 1em 0;
}

.research-body figure img {
display: block;
width: 100%;
height: auto;
}

.research-body figcaption {
font-size: 0.85em;
color: #666;
padding-top: 0.5em;
font-style: italic;
}
28 changes: 28 additions & 0 deletions research.html
@@ -0,0 +1,28 @@
---
layout: default
title: Current Projects
desc: Check out what we're working on
nav: Research
extra_head: '<link rel="stylesheet" href="/css/research.css">'
---

<div class="row research-grid">
{% for project in site.data.research %}
<article class="col-md-6 research-card">
{% if project.image %}
<figure class="research-thumb">
<img src="{{ project.image }}" alt="{{ project.title }}" class="img-responsive">
{% if project.caption %}<figcaption>{{ project.caption }}</figcaption>{% endif %}
</figure>
{% endif %}
<h2>{{ project.title }}</h2>
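<!-- markdownify wraps its output in <p>...</p>; stripping those tags lets
the rendered summary sit inside the <p> below without invalid nesting. -->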
<p class="research-summary">{{ project.summary | markdownify | remove: '<p>' | remove: '</p>' }}</p>
<details class="research-details">
<summary><span class="show-closed">Read more</span><span class="show-open">Read less</span></summary>
<div class="research-body">
{{ project.body | markdownify }}
</div>
</details>
</article>
{% endfor %}
</div>
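For reference, here is a sketch of the markup the loop above emits for a single entry without an image; the class names and structure come straight from the template, with placeholder text standing in for the YAML values:

```html
<article class="col-md-6 research-card">
  <h2>Project title</h2>
  <p class="research-summary">One-sentence pitch.</p>
  <details class="research-details">
    <summary><span class="show-closed">Read more</span><span class="show-open">Read less</span></summary>
    <div class="research-body">
      <p>Full markdown body, rendered to HTML.</p>
    </div>
  </details>
</article>
```

The `details`/`summary` disclosure needs no JavaScript; css/research.css swaps the "Read more"/"Read less" labels via the `[open]` attribute.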