A tool for exploring how information may move through the brain at rest, using both a continuous flow model and a raw effective connectivity graph.
mindVisualizer has three modes for looking at resting-state brain dynamics.
The first is a general information flow model — an estimated field of how information tends to move through the brain during rest. This is based on my preprint, where rDCIM is combined with anatomical geometry and streamlines to produce a continuous flow field. In this mode, the system does not just show which regions are connected, but also gives a spatial, dynamic picture of how information may propagate through the brain.
The second is a raw rDCIM connectivity mode. Here, the brain is represented more directly as a graph of connections between ROIs. You can think of it as a 3D graph showing how different brain regions are linked during rest through effective connectivity.
The third is an ROI flow mode — a dual-window visualization where the left panel shows particle flow through a learned resting-state manifold, and the right panel shows the corresponding ROI activation pattern. The manifold is a 2sDM-style low-dimensional embedding of resting-state fMRI, and the flow field is an MDN learned from trajectories through that space. As you trace a path, the system maps each position to a brain-region activation vector using kNN interpolation, and the LLM interprets which networks become more or less involved.
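The kNN interpolation step can be sketched as follows. This is a minimal, hypothetical version — the function name, array shapes, and weighting scheme are assumptions for illustration, not the project's actual API:

```python
import numpy as np

def knn_roi_activation(pos, embed_pts, roi_patterns, k=8):
    """Map a manifold position to an ROI activation vector by
    inverse-distance-weighted kNN interpolation over embedded frames.

    Assumed shapes (hypothetical):
      embed_pts    (n_frames, dim)   -- 2sDM embedding coordinates
      roi_patterns (n_frames, n_roi) -- ROI activations per frame
    """
    d = np.linalg.norm(embed_pts - pos, axis=1)  # distance to every frame
    idx = np.argsort(d)[:k]                      # k nearest frames
    w = 1.0 / (d[idx] + 1e-8)                    # inverse-distance weights
    w /= w.sum()
    return w @ roi_patterns[idx]                 # weighted average activation

# toy example: 100 embedded frames, 400 ROIs
rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 2))
acts = rng.normal(size=(100, 400))
vec = knn_roi_activation(np.zeros(2), pts, acts)
print(vec.shape)  # (400,)
```

Inverse-distance weighting keeps the output smooth as the probe moves, so the right-panel activation pattern changes continuously rather than jumping between frames.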
The flow mode is designed for interactive exploration.
You can press G to place a probe somewhere in the flow field. Once placed, the probe is carried by the flow itself, tracing a path through the brain as if it were "grabbed" by the underlying information dynamics.
Then, by pressing Shift + G, you can ask the LLM to explain the meaning of that exact flow trajectory. The model does this by identifying which anatomical regions the probe passed through and interpreting what that sequence of regions could mean based on neuroscience knowledge retrieved from the RAG database. In other words, it tries to explain what kind of information transfer this path may correspond to, and what functional role it could reflect in the resting brain.
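The "carried by the flow" behavior amounts to numerically integrating the probe position through the vector field. A minimal sketch, assuming forward-Euler steps and a stand-in `flow_fn` in place of the real MDN field:

```python
import numpy as np

def advect_probe(flow_fn, start, n_steps=200, dt=0.5):
    """Advect a probe through a flow field with forward-Euler steps.
    `flow_fn(pos) -> direction vector` stands in for the MDN field."""
    path = [np.asarray(start, dtype=float)]
    for _ in range(n_steps):
        v = flow_fn(path[-1])          # query local flow direction
        path.append(path[-1] + dt * v) # step the probe along it
    return np.array(path)

# toy flow: slow rotation around the origin
flow = lambda p: np.array([-p[1], p[0]]) * 0.1
path = advect_probe(flow, [1.0, 0.0])
print(path.shape)  # (201, 2)
```

The recorded path is exactly what Shift + G hands to the LLM: a sequence of positions that can be mapped back to anatomical regions.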
This path-based interpretation is conceptually similar to the mechanism used in my related project, TraceScope.
The MDN flow field is built from: https://zenodo.org/records/18200415
In the raw rDCIM version, the interaction is different.
Here, you can initialize each ROI in the connectivity graph with some state. For example, you might assign a visual region a state like "currently processing a human face", and do this across multiple regions to build a rough simulation of what the brain is doing at a given moment.
You can then select any ROI and perturb its state. The system propagates that perturbation through the rDCIM connectivity graph in real time, showing how the change spreads to other regions according to the effective connectivity structure.
At the end, the LLM provides an interpretation of what this perturbation changed in the broader brain network.
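The propagation step can be illustrated with a simple linear model: the perturbed state vector is repeatedly pushed through the connectivity matrix, with a leak term so the response decays rather than blowing up. This is only a sketch of the idea, not the project's actual update rule:

```python
import numpy as np

def propagate(A, state, perturb_idx, delta, n_steps=5, leak=0.9):
    """Spread a perturbation through an effective-connectivity matrix A
    (A[i, j] = influence of ROI j on ROI i). Purely illustrative."""
    x = state.copy()
    x[perturb_idx] += delta            # inject the perturbation
    history = [x.copy()]
    for _ in range(n_steps):
        x = leak * x + A @ x           # linear propagation step
        history.append(x.copy())
    return np.array(history)

rng = np.random.default_rng(1)
A = rng.normal(scale=0.01, size=(400, 400))  # toy effective-connectivity weights
hist = propagate(A, np.zeros(400), perturb_idx=42, delta=1.0)
print(hist.shape)  # (6, 400)
```

Comparing the first and last rows of `hist` is the kind of before/after delta the LLM is asked to interpret.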
In this mode, you explore a learned resting-state manifold rather than anatomical brain space directly. The manifold is a 2sDM-style low-dimensional embedding, and the flow field is an MDN learned from trajectories through that space. You place a probe in the manifold, let it follow the learned flow, and the right panel updates in real time to show the corresponding ROI activation pattern. When you freeze the probe (Shift+G), the system compares the start and end ROI patterns and asks the LLM to interpret which networks became more or less involved, and what kind of resting-state transition that path may reflect.
Based on the original 2sDM paper and extended in my Brain State Manifold video.
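Sampling a flow direction from an MDN output can be sketched like this — the parameter layout (mixture weights, isotropic Gaussians) is an assumption for illustration and may differ from the released artifact's actual format:

```python
import numpy as np

def sample_mdn_direction(pi, mu, sigma, rng):
    """Draw one flow direction from mixture-density-network output.
    pi    (K,)    mixture weights (sum to 1)
    mu    (K, D)  component mean directions
    sigma (K,)    isotropic component std-devs
    Hypothetical output format, not the artifact's actual schema."""
    k = rng.choice(len(pi), p=pi)                            # pick a component
    return mu[k] + sigma[k] * rng.normal(size=mu.shape[1])   # sample its Gaussian

rng = np.random.default_rng(0)
pi = np.array([0.7, 0.3])
mu = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
sigma = np.array([0.05, 0.05])
d = sample_mdn_direction(pi, mu, sigma, rng)
print(d.shape)  # (3,)
```

A mixture is used instead of a single regression target so the field can represent branching flow: at a fork, two components with distinct means capture both plausible directions.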
python -m venv .venv
source .venv/bin/activate  # Linux/macOS
# .venv\Scripts\activate   # Windows
python -m pip install --upgrade pip setuptools wheel
pip install -e .
# REQUIRED: downloads the **Allen Human Brain Atlas** via BrainGlobe and builds cached lookup data
python setup_brain_data.py
# Extra parcellation (optional but strongly recommended - finer subregion labels in flow mode)
python scripts/setup_extra_parcellation.py
Create a .env file in the project root:
# Linux/macOS
printf 'OPENAI_API_KEY=sk-your-key-here\n' > .env
# Windows (PowerShell)
'OPENAI_API_KEY=sk-your-key-here' | Set-Content .env

By default the app uses GPT-5.4-mini (fast and cheap). Pass --hq to switch to GPT-5.4 for higher-quality interpretations.
Most data is already included in the repo or generated during setup. You only need to manually manage these:
Required for all LLM features. See Installation above.
Auto-generated when you run --global-state. You can manually edit this JSON to set expert-level neuroscience descriptions for each ROI.
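A hedged sketch of editing that file programmatically, assuming a flat ROI-name-to-description mapping — check the auto-generated file for the real schema before relying on this, and note the ROI name below is a hypothetical example:

```python
import json
from pathlib import Path

# Hypothetical layout: a flat mapping of ROI name -> state description.
# Inspect the auto-generated file for the actual schema before editing.
path = Path("data/brain_states_rdcim.json")
path.parent.mkdir(parents=True, exist_ok=True)
states = json.loads(path.read_text()) if path.exists() else {}
states["7Networks_LH_Vis_1"] = (
    "Early visual cortex engaged in low-level feature processing."
)
path.write_text(json.dumps(states, indent=2))
```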
Drop JSON or TXT files here to improve LLM interpretations. The default includes 24 brain region descriptions. Add your own medical-grade knowledge to get better results. See data/rag_knowledge/README.md for format docs.
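How retrieval over these files might work, in toy form — real RAG pipelines use learned embeddings, but bag-of-words cosine similarity is enough to show the ranking idea (all names and snippets below are made up for illustration):

```python
import numpy as np
from collections import Counter

def retrieve(query, docs, top_k=2):
    """Toy RAG retrieval: rank knowledge snippets by bag-of-words
    cosine similarity to the query (real pipelines use embeddings)."""
    vocab = sorted({w for d in docs + [query] for w in d.lower().split()})
    def vec(text):
        c = Counter(text.lower().split())
        return np.array([c[w] for w in vocab], dtype=float)
    q = vec(query)
    scores = []
    for d in docs:
        v = vec(d)
        denom = np.linalg.norm(q) * np.linalg.norm(v) or 1.0
        scores.append((q @ v) / denom)
    order = np.argsort(scores)[::-1][:top_k]   # highest similarity first
    return [docs[i] for i in order]

docs = [
    "The default mode network is active during rest and mind-wandering.",
    "V1 is the primary visual cortex.",
    "The amygdala is involved in threat and salience processing.",
]
print(retrieve("What does the default mode network do at rest?", docs))
```

The top-ranked snippets are what gets prepended to the LLM prompt, which is why adding richer knowledge files directly improves interpretation quality.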
Download with python scripts/download_roi_flow_data.py. Source: HuggingFace
Run python scripts/setup_extra_parcellation.py to build a combined atlas (127 regions from Harvard-Oxford + Julich-Brain). Auto-detected by flow mode. See data/extra_parcellation/README.md.
# Basic flow visualization (recommended: --no-rag, see note below)
python -m src.main --no-rag --hq
RAG note: For best results, add more medical-grade knowledge to data/rag_knowledge/. Alternatively, use --no-rag to skip RAG entirely and let the LLM use its own training knowledge.
# On first run, pre-initialize the global brain state; otherwise it will be empty and the LLM will have no context for interpreting perturbations. ROI states are auto-filled from the global state description.
python examples/rdcim_propagation.py --hq --global-state "someone feeling anxious"
# Interactive graph visualization
python examples/rdcim_propagation.py --hq
Global state note: --global-state currently uses the LLM to auto-fill the initial ROI states. For best results, it is recommended to manually create or edit data/brain_states_rdcim.json with expert-level neuroscience descriptions for each ROI.
# Download data first (one-time)
pip install huggingface_hub
python scripts/download_roi_flow_data.py
# Run (defaults point to data/roi_flow/)
python examples/roi_flow_mode.py
python examples/roi_flow_mode.py --hq --debug

- G + click — place a probe in the flow field
- Shift + G — ask the LLM to interpret the probe trajectory
- C — clear all probes

Advanced:

- B — toggle branching mode
- +/- — speed scale
- S — initialize brain states
- Shift + S — propagate state changes through probe path
Note: Place the probe a bit deeper into the brain for best results. Near-surface placements may not be picked up by the flow.
- click ROI — select a parcel
- P — propose perturbations for selected ROI
- Shift + P — propagate perturbation through graph
- G + click — place probe in manifold
- Shift + G — freeze probe, compute ROI delta, LLM interpretation
- C — clear probe and ROI display
- +/- — speed scale
Pass --debug to any script to print the full LLM prompt to console before each call. Useful for tuning prompts or debugging responses.
This repository combines several public resources. If you reuse, redistribute, or publish results generated with this project, please credit the original sources below and check their individual licenses / terms of use.
The continuous flow mode is built on the MDN flow artifact released with my preprint:
Continuous, Tract-Constrained Directional Vector Fields from rDCM Effective Connectivity Using Mixture Density Networks Zenodo record: https://zenodo.org/records/18200415
That work describes the fusion of:
- rDCM effective connectivity on the Schaefer-400 parcellation
- HCP-1065 whole-brain tractography geometry
into a continuous MDN-based directional flow field.
- Schaefer et al. (2018) — Schaefer-400 cortical parcellation
- Frässle et al. (2021) — regression dynamic causal modeling (rDCM)
- Royer et al. (2022) — MICA-MICs
- Van Essen et al. (2013) — Human Connectome Project
- Yeh et al. (2022) — population-based tract-to-region connectome / HCP-1065 tractography atlas
- Allen Human Brain Atlas (allen_human_500um) via the BrainGlobe AtlasAPI
The built-in combined atlas merges Harvard-Oxford (Desikan et al., 2006) and Julich-Brain (Amunts et al., 2020) for broad cortical/subcortical coverage with cytoarchitectonic detail. Custom NIfTI atlases in MNI space are also supported.
- Not medical or clinical advice. This is a research and visualization tool.
- The flow field is a model-based approximation, not a direct measurement of neural signal transmission.
- The LLM interpretations are neuroscience-informed explanations, not ground truth.
- RAG quality matters. Add curated medical knowledge to data/rag_knowledge/ for better LLM interpretations.
- Preprint status. The underlying research is a preprint and has not yet undergone peer review. However, the app works with any MDN flow field — if you supply a better one, everything still works.
- Schaefer, A. et al. (2018). Local-Global Parcellation of the Human Cerebral Cortex from Intrinsic Functional Connectivity MRI. Cerebral Cortex.
- Frässle, S. et al. (2021). Regression dynamic causal modeling for resting-state fMRI. Human Brain Mapping.
- Royer, J. et al. (2022). An Open MRI Dataset For Multiscale Neuroscience. Scientific Data.
- Van Essen, D. C. et al. (2013). The WU-Minn Human Connectome Project: An overview. NeuroImage.
- Yeh, F.-C. et al. (2022). Population-based tract-to-region connectome of the human brain. Nature Communications.
Research use. See the original data sources and their respective licenses / terms for any reused external assets.

