Explaining visual brain representations using predictive coding networks.
PredNet - Algonauts

Deep predictive coding networks are neuroscience-inspired unsupervised learning models that learn to predict future sensory states. We build upon the PredNet implementation by Lotter, Kreiman, and Cox (2016) to investigate whether predictive coding representations are useful for predicting brain activity in the visual cortex. We use representational similarity analysis (RSA) to compare PredNet representations to functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) data from the Algonauts Project (Cichy et al., 2019).
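As a rough sketch of how an RSA comparison like this can be computed (the array shapes, random data, and the choice of correlation distance below are illustrative assumptions, not the repository's exact pipeline): each representation is summarized as a representational dissimilarity matrix (RDM) over stimuli, and two RDMs are compared via the Spearman correlation of their upper triangles.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(features):
    """Condensed RDM: 1 - Pearson correlation between every pair of stimuli.

    `features` has shape (n_stimuli, n_features); pdist returns the
    upper triangle of the n_stimuli x n_stimuli dissimilarity matrix.
    """
    return pdist(features, metric="correlation")

def rsa_score(features_a, features_b):
    """Spearman correlation between the upper triangles of two RDMs."""
    rho, _ = spearmanr(rdm(features_a), rdm(features_b))
    return rho

# Toy example with made-up data: 10 stimuli, arbitrary feature sizes.
rng = np.random.default_rng(0)
model_feats = rng.standard_normal((10, 128))  # e.g. flattened network activations
brain_feats = rng.standard_normal((10, 50))   # e.g. fMRI voxel responses
print(rsa_score(model_feats, brain_feats))
```

Because the comparison happens at the level of RDMs, the model and brain feature spaces can have different dimensionalities, which is what makes RSA convenient for relating network activations to fMRI or MEG data.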

In contrast to previous findings in the literature (Khaligh-Razavi & Kriegeskorte, 2014), we report empirical data suggesting that unsupervised models trained to predict frames of videos may outperform supervised image classification baselines.

Code

This repository contains supporting code for PredNet training, fine-tuning, feature extraction, and evaluation. We also use the Algonauts development kit, which is not distributed here. The experiment workflow is as follows:
