Status: In Development. All tasks are currently subject to change.
NeuroGym is a curated collection of neuroscience tasks with a common interface. The goal is to facilitate training of neural network models on neuroscience tasks.
Documentation: https://neurogym.github.io/
NeuroGym inherits from the machine learning toolkit Gym by OpenAI, and thus allows a wide range of well-established machine learning algorithms to be easily trained on behavioral paradigms relevant to the neuroscience community. NeuroGym also incorporates several properties and functions (e.g. continuous-time and trial-based tasks) that are important for neuroscience applications. The toolkit also includes various modifier functions that allow easy configuration of new tasks.
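For example, since every task follows the standard Gym interface, an agent interacts with it through the usual reset/step loop. The sketch below assumes that importing neurogym registers its tasks with gym and that the task id 'PerceptualDecisionMaking-v0' is available:

import gym
import neurogym  # importing neurogym registers its tasks with gym

# Task id is illustrative; any implemented NeuroGym task can be used.
env = gym.make('PerceptualDecisionMaking-v0')
obs = env.reset()
for _ in range(100):
    action = env.action_space.sample()  # replace with your agent's policy
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()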
You can perform a minimal install of neurogym with:
git clone https://github.com/neurogym/neurogym.git
cd neurogym
pip install -e .
Or perform a full install by replacing the last command with pip install -e '.[all]'
Currently implemented tasks can be found here.
Wrappers (see list) are short scripts that allow introducing modifications to the original tasks. For instance, the Random Dots Motion task can be transformed into a reaction time task by passing it through the reaction_time wrapper, as in the sketch below. Alternatively, the combine wrapper allows training an agent on two different tasks simultaneously.
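As an illustration, a wrapper is applied by passing the task environment through it. The sketch below assumes a wrapper class named ReactionTime in neurogym.wrappers; check the wrapper list for the exact names and arguments:

import gym
import neurogym
from neurogym.wrappers import ReactionTime  # class name assumed; see the wrapper list

env = gym.make('PerceptualDecisionMaking-v0')  # task id is illustrative
env = ReactionTime(env)  # the trial now ends as soon as the agent responds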
NeuroGym is compatible with most packages that use OpenAI Gym. In this example Jupyter notebook we show how to train a neural network with reinforcement learning algorithms using the Stable Baselines toolbox.
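A minimal sketch of that workflow, assuming the Stable Baselines (v2) API and an illustrative task id; see the notebook for a complete example:

import gym
import neurogym  # registers the tasks with gym
from stable_baselines.common.vec_env import DummyVecEnv
from stable_baselines import A2C

task = gym.make('PerceptualDecisionMaking-v0')
env = DummyVecEnv([lambda: task])  # Stable Baselines expects a vectorized env
model = A2C('MlpPolicy', env, verbose=1)  # policy and hyperparameters are illustrative
model.learn(total_timesteps=100000)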
Contributing new tasks should be easy. You can contribute tasks using the regular OpenAI Gym format. If your task has a trial/period structure, this template provides the basic structure we recommend for a task:
from gym import spaces
import neurogym as ngym


class YourTask(ngym.PeriodEnv):
    metadata = {}

    def __init__(self, dt=100, timing=None, extra_input_param=None):
        super().__init__(dt=dt)

    def new_trial(self, **kwargs):
        """
        new_trial() is called when a trial ends to generate the next trial.

        Here you have to set:
            The trial periods: fixation, stimulus...
        Optionally, you can set:
            The ground truth: the correct answer for the created trial.
        """

    def _step(self, action):
        """
        _step receives an action and returns:
            a new observation, obs
            the reward associated with the action, reward
            a boolean indicating whether the experiment has ended, done
            a dictionary with extra information:
                the ground-truth correct response, info['gt']
                a boolean indicating the end of the trial, info['new_trial']
        """
        # obs, reward, done, new_trial and gt must be computed above
        return obs, reward, done, {'new_trial': new_trial, 'gt': gt}
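Once the task is defined, one way to make it available through gym.make is the standard Gym registration mechanism (a minimal sketch; the id and entry point below are placeholders):

from gym.envs.registration import register

register(
    id='YourTask-v0',
    entry_point='your_module:YourTask',  # placeholder module path
)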
Contact
Manuel Molano (manuelmolanomazon@gmail.com). Guangyu Robert Yang (gyyang.neuro@gmail.com).
Contributors (listed in chronological order)