A computational platform for studying spiking neural circuits developed by Dr. Pulin Gong's group at University of Sydney.

SpikeNet

SpikeNet is a computational platform for studying spiking neural circuits. The software consists of three stand-alone components:

  1. A user interface for configuring spiking neuronal networks
  2. A c++ simulator
  3. A user interface for parsing and post-analyzing the simulation results

The design of SpikeNet provides the following four main features.

  • Configurability SpikeNet supports any user-defined synaptic connectivity topology, coupling strengths and conduction delays. It can be easily extended by developers to support any variation of integrate-and-fire neuron and synapse models.

  • Performance Simulation of spiking neuronal networks quickly becomes computationally intensive once the number of neurons exceeds a few thousand. To achieve superior performance, various measures have been taken at both the algorithmic and implementation levels.

  • User-friendly interface Although c++ is used for the heavy-duty computation, the user interface is written in a high-level programming language (Matlab) for user-friendliness and fast prototyping. This means SpikeNet does not require non-developer users to be familiar with c++.

  • Scalability The design of the SpikeNet c++ simulator readily supports parallel computing using the Message Passing Interface (MPI). Additionally, the HDF5-based I/O file format provides big-data handling capability. Portable Batch System (PBS) scripts for array jobs are also provided for users with access to a cluster.

Getting Started

Prerequisites

  • Autoconf, a standard tool on OSX and Linux distributions
  • A c++ compiler that supports c++11 standard (GCC 4.2.1 or later; Intel C++ 12.0 or later)
  • HDF5 c/c++ API (open source), e.g.,
brew install Homebrew/homebrew-science/HDF5
  • Matlab (2013a or later) is optional but highly recommended
  • Portable Batch System (PBS) is optional but highly recommended
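
Before building, it may be worth a quick check that the tool-chain is visible. The commands below are only a suggestion and assume a GCC-style compiler and the h5c++ wrapper that ships with most HDF5 installations:

autoconf --version    # Autoconf available?
g++ --version         # compiler with c++11 support?
h5c++ -showconfig     # HDF5 c/c++ API installed and on PATH?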

FAQ

Q: What if I am using Windows?

A: Sorry, you are on your own.

Q: What if I do not have Matlab or simply hate it?

A: You can either request a Python I/O interface from us or contribute to the project by translating the existing Matlab I/O interface into Python or other languages.

Installing

  1. Ask for read permission from one of the contributors with admin rights.
  2. Make a new directory
mkdir tmp
cd tmp
  3. Clone the GitHub repo
git clone https://github.com/BrainDynamicsUSYD/SpikeNet
  4. Build the c++ simulator
cd SpikeNet
autoconf
./configure
make
make clean
cd ..

Run the demo

You should now see the simulator binary in the current directory; with it you can run simulations by creating input files using the Matlab user interface. Following are the steps to use the Matlab user interface (a scripted version of the full sequence is sketched after these steps).

  1. Make a new directory for storing data
mkdir tmp_data
  2. Start Matlab and set up the environment (in Matlab)
cd SpikeNet
addpath(genpath(cd))
  3. Generate the example input files (in Matlab)
cd ../tmp_data
main_demo
  4. Run the simulator with the input file
cd tmp_data
../simulator *in.h5
  5. Parse the output files into Matlab .mat files and run some basic post-processing (in Matlab)
cd ../tmp_data
PostProcessYG()
  6. Load the .mat file and do some basic visualization (in Matlab)
d = dir('*RYG.mat')
R = load(d(1).name)
raster_plot(R,1)
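
For convenience, the whole demo can also be driven non-interactively from the shell. The script below is a minimal sketch only, not part of SpikeNet: it assumes it is run from the tmp directory created above, that matlab is on your PATH, and it simply chains the Matlab and simulator commands listed in the steps above.

#!/bin/bash
set -e
# Sketch: the Matlab --> c++ --> Matlab demo workflow in one go (run from tmp/)

mkdir -p tmp_data

# 1. Pre-processing: generate the example input files with the Matlab user interface
matlab -nodisplay -r "cd SpikeNet; addpath(genpath(cd)); cd ../tmp_data; main_demo; exit"

# 2. Simulation: run the c++ simulator on the generated input file(s)
cd tmp_data
../simulator *in.h5

# 3. Post-processing: parse the output files into .mat files
matlab -nodisplay -r "addpath(genpath('../SpikeNet')); PostProcessYG(); exit"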

High performance computing

For those who have access to a high-performance computing cluster with PBS, SpikeNet also provides a bash script that fully automates the above Matlab --> c++ --> Matlab workflow for PBS job array submission. The script all_in_one.sh has the following features:

  1. It automatically detects which stage each array job (with a unique 4-digit integer array ID) has reached: pre-processing done, simulation done, or post-simulation data parsing done (a rough sketch of this detection logic follows the list).
  2. It starts each array job from the last unfinished stage instead of the first stage. This comes in handy when hundreds of array jobs end prematurely at different stages, say because the HPC was shut down unexpectedly, in which case simply re-submitting the script will clean up the mess.
  3. It passes the array ID as an input argument to the Matlab pre-processing script.
  4. It automatically saves a copy of the pre-processing Matlab script to the data directory when starting the array job with ID 0001.
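
As an illustration, the stage detection in point 1 can be pictured roughly as follows. This is a sketch only, not the actual all_in_one.sh; the file-name patterns are borrowed from the in.h5, out.h5 and RYG.mat names used elsewhere in this README, and the real script's checks may differ.

ID=$(printf "%04d" "$PBS_ARRAYID")    # unique 4-digit array ID, set by PBS
cd tmp_data
if ls ${ID}*RYG.mat > /dev/null 2>&1; then
    echo "Job ${ID}: data parsing already done, nothing left to do"
elif ls ${ID}*out.h5 > /dev/null 2>&1; then
    echo "Job ${ID}: simulation done, resuming at data parsing"
    # ... call the Matlab parsing function here ...
elif ls ${ID}*in.h5 > /dev/null 2>&1; then
    echo "Job ${ID}: pre-processing done, resuming at simulation"
    # ... call the c++ simulator here ...
else
    echo "Job ${ID}: starting from pre-processing"
    # ... call the Matlab pre-processing function with ${ID} as its argument ...
fi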

Following are the steps to use the PBS script to run your array jobs.

  1. Make sure you have set up your PBS environment correctly (e.g., module load HDF5-1.10.0) and rebuild the c++ simulator.
  2. Go to the tmp directory and make a copy of the script
cp SpikeNet/shell_scripts/all*bak all_in_one.sh
  3. Make the script executable
chmod +x all_in_one.sh
  4. Edit the following variables in the bash script accordingly:
MATLAB_SOURCE_PATH_2='your_path'
MATLAB_PRE_PROCESS_FUNC='your_functions'
  5. Make a directory for PBS output
mkdir PBSout
  6. Submit the PBS array job
qsub -t 1-X -q queue_name all_in_one.sh

If your version of PBS uses -J instead of -t for array jobs, you also need to change $PBS_ARRAYID to $PBS_ARRAY_INDEX in the all_in_one.sh script.
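
Alternatively, a single fallback assignment near the top of the script can cover both flavours; this is a suggestion rather than something the shipped script already contains:

# Use whichever array-index variable the local PBS flavour defines
ARRAY_ID=${PBS_ARRAY_INDEX:-$PBS_ARRAYID}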

  7. Once the array jobs have finished, you can collect their data for post-analysis, for example (in Matlab)
cd tmp_data
[mean_firing_rate, arrayID] = CollectVectorYG('Analysis','mean(Analysis.rate{1})');
plot(arrayID, mean_firing_rate);
  8. Be aware that the seed for the random number generator in PBS array jobs may need to be set manually; otherwise it remains constant across jobs (one way to do this is sketched below).
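
One way to vary the seed per array job is to pass the array ID through to Matlab's rng() when calling the pre-processing function. This is a sketch only; your_pre_process_func is a placeholder for whatever function MATLAB_PRE_PROCESS_FUNC points to.

# Sketch only: seed Matlab's RNG with the PBS array ID so each job draws different random numbers
matlab -nodisplay -r "rng(${PBS_ARRAYID}); your_pre_process_func(${PBS_ARRAYID}); exit"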

The workflow

The typical SpikeNet workflow is shown in the following flowchart.

[flowchart: the typical SpikeNet workflow]

Notes:

  • Although the c++ simulator accepts input files with any names, A-T1 is the recommended and default naming format.
  • A is a 4-digit PBS array ID number.
  • T1 is a timestamp identifying when the in.h5 file was generated.
  • Similarly, T2 is a timestamp identifying when the out.h5 file was generated, which allows multiple simulations to be run for the same in.h5 file.
  • The restart_TreeID.h5 files allow users to directly modify any aspect of a simulation and restart it from there.
  • The TreeID is automatically generated so that users can make as many different modifications, and restart the simulation as many times, as desired.
  • For technical reasons, the time-series data sampled during simulation from each neuron population or synapse group (identified by an ID number) are stored in separate samp.h5 files.
  • The dashed lines mean that the c++ simulator and the PostProcessYG() Matlab function will automatically look for those auxiliary input files based on the information contained in the main input files.

More details

For MPI jobs with SpikeNet, please contact Yifan Gu for more technical details.

The full documentation is also available here.

Guozhang Chen has recorded a video introducing SpikeNet, but unfortunately it is missing the first 15 minutes.

Reproduce published papers

To reproduce the dynamics reported in Gu et al., 2019, please submit the PBS script ./shell_scripts/YG_all_in_one.sh.

To reproduce the dynamics reported in Chen & Gong, 2019, please submit the PBS script ./shell_scripts/GC_all_in_one.sh.

Authors

  • Yifan Gu - Chief architect and initial work - yigu8115
  • James A Henderson - HDF5-based I/O and learning schemes - JamesAHenderson
  • Guozhang Chen - The rest of the functions - ifgovh

See also the list of contributors who participated in this project.

Citation

If you find this code useful in your research, please cite:

@article{10.1371/journal.pcbi.1006902,
  author    = {Gu, Yifan AND Qi, Yang AND Gong, Pulin},
  journal   = {PLOS Computational Biology},
  publisher = {Public Library of Science},
  title     = {Rich-club connectivity, diverse population coupling, and dynamical activity patterns emerging from local cortical circuits},
  year      = {2019},
  month     = {04},
  volume    = {15},
  number    = {4},
  pages     = {1-34},
  url       = {https://doi.org/10.1371/journal.pcbi.1006902},
  doi       = {10.1371/journal.pcbi.1006902}
}

@article{10.1038/s41467-019-12918-8,
  author  = {Chen, Guozhang AND Gong, Pulin},
  journal = {Nature Communications},
  title   = {Computing by modulating spontaneous cortical activity patterns as a mechanism of active visual processing},
  year    = {2019},
  month   = {10},
  volume  = {10},
  url     = {https://www.nature.com/articles/s41467-019-12918-8#citeas},
  doi     = {10.1038/s41467-019-12918-8}
}

License

This project is licensed under the Apache License 2.0 - see the LICENSE.md file for details.