CoreNEURON

Optimised simulator engine for NEURON

CoreNEURON is a compute engine for the NEURON simulator optimised for both memory usage and computational speed. Its goal is to simulate large cell networks with small memory footprint and optimal performance.

Features / Compatibility

CoreNEURON is designed as a library within the NEURON simulator and can transparently handle all spiking network simulations including gap junction coupling with the fixed time step method. In order to run a NEURON model with CoreNEURON:

  • MOD files should be THREADSAFE
  • If a random number generator is used, Random123 should be used instead of MCellRan4 (see the sketch after this list)
  • POINTER variables need to be converted to BBCOREPOINTER (details here)
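
For example, a minimal sketch of drawing random values through Random123 with NEURON's Random class (the three stream identifiers used here are arbitrary):

from neuron import h

# select an independent Random123 stream via three integer identifiers
r = h.Random()
r.Random123(1, 2, 3)   # instead of r.MCellRan4(...)
r.negexp(1)            # e.g. intervals for a noisy stimulus
sample = r.repick()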

Dependencies

You will also need the usual NEURON build dependencies such as a C/C++ compiler, CMake, Python, Flex, and Bison, and optionally MPI for parallel simulations.

Installation

CoreNEURON is now integrated into the development version of the NEURON simulator. If you are a NEURON user, the preferred way to install CoreNEURON is to enable extra build options during NEURON installation as follows:

  1. Clone the latest version of NEURON:
git clone https://github.com/neuronsimulator/nrn
cd nrn
  2. Create a build directory:
mkdir build
cd build
  3. Load software dependencies

    Currently CoreNEURON relies on compiler auto-vectorisation, so we advise using one of the Intel, Cray, or PGI compilers to ensure vectorised code is generated. This constraint will be removed in the near future with the integration of the NMODL project.

    HPC systems often use a module system to select software. For example, you can load the compiler, CMake, and Python dependencies using module as follows:

module load intel intel-mpi python cmake

Note that if you are building on a Cray system with the GNU toolchain, you have to set the following environment variable:

export CRAYPE_LINK_TYPE=dynamic
  4. Run CMake with the appropriate options and additionally enable CoreNEURON with the -DNRN_ENABLE_CORENEURON=ON option:
cmake .. \
 -DNRN_ENABLE_CORENEURON=ON \
 -DNRN_ENABLE_INTERVIEWS=OFF \
 -DNRN_ENABLE_RX3D=OFF \
 -DCMAKE_INSTALL_PREFIX=$HOME/install
  5. If you would like to enable GPU support with OpenACC, make sure to use the -DCORENRN_ENABLE_GPU=ON option and the PGI/NVIDIA HPC SDK compilers with CUDA. For example:
cmake .. \
 -DNRN_ENABLE_CORENEURON=ON \
 -DCORENRN_ENABLE_GPU=ON \
 -DNRN_ENABLE_INTERVIEWS=OFF \
 -DNRN_ENABLE_RX3D=OFF \
 -DCMAKE_INSTALL_PREFIX=$HOME/install \
 -DCMAKE_C_COMPILER=nvc \
 -DCMAKE_CXX_COMPILER=nvc++

NOTE : If the CMake command fails, make sure to delete the temporary CMake cache files (CMakeCache.txt) before rerunning CMake.
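
For example, from the build directory:

rm -rf CMakeCache.txt CMakeFiles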

  6. Build and install: once the configure step is done, you can build and install the project as:

    make -j install

Building a Model

Once NEURON is installed with CoreNEURON support, you need to set up the PATH and PYTHONPATH environment variables as:

export PYTHONPATH=$HOME/install/lib/python:$PYTHONPATH
export PATH=$HOME/install/bin:$PATH

As in a typical NEURON workflow, you can use nrnivmodl to translate MOD files:

nrnivmodl mod_directory

In order to enable CoreNEURON support, you must set the -coreneuron flag. Make sure the necessary modules (compilers, CUDA, MPI etc.) are loaded before using nrnivmodl:

nrnivmodl -coreneuron mod_directory

If you see a compilation error, one of the MOD files might be incompatible with CoreNEURON. Please open an issue with an example and we can help to fix it.

Running Simulations

With CoreNEURON, existing NEURON models can be run with minimal changes. A given NEURON model typically needs the following adjustments:

  1. Enable cache efficiency : h.cvode.cache_efficient(1)

  2. Enable CoreNEURON :

    from neuron import coreneuron
    coreneuron.enable = True
    
  3. If GPU support is enabled during build, enable GPU execution using :

    coreneuron.gpu = True
    
  4. Use psolve to run the simulation after initialization :

    h.stdinit()
    pc.psolve(h.tstop)
    

Here is a simple example model that runs first with NEURON and then with CoreNEURON, and compares the spike results between the two:

import sys
from neuron import h, gui

# setup model
h('''create soma''')
h.soma.L=5.6419
h.soma.diam=5.6419
h.soma.insert("hh")
h.soma.nseg = 3
ic = h.IClamp(h.soma(.25))
ic.delay = .1
ic.dur = 0.1
ic.amp = 0.3

ic2 = h.IClamp(h.soma(.75))
ic2.delay = 5.5
ic2.dur = 1
ic2.amp = 0.3

h.tstop = 10

# make sure to enable cache efficiency
h.cvode.cache_efficient(1)

pc = h.ParallelContext()
pc.set_gid2node(pc.id()+1, pc.id())
myobj = h.NetCon(h.soma(0.5)._ref_v, None, sec=h.soma)
pc.cell(pc.id()+1, myobj)

# First run NEURON and record spikes
nrn_spike_t = h.Vector()
nrn_spike_gids = h.Vector()
pc.spike_record(-1, nrn_spike_t, nrn_spike_gids)
h.run()

# copy NEURON vectors to Python lists
nrn_spike_t = nrn_spike_t.to_python()
nrn_spike_gids = nrn_spike_gids.to_python()

# now run CoreNEURON
from neuron import coreneuron
coreneuron.enable = True

# for GPU support
# coreneuron.gpu = True

coreneuron.verbose = 0
h.stdinit()
corenrn_all_spike_t = h.Vector()
corenrn_all_spike_gids = h.Vector()
pc.spike_record(-1, corenrn_all_spike_t, corenrn_all_spike_gids)
pc.psolve(h.tstop)

# copy NEURON vectors to Python lists
corenrn_all_spike_t = corenrn_all_spike_t.to_python()
corenrn_all_spike_gids = corenrn_all_spike_gids.to_python()

# check spikes match between NEURON and CoreNEURON
assert(nrn_spike_t == corenrn_all_spike_t)
assert(nrn_spike_gids == corenrn_all_spike_gids)

h.quit()

We can run this model as:

python test.py

You can find a HOC example here.

FAQs

What results are returned by CoreNEURON?

At the end of the simulation, CoreNEURON transfers the following back to NEURON by default: spikes, voltages, state variables, NetCon weights, all Vector.record data, and most GUI trajectories. These variables can be recorded using the regular NEURON API (e.g. Vector.record or spike_record).
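
For example, a trajectory recorded in the usual way before initialization is filled in by CoreNEURON at the end of psolve (soma here refers to a section from your own model):

from neuron import h

v = h.Vector()
t = h.Vector()
v.record(h.soma(0.5)._ref_v)  # membrane voltage trajectory
t.record(h._ref_t)            # simulation time
# ... after h.stdinit() and pc.psolve(h.tstop), v and t hold the trajectory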

How can I pass additional flags to build?

You can specify compiler-specific C/C++ optimization flags with the -DCMAKE_CXX_FLAGS and -DCMAKE_C_FLAGS options to the CMake command. For example:

cmake .. -DCMAKE_CXX_FLAGS="-O3 -g" \
         -DCMAKE_C_FLAGS="-O3 -g" \
         -DCMAKE_BUILD_TYPE=CUSTOM

By default, OpenMP threading is enabled. You can disable it with the -DCORENRN_ENABLE_OPENMP=OFF option.
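
For example, during the CMake configuration step:

cmake .. -DCORENRN_ENABLE_OPENMP=OFF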

GPU enabled build is failing with inlining related errors, what to do?

If there are large functions or procedures in a MOD file that are not inlined by the compiler, you may need to pass additional C++ flags to the PGI compiler. You can try the following CXX flags:

-DCMAKE_CXX_FLAGS="-O2 -Minline=size:1000,levels:100,totalsize:40000,maxsize:4000"

For other errors, please open an issue.

Developer Build

Building standalone CoreNEURON

If you want to build the standalone version of CoreNEURON, first clone the repository:

git clone https://github.com/BlueBrain/CoreNeuron.git

Once the appropriate modules for the compiler, MPI, and CMake are loaded, you can build CoreNEURON with:

mkdir CoreNeuron/build && cd CoreNeuron/build
cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/install
make -j && make install

If you don't have MPI, you can disable the MPI dependency with the CMake option -DCORENRN_ENABLE_MPI=OFF. Once the build is successful, you can run tests using:

make test

Compiling MOD files

In order to compile mod files, one can use nrnivmodl-core as:

/install-path/bin/nrnivmodl-core mod-dir

This will create a special-core executable under the <arch> directory.
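
You can then launch simulations with this executable in the same way as nrniv-core; a sketch, assuming an x86_64 build and model data written under coredat:

x86_64/special-core --tstop 100 --datpath coredat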

Building with GPU support

CoreNEURON supports GPUs using the OpenACC programming model when enabled with -DCORENRN_ENABLE_GPU=ON. Below are the steps to compile with the PGI/NVIDIA HPC SDK compilers:

module purge all
module load nvidia-hpc-sdk/20.11 cuda/11 cmake openmpi # change pgi, cuda and mpi modules
cmake .. -DCORENRN_ENABLE_GPU=ON -DCMAKE_INSTALL_PREFIX=$HOME/install -DCMAKE_C_COMPILER=nvc -DCMAKE_CXX_COMPILER=nvc++
make -j && make install

Note that the CUDA Toolkit version should be compatible with the PGI compiler installed on your system; otherwise, you have to add extra C/C++ flags. For example, if you are using a CUDA Toolkit 9.0 installation but the PGI default target is CUDA 8.0, then you have to add:

-DCMAKE_C_FLAGS:STRING="-O2 -ta=tesla:cuda9.0" -DCMAKE_CXX_FLAGS:STRING="-O2 -ta=tesla:cuda9.0"

You have to run the GPU executable with the --gpu flag. Make sure to enable the cell re-ordering mechanism to improve GPU performance using the --cell-permute option (permutation types: 1 or 2):

mpirun -n 1 ./bin/nrniv-core --mpi --gpu --tstop 100 --datpath ../tests/integration/ring --cell-permute 2

Note: If your model uses the Random123 random number generator, you cannot use the same executable for CPU and GPU runs. We suggest installing separate NEURON + CoreNEURON builds for CPU and GPU simulations. This will be fixed in future releases.

Running tests with SLURM

If your system uses a different MPI launcher than mpirun (for example, srun with SLURM), you can specify it during CMake configuration as:

cmake .. -DTEST_MPI_EXEC_BIN="srun" \
         -DTEST_EXEC_PREFIX="srun;-n;2" \
         -DAUTO_TEST_WITH_SLURM=OFF \
         -DAUTO_TEST_WITH_MPIEXEC=OFF

You can disable the unit tests with the CMake option:

cmake .. -DCORENRN_ENABLE_UNIT_TESTS=OFF

CLI Options

To see all CLI options for CoreNEURON, run ./bin/nrniv-core -h.

Formatting CMake and C++ Code

In order to format code with the cmake-format and clang-format tools before creating a PR, enable the CMake options below:

cmake .. -DCORENRN_CLANG_FORMAT=ON -DCORENRN_CMAKE_FORMAT=ON
make -j install

and now you can use cmake-format or clang-format targets:

make cmake-format
make clang-format

Citation

If you would like to know more about CoreNEURON or cite it, please use the following paper:

  • Pramod Kumbhar, Michael Hines, Jeremy Fouriaux, Aleksandr Ovcharenko, James King, Fabien Delalondre and Felix Schürmann. CoreNEURON: An Optimized Compute Engine for the NEURON Simulator (doi.org/10.3389/fninf.2019.00063)

Support / Contribution

If you see any issue, feel free to raise a ticket. If you would like to improve this library, see the open issues.

You can see current contributors here.

License

Funding

CoreNEURON is developed in a joint collaboration between the Blue Brain Project and Yale University. This work has been funded by the EPFL Blue Brain Project (funded by the Swiss ETH board), NIH grant number R01NS11613 (Yale University), the European Union Seventh Framework Program (FP7/2007-2013) under grant agreement no. 604102 (HBP), the European Union's Horizon 2020 Framework Programme for Research and Innovation under Grant Agreement no. 720270 (Human Brain Project SGA1), and Grant Agreement no. 785907 (Human Brain Project SGA2).
