
[MRG] Enable MPI parallelism with ParallelContext #79

Merged
merged 49 commits on Jul 14, 2020
49 commits
54af4fe
ENH: enable NEURON parallelism
Sep 23, 2019
d328e8b
MAINT: converge code paths for MPI and joblibs
Sep 27, 2019
ba72481
DOC: update changelog with MPI parallel feature
Sep 25, 2019
ddb05a3
TST: add test for MPI and Network refactoring
Sep 24, 2019
22ebede
MAINT: fix flake8 errors in dipole.py
Sep 27, 2019
8681f37
MAINT: remove MPI parallel example
Sep 27, 2019
485a2fc
TST: testing that MPI spawn works
Sep 27, 2019
cdfc227
ENH: get_rank() exposed in API
Sep 27, 2019
e31a295
TST: convert mpi test from mpi4py to mpiexec
Sep 27, 2019
e4f63a1
MAINT: update evoked example for mpiexec
Feb 25, 2020
386f23d
MAINT: fix tests after rebase
Feb 26, 2020
5fe137d
MAINT: make test_hnn_core MPI-aware and reuse
Feb 26, 2020
bfaa4da
DOC: fix whats_new.rst
Mar 5, 2020
1f7198f
MAINT: new class _neuron_network
Mar 18, 2020
695be53
ENH: abstract MPI simulations with new backend
Mar 20, 2020
4e47615
MAINT: create joblib backend, refactor NEURON code
Mar 20, 2020
3866b26
TST: add joblib and update MPI
Mar 20, 2020
1a3bcba
TST: update travis config for osx build
Mar 20, 2020
cc156a3
MAINT: fixup after rebasing
Jul 7, 2020
8391995
MAINT: fixes after rebasing
Jul 7, 2020
ed881ec
MAINT: updates for PR comments
Jul 7, 2020
c4aec61
MAINT: rename MPI example
Jul 7, 2020
944fb9f
MAINT: remove _shutdown method
Jul 7, 2020
1317301
MAINT: flake8 in parallel_backends.py
Jul 7, 2020
d28a632
DOC: update api.rst and whats_new.rst with MPI
Jul 7, 2020
1cc993d
MAINT: only run 1 trial for somato example
Jul 7, 2020
0441020
MAINT: address comments in PR
Jul 7, 2020
6947591
TST: install psutil mpi4py joblib prereqs
Jul 7, 2020
0387006
DOC: fix API.rst and Network docstrings
Jul 7, 2020
88d1b09
TST: install openmpi in CircleCI
Jul 7, 2020
65e39fd
MAINT: put MPI example in plot_simulate_evoked.py
Jul 8, 2020
0a4bbe6
BUG: MPI: specify full path of python interpreter
Jul 8, 2020
2e2d5f6
TST: install NEURON with pip for TravisCI
Jul 8, 2020
7f6226c
DOC: setup CircleCI conda paths
Jul 8, 2020
007f4ef
MAINT: truly suppress stderr while running sim
Jul 8, 2020
bfe9429
TST: fix tests after rebase
Jul 8, 2020
b77b400
TST: add back installing hnn_core
Jul 8, 2020
91b0455
DOC: upates to CONTRIBUTING and what_new.rst
Jul 8, 2020
5aef954
BUG: aggregate spiking data for MPI
Jul 8, 2020
4023099
MAINT: make returning sim data consistent for MPI
Jul 8, 2020
4639978
MAINT: rearrange imports
Jul 9, 2020
cb8dea4
DOC: MPI parallel documentation
Jul 9, 2020
0873f70
MAINT: simplify handling of processor detection
Jul 9, 2020
c039dcf
MAINT: move parallel_backend.py imports to top
Jul 13, 2020
c61d265
MAINT: use _GLOBAL instead of GLOBAL to avoid collisions
Jul 14, 2020
b3aaae2
API: remove n_jobs parameter from simulate_dipole
Jul 14, 2020
2830272
BUG: tests need _BACKEND set in simulate_dipole
Jul 14, 2020
6fb8440
MAINT: flake8 and address PR comments
Jul 14, 2020
fdd6857
DOC: fixup documentation
Jul 14, 2020
17 changes: 11 additions & 6 deletions .circleci/config.yml
@@ -15,40 +15,45 @@ jobs:
wget -q https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh;
chmod +x ~/miniconda.sh;
~/miniconda.sh -b -p ~/miniconda;
echo "export PATH=~/miniconda/bin:$PATH" >> $BASH_ENV;

- run:
name: Install openmpi
command: |
sudo apt-get install libopenmpi-dev openmpi-bin

- run:
name: Setup Python environment
command: |
export PATH=~/miniconda/bin:$PATH
conda update --yes --quiet conda
conda create -n testenv --yes pip python=3.6
source activate testenv
conda install --yes scipy numpy matplotlib
pip install mne
pip install mne psutil mpi4py joblib

- run:
name: Setup doc building stuff
command: |
source activate testenv
source ~/miniconda/bin/activate testenv
pip install sphinx numpydoc sphinx-gallery sphinx_bootstrap_theme pillow

- run:
name: Setup Neuron
command: |
source activate testenv
source ~/miniconda/bin/activate testenv
pip install NEURON

- run:
name: Setup hnn-core
command: |
source activate testenv
source ~/miniconda/bin/activate testenv
make
python setup.py develop

- run:
name: Build the documentation
command: |
source activate testenv
source ~/miniconda/bin/activate testenv
cd doc/ && make html

- persist_to_workspace:
55 changes: 33 additions & 22 deletions .travis.yml
@@ -1,45 +1,56 @@
language: python
language: c

env:
- PYTHON_VERSION=3.6
- PYTHON_VERSION=3.7

addons:
apt:
sources:
- ubuntu-toolchain-r-test
packages:
- libopenmpi-dev
matrix:
include:
# Linux
- os: linux
name: "Linux full"
addons:
apt:
sources:
- ubuntu-toolchain-r-test
packages:
- libopenmpi-dev
- openmpi-bin
# OSX
- os: osx
name: "MacOS full"


before_install:
- wget -q http://repo.continuum.io/miniconda/Miniconda-latest-Linux-x86_64.sh -O miniconda.sh
- if [ "${TRAVIS_OS_NAME}" == "linux" ]; then
wget -q https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh;
else
wget -q https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh -O miniconda.sh;
fi;
- chmod +x miniconda.sh
- ./miniconda.sh -b -p /home/travis/miniconda
- export PATH=/home/travis/miniconda/bin:$PATH
- ./miniconda.sh -b -p ${HOME}/miniconda
- export PATH=${HOME}/miniconda/bin:$PATH
- conda update --yes --quiet conda

install:
- install_prefix=~
- conda create -n testenv --yes pip python=${PYTHON_VERSION}
- source activate testenv
- conda install --yes scipy numpy matplotlib
- |
if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then
conda install --yes mpi4py openmpi
Collaborator:

how do I try your PR on mac? I tried this command to install and it's stuck for the last 15 minutes ...

Member Author:

For the most part these commands work for me, though I could very well have a different build environment. Could you send me any output you get after this?

Note, I had to do:

conda activate testenv
# fix on osx for coverals
pip install urllib3==1.24
pip install coverage coveralls

else
pip install mpi4py
fi
- pip install flake8 pytest pytest-cov
- pip install mpi4py mne
- pip install mne psutil joblib
- pip install coverage coveralls
- |
git clone https://github.com/neuronsimulator/nrn
cd nrn
./build.sh
./configure --with-nrnpython=python3 --without-iv --prefix=${install_prefix}
make -j4
make install -j4
cd src/nrnpython/
python3 setup.py install
- pip install NEURON
- |
cd $TRAVIS_BUILD_DIR
export PATH=$PATH:${install_prefix}/x86_64/bin
make
python setup.py develop

script:
- flake8 --count hnn_core
- py.test --cov=hnn_core hnn_core/tests/
Expand Down
2 changes: 1 addition & 1 deletion CONTRIBUTING.rst
@@ -35,7 +35,7 @@ Building the documentation
The documentation can be built using sphinx. For that, please additionally
install the following::

$ pip install matplotlib sphinx numpydoc sphinx-gallery sphinx_bootstrap_theme pillow
$ pip install matplotlib sphinx numpydoc sphinx-gallery sphinx_bootstrap_theme pillow mpi4py joblib psutil

You can build the documentation locally using the command::

81 changes: 77 additions & 4 deletions README.rst
@@ -28,7 +28,11 @@ Dependencies
* scipy
* numpy
* matplotlib
* joblib (optional for parallel processing)
* joblib (optional for running trials simultaneously)
* mpi4py (optional for running each trial in parallel across cores). Also depends on:

* openmpi or other mpi platform installed on system
* psutil

Installation
============
@@ -37,9 +41,7 @@ We recommend the `Anaconda Python distribution <https://www.continuum.io/downloa

$ conda install numpy matplotlib scipy

For joblib, you can do::

$ pip install joblib
For using more than one CPU core, see :ref:`Parallel backends` below.

Additionally, you would need Neuron which is available here: `https://neuron.yale.edu/neuron/ <https://neuron.yale.edu/neuron/>`_

@@ -64,6 +66,77 @@ To check if everything worked fine, you can do::

and it should not give any error messages.

.. _Parallel backends:

Parallel backends
=================

Two options are available for making use of multiple CPU cores. The first runs multiple trials in parallel with joblib; alternatively, MPI can be used to run each trial across multiple cores, reducing the runtime of a single trial.

Joblib
------

This is the default backend and will execute multiple trials at the same time, with each trial running on a separate core in "embarrassingly parallel" execution. Note that with only 1 trial, there will be no parallelism.
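The trial-level parallelism described above can be sketched with joblib directly. This is an illustrative stand-in only, not the backend's actual code; `_run_trial` is a hypothetical placeholder for simulating one trial:

```python
from joblib import Parallel, delayed

def _run_trial(trial_idx):
    # stand-in for simulating one trial of the network
    return trial_idx * trial_idx

# each trial is dispatched to its own worker process ("embarrassingly
# parallel"); with a single trial there is nothing to run concurrently
dpls = Parallel(n_jobs=2)(delayed(_run_trial)(i) for i in range(2))
print(dpls)  # -> [0, 1]
```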

**Dependencies**::

$ pip install joblib

**Usage**::

dpls = simulate_dipole(net, n_trials=2)

MPI
------

This backend will use MPI (Message Passing Interface) on the system to split neurons across CPU cores (processors) and reduce the simulation time as more cores are used.
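As a rough illustration of how the cells are divided: NEURON's ParallelContext assigns cell gids to ranks round-robin, so each MPI process integrates only its own subset of the network. The sketch below is illustrative, not the PR's actual code:

```python
def gids_for_rank(n_cells, rank, n_procs):
    # round-robin assignment: rank r simulates gids r, r + n_procs, ...
    return list(range(rank, n_cells, n_procs))

# with 10 cells split across 2 MPI processes, each rank owns half
print(gids_for_rank(10, 0, 2))  # -> [0, 2, 4, 6, 8]
print(gids_for_rank(10, 1, 2))  # -> [1, 3, 5, 7, 9]
```

Together the ranks cover every cell exactly once, which is why adding cores shortens the runtime of a single trial.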

**Linux Dependencies**::

$ sudo apt-get install libopenmpi-dev openmpi-bin
$ pip install mpi4py psutil

**MacOS Dependencies**::

$ conda install --yes openmpi mpi4py
$ pip install psutil

**MacOS Environment**::

$ export LD_LIBRARY_PATH=${CONDA_PREFIX}/lib

Alternatively, running the commands below will avoid the need to run the export command every time a new shell is opened::

cd ${CONDA_PREFIX}
mkdir -p etc/conda/activate.d etc/conda/deactivate.d
Collaborator:

This should be::

    - mkdir -p etc/conda/activate.d etc/conda/deactivate.d
    + mkdir -p /etc/conda/activate.d etc/conda/deactivate.d

I think?

Member Author:

The path is supposed to be ${CONDA_PREFIX}/etc/conda/activate.d

Collaborator:

indeed :-)

Member Author:

which is accomplished by the current code

echo "export OLD_LD_LIBRARY_PATH=\$LD_LIBRARY_PATH" >> etc/conda/activate.d/env_vars.sh
echo "export LD_LIBRARY_PATH=\$LD_LIBRARY_PATH:\${CONDA_PREFIX}/lib" >> etc/conda/activate.d/env_vars.sh
echo "export LD_LIBRARY_PATH=\$OLD_LD_LIBRARY_PATH" >> etc/conda/deactivate.d/env_vars.sh
echo "unset OLD_LD_LIBRARY_PATH" >> etc/conda/deactivate.d/env_vars.sh

**Test MPI**::

$ mpiexec -np 2 nrniv -mpi -python -c 'from neuron import h; from mpi4py import MPI; \
print("Hello from proc %d" % MPI.COMM_WORLD.Get_rank()); \
h.quit()'
numprocs=2
NEURON -- VERSION 7.7.2 7.7 (2b7985ba) 2019-06-20
Duke, Yale, and the BlueBrain Project -- Copyright 1984-2018
See http://neuron.yale.edu/neuron/credits

Hello from proc 0
Hello from proc 1

This verifies that MPI, NEURON, and Python are all working together.

**Usage**::

from hnn_core import MPIBackend

# set n_procs to the number of processors MPI can use (up to number of cores on system)
with MPIBackend(n_procs=2):
dpls = simulate_dipole(net, n_trials=1)

Bug reports
===========

9 changes: 9 additions & 0 deletions doc/api.rst
@@ -30,6 +30,15 @@ Simulation (:py:mod:`hnn_core`):
Params
read_params

.. currentmodule:: hnn_core.parallel_backends

.. autosummary::
:toctree: generated/

MPIBackend
JoblibBackend


Inputs and Outputs (:py:mod:`hnn_core`):

.. currentmodule:: hnn_core
12 changes: 9 additions & 3 deletions doc/whats_new.rst
@@ -17,11 +17,13 @@ Changelog

- Add ability to simulate multiple trials in parallel using joblibs, by `Mainak Jas`_ in `#44 <https://github.com/jonescompneurolab/hnn-core/pull/44>`_

- Rhythmic inputs can now be turned off by setting their conductance weights to 0 instead of setting their start times to exceed the simulation stop time, by `Ryan Thorpe`_ in `#105 <https://github.com/jonescompneurolab/hnn-core/pull/105>`_

- Reader for parameter files, by `Blake Caldwell`_ in `#80 <https://github.com/jonescompneurolab/hnn-core/pull/80>`_

- Add plotting of voltage at soma to inspect firing pattern of cells, by `Mainak Jas`_ in `#86 <https://github.com/jasmainak/hnn-core/pull/86>`_
- Add plotting of voltage at soma to inspect firing pattern of cells, by `Mainak Jas`_ in `#86 <https://github.com/jonescompneurolab/hnn-core/pull/86>`_

- Rhythmic inputs can now be turned off by setting their conductance weights to 0 instead of setting their start times to exceed the simulation stop time, by `Ryan Thorpe`_ in `#105 <https://github.com/jonescompneurolab/hnn-core/pull/105>`_
- Add ability to simulate a single trial in parallel across cores using MPI, by `Blake Caldwell`_ in `#79 <https://github.com/jonescompneurolab/hnn-core/pull/79>`_

Bug
~~~
@@ -35,10 +37,14 @@ Bug
API
~~~

- Make a context manager for Network class, by `Mainak Jas`_ and `Blake Caldwell`_ in `#86 <https://github.com/jasmainak/hnn-core/pull/86>`_
- Make a context manager for Network class, by `Mainak Jas`_ and `Blake Caldwell`_ in `#86 <https://github.com/jonescompneurolab/hnn-core/pull/86>`_

- Create Spikes class, add write methods and read functions for Spikes and Dipole classes, by `Ryan Thorpe`_ in `#96 <https://github.com/jonescompneurolab/hnn-core/pull/96>`_

- Remove `n_jobs` parameter for instantiating Network class, by `Blake Caldwell`_ in `#79 <https://github.com/jonescompneurolab/hnn-core/pull/79>`_

- Make a context manager for parallel backends (JoblibBackend, MPIBackend), by `Blake Caldwell`_ in `#79 <https://github.com/jonescompneurolab/hnn-core/pull/79>`_

.. _Mainak Jas: http://jasmainak.github.io/
.. _Blake Caldwell: https://github.com/blakecaldwell
.. _Ryan Thorpe: https://github.com/rythorpe
18 changes: 9 additions & 9 deletions examples/plot_firing_pattern.py
@@ -16,6 +16,7 @@

import hnn_core
from hnn_core import read_params, Network
from hnn_core.neuron import NeuronNetwork

hnn_core_root = op.join(op.dirname(hnn_core.__file__), '..')

@@ -28,11 +29,12 @@
# Now let's build the network
import matplotlib.pyplot as plt

with Network(params) as net:
net.build()
net = Network(params)
with NeuronNetwork(net) as neuron_network:
neuron_network.cells[0].plot_voltage()

# The cells are stored in the network object as a list
cells = net.cells
cells = neuron_network.cells
print(cells[:5])

# We have different kinds of cells with different cell IDs (gids)
@@ -41,15 +43,13 @@
print(cells[gid].name)

# We can plot the firing pattern of individual cells
net.cells[0].plot_voltage()
neuron_network.cells[0].plot_voltage()
plt.title('%s (gid=%d)' % (cells[0].name, gid))

###############################################################################
# Let's do this for the rest of the cell types with a new Network object
with Network(params) as net:
net.build()

# Let's do this for the rest of the cell types with a new NeuronNetwork object
with NeuronNetwork(net) as neuron_network:
fig, axes = plt.subplots(1, 2, sharey=True, figsize=(8, 4))
for gid, ax in zip([35, 170], axes):
net.cells[gid].plot_voltage(ax)
neuron_network.cells[gid].plot_voltage(ax)
ax.set_title('%s (gid=%d)' % (cells[gid].name, gid))
13 changes: 12 additions & 1 deletion examples/plot_simulate_evoked.py
@@ -9,6 +9,7 @@

# Authors: Mainak Jas <mainak.jas@telecom-paristech.fr>
# Sam Neymotin <samnemo@gmail.com>
# Blake Caldwell <blake_caldwell@brown.edu>

import os.path as op
import tempfile
@@ -63,6 +64,16 @@

params.update({'sync_evinput': True})
net_sync = Network(params)
dpls_sync = simulate_dipole(net_sync, n_jobs=1, n_trials=1)

###############################################################################
# To simulate the dipole, we will use the MPI backend. This will
# start the simulation across the number of processors (cores)
# specified by n_procs using MPI. The 'mpiexec' launcher is for
# openmpi, which must be installed on the system
from hnn_core import MPIBackend
Collaborator:

I think it's better to do:

from hnn_core.parallel import MPIBackend

but I'm fine pushing this to another PR

Member Author:

Then the user has to know about .parallel and structuring of the files.

Collaborator:

no strong feeling. But we shouldn't put everything in global namespace. We can document some of the stuff in API and through examples, so it's just copy-paste for users.

Member Author:

Okay, I've just been following your lead on what to name things.


with MPIBackend(n_procs=2, mpi_cmd='mpiexec'):
dpls_sync = simulate_dipole(net_sync, n_trials=1)

dpls_sync[0].plot()
net_sync.plot_input()
2 changes: 1 addition & 1 deletion examples/plot_simulate_somato.py
@@ -77,7 +77,7 @@
params = read_params(params_fname)

net = Network(params)
dpl = simulate_dipole(net)
dpl = simulate_dipole(net, n_trials=1)

import matplotlib.pyplot as plt
fig, axes = plt.subplots(2, 1, sharex=True, figsize=(6, 6))
5 changes: 1 addition & 4 deletions hnn_core/__init__.py
@@ -1,10 +1,7 @@
from .utils import load_custom_mechanisms

load_custom_mechanisms()
Collaborator:

could you try building the documentation? Does it complete without throwing any errors? Do the html files build? I am still struggling to install mpi4py so I can't check unfortunately.

Member Author:

Yeah, building the documentation worked for me.


from .dipole import simulate_dipole, read_dipole
from .feed import ExtFeed
from .params import Params, read_params
from .network import Network, Spikes, read_spikes
from .pyramidal import L2Pyr, L5Pyr
from .basket import L2Basket, L5Basket
from .parallel_backends import MPIBackend, JoblibBackend
4 changes: 2 additions & 2 deletions hnn_core/cell.py
@@ -361,9 +361,9 @@ def parconnect_from_src(self, gid_presyn, nc_dict, postsyn):
nc : instance of h.NetCon
A network connection object.
"""
from .parallel import pc
from .neuron import PC

nc = pc.gid_connect(gid_presyn, postsyn)
nc = PC.gid_connect(gid_presyn, postsyn)
# calculate distance between cell positions with pardistance()
d = self._pardistance(nc_dict['pos_src'])
# set props here