[MRG] Enable MPI parallelism with ParallelContext #79
Changes from 43 commits
@@ -28,7 +28,11 @@ Dependencies
 * scipy
 * numpy
 * matplotlib
-* joblib (optional for parallel processing)
+* joblib (optional for running trials simultaneously)
+* mpi4py (optional for running each trial in parallel across cores). Also depends on:
+
+  * openmpi or another MPI implementation installed on the system
+  * psutil

 Installation
 ============
@@ -37,9 +41,7 @@ We recommend the `Anaconda Python distribution <https://www.continuum.io/downloa

   $ conda install numpy matplotlib scipy

-For joblib, you can do::
-
-  $ pip install joblib
+For using more than one CPU core, see :ref:`Parallel backends` below.

 Additionally, you will need NEURON, which is available here: `https://neuron.yale.edu/neuron/ <https://neuron.yale.edu/neuron/>`_
@@ -64,6 +66,77 @@ To check if everything worked fine, you can do::

and it should not give any error messages.
.. _Parallel backends:

Parallel backends
=================

Two options are available for making use of multiple CPU cores. The first runs multiple trials in parallel with joblib. Alternatively, you can split each trial across multiple cores to reduce its runtime.
Joblib
------

This is the default backend. It executes multiple trials at the same time, with each trial running on a separate core in "embarrassingly parallel" fashion. Note that with only 1 trial, there will be no parallelism.

**Dependencies**::

    $ pip install joblib

**Usage**::

    dpls = simulate_dipole(net, n_trials=2)
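The pattern behind this backend is joblib's embarrassingly parallel map over independent trials. A minimal self-contained sketch of that pattern, using a toy stand-in function rather than the real ``simulate_dipole``:

```python
from joblib import Parallel, delayed


def run_trial(trial_idx):
    # Toy stand-in for one simulation trial; the real backend runs a
    # full dipole simulation here instead.
    return trial_idx ** 2


# Each trial is independent, so joblib can dispatch them to separate
# worker processes; results come back in trial order.
results = Parallel(n_jobs=2)(delayed(run_trial)(i) for i in range(4))
print(results)  # [0, 1, 4, 9]
```

With a single trial there is only one job to dispatch, which is why the note above says 1 trial gives no parallelism.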
MPI
---

This backend uses MPI (Message Passing Interface) to split the neurons in the network across CPU cores (processors), reducing simulation time as more cores are used.

**Linux Dependencies**::

    $ sudo apt-get install libopenmpi-dev openmpi-bin
    $ pip install mpi4py psutil
**MacOS Dependencies**::

    $ conda install --yes openmpi mpi4py
    $ pip install psutil

**MacOS Environment**::

    $ export LD_LIBRARY_PATH=${CONDA_PREFIX}/lib
Alternatively, running the commands below avoids needing to run the export command every time a new shell is opened::

    cd ${CONDA_PREFIX}
    mkdir -p etc/conda/activate.d etc/conda/deactivate.d
    echo "export OLD_LD_LIBRARY_PATH=\$LD_LIBRARY_PATH" >> etc/conda/activate.d/env_vars.sh
    echo "export LD_LIBRARY_PATH=\$LD_LIBRARY_PATH:\${CONDA_PREFIX}/lib" >> etc/conda/activate.d/env_vars.sh
    echo "export LD_LIBRARY_PATH=\$OLD_LD_LIBRARY_PATH" >> etc/conda/deactivate.d/env_vars.sh
    echo "unset OLD_LD_LIBRARY_PATH" >> etc/conda/deactivate.d/env_vars.sh

Review discussion (on the ``mkdir`` line):

- Comment: This should be: … I think?
- Reply: The path is supposed to be …
- Reply: indeed :-)
- Reply: … which is accomplished by the current code.
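The echo commands above work because conda sources every ``*.sh`` file in ``etc/conda/activate.d`` on ``conda activate`` and in ``etc/conda/deactivate.d`` on deactivation. As a sketch of the resulting layout, here is a hedged Python equivalent that writes the same hook contents into a scratch directory standing in for ``${CONDA_PREFIX}`` (it deliberately does not touch a real conda environment):

```python
import os
import tempfile

# Scratch directory standing in for ${CONDA_PREFIX} (assumption: we only
# want to demonstrate the file layout, not modify a real environment).
prefix = tempfile.mkdtemp()
activate_d = os.path.join(prefix, "etc", "conda", "activate.d")
deactivate_d = os.path.join(prefix, "etc", "conda", "deactivate.d")
os.makedirs(activate_d)
os.makedirs(deactivate_d)

# On activation: remember the old LD_LIBRARY_PATH, then append the
# environment's lib directory.
with open(os.path.join(activate_d, "env_vars.sh"), "w") as f:
    f.write("export OLD_LD_LIBRARY_PATH=$LD_LIBRARY_PATH\n")
    f.write("export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${CONDA_PREFIX}/lib\n")

# On deactivation: restore the saved value.
with open(os.path.join(deactivate_d, "env_vars.sh"), "w") as f:
    f.write("export LD_LIBRARY_PATH=$OLD_LD_LIBRARY_PATH\n")
    f.write("unset OLD_LD_LIBRARY_PATH\n")
```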
**Test MPI**::

    $ mpiexec -np 2 nrniv -mpi -python -c 'from neuron import h; from mpi4py import MPI; \
                                           print("Hello from proc %d" % MPI.COMM_WORLD.Get_rank()); \
                                           h.quit()'
    numprocs=2
    NEURON -- VERSION 7.7.2 7.7 (2b7985ba) 2019-06-20
    Duke, Yale, and the BlueBrain Project -- Copyright 1984-2018
    See http://neuron.yale.edu/neuron/credits

    Hello from proc 0
    Hello from proc 1

This verifies that MPI, NEURON, and Python are all working together.
**Usage**::

    from hnn_core import MPIBackend

    # set n_procs to the number of processors MPI can use
    # (up to the number of cores on the system)
    with MPIBackend(n_procs=2):
        dpls = simulate_dipole(net, n_trials=1)
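``n_procs`` should not exceed the number of cores on the machine. A hedged sketch for picking it automatically, using psutil's physical-core count when available and falling back to ``os.cpu_count()`` (the cap of 2 here is just this example's choice, not anything the library requires):

```python
import os

try:
    # psutil is already listed above as an MPI-backend dependency
    import psutil
    n_cores = psutil.cpu_count(logical=False) or os.cpu_count()
except ImportError:
    n_cores = os.cpu_count()

# Never ask MPI for more processes than there are cores.
n_procs = min(2, n_cores)
print(n_procs)
```

With this in hand, ``with MPIBackend(n_procs=n_procs):`` behaves like the hard-coded example above but adapts to smaller machines.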
Bug reports
===========
----

@@ -9,6 +9,7 @@
 # Authors: Mainak Jas <mainak.jas@telecom-paristech.fr>
 #          Sam Neymotin <samnemo@gmail.com>
+#          Blake Caldwell <blake_caldwell@brown.edu>

 import os.path as op
 import tempfile
@@ -63,6 +64,16 @@

params.update({'sync_evinput': True})
net_sync = Network(params)
dpls_sync = simulate_dipole(net_sync, n_jobs=1, n_trials=1)

###############################################################################
# To simulate the dipole, we will use the MPI backend. This will start the
# simulation across the number of processors (cores) specified by n_procs
# using MPI. The 'mpiexec' launcher is for openmpi, which must be installed
# on the system.
from hnn_core import MPIBackend

with MPIBackend(n_procs=2, mpi_cmd='mpiexec'):
    dpls_sync = simulate_dipole(net_sync, n_trials=1)

dpls_sync[0].plot()
net_sync.plot_input()

Review discussion (on the ``MPIBackend`` import):

- Comment: I think it's better to do ``from hnn_core.parallel import MPIBackend``, but I'm fine pushing this to another PR.
- Reply: Then the user has to know about …
- Comment: No strong feeling, but we shouldn't put everything in the global namespace. We can document some of the stuff in the API and through examples, so it's just copy-paste for users.
- Reply: Okay, I've just been following your lead on what to name things.
----

@@ -1,10 +1,7 @@
-from .utils import load_custom_mechanisms
-
-load_custom_mechanisms()
-
 from .dipole import simulate_dipole, read_dipole
 from .feed import ExtFeed
 from .params import Params, read_params
 from .network import Network, Spikes, read_spikes
 from .pyramidal import L2Pyr, L5Pyr
 from .basket import L2Basket, L5Basket
+from .parallel_backends import MPIBackend, JoblibBackend

Review discussion:

- Comment: Could you try building the documentation? Does it complete without throwing any errors? Do the html files build? I am still struggling to install mpi4py, so I can't check, unfortunately.
- Reply: Yeah, building the documentation worked for me.
Review discussion:

- Comment: How do I try your PR on mac? I tried this command to install and it's stuck for the last 15 minutes …
- Reply: For the most part these commands work for me. Very well could have a different build environment though. Could you send me any output you get after this? Note, I had to do: …