
Add pynest version of hpc benchmark #867

Merged

merged 11 commits into nest:master from jakobj:enh/hpc_benchmark_pynest on Mar 23, 2018

Conversation

5 participants
@jakobj (Contributor) commented Dec 10, 2017

This PR adds a pynest version of hpc_benchmark.sli to enable systematic benchmarking via the Python interface as well.
I tried to stay as close as possible to the original implementation.
I propose @mschmidt87 and @heplesser as reviewers.

@heplesser (Contributor) left a comment

@jakobj Thanks! Please see my comments and the problems raised by Travis.


import numpy as np
import os
# from scipy.special import lambertw

@heplesser (Contributor), Dec 10, 2017:

Please remove if not needed.

'''
Parameter section
define all relevant parameters: changes should be made here

@heplesser (Contributor), Dec 10, 2017:

Define

'presimtime': 50., # simulation time until reaching equilibrium
'dt': 0.1, # simulation step
'record_spikes': True, # switch to record spikes of excitatory neurons to file
'path_name': './', # path where all files will have to be written

@heplesser (Contributor), Dec 10, 2017:

I think the default path name (CWD) should be the empty string, since the expression for "this directory" may differ from `.` on non-Unix systems. The slash should definitely not be included.
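As a minimal sketch of this suggestion (the file name is illustrative, not from the PR): an empty default plus an explicit join keeps the path portable.

```python
import os

# An empty path_name means "current directory" on every platform;
# os.path.join supplies the separator, so no slash is hard-coded.
path_name = ''
fn = os.path.join(path_name, 'logfile_0.dat')
print(fn)  # 'logfile_0.dat', i.e. relative to the CWD
```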



def convert_synapse_weight(tau_m, tau_syn, C_m):
'''Computes conversion factor for synapse weight from mV to pA. This

@heplesser (Contributor), Dec 10, 2017:

Start docstrings with a brief single-line description, then details on a new line. Here, break after "pA."

# evoked post-synaptic potential is 1.700759 ms.
# For alpha-shaped postynaptic currents, the rise time of the post-synaptic
# potential follows from the synaptic rise time as

@heplesser (Contributor), Dec 10, 2017:

Add # here to make clear that this is one long comment; same on line 125.

while i < len(nodes):
    if nest.GetStatus([nodes[i]], 'local')[0]:
        yield nodes[i]
        i += nvp

@heplesser (Contributor), Dec 10, 2017:

This is only correct if we have nodes with proxies, isn't it? That definitely needs to be documented in the docstring.

@jakobj (Author, Contributor), Dec 11, 2017:

I think it should also work for nodes without proxies. The minimal gid separation is always nvp, isn't it? And in case a node is not local, we just increment by one until we find a local node. At least that was the logic I had in mind. Updated the docstring.

@jakobj (Author, Contributor), Dec 11, 2017:

I see now. You are right. This will only work for nodes with proxies. The minimal step size for nodes without proxies could be one.
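To make the distinction concrete, here is a pure-Python sketch of the stride logic under discussion, with an `is_local` predicate standing in for `nest.GetStatus([gid], 'local')[0]` (function and variable names are illustrative). It is only valid for nodes with proxies, where local GIDs recur with period `nvp` once the first one is found:

```python
def get_local_nodes(nodes, nvp, is_local):
    """Yield the local nodes from `nodes`.

    Only correct for nodes WITH proxies: those are distributed
    round-robin over virtual processes, so after the first local node
    every nvp-th GID is local again. Nodes without proxies may sit at
    arbitrary positions, so the stride shortcut could skip some.
    """
    i = 0
    while i < len(nodes):
        if is_local(nodes[i]):
            yield nodes[i]
            i += nvp  # next local node is exactly nvp GIDs away
        else:
            i += 1    # still searching for the first local node

# Toy example: 4 virtual processes; this "rank" owns GIDs with gid % 4 == 2
nodes = list(range(1, 13))
local = list(get_local_nodes(nodes, 4, lambda gid: gid % 4 == 2))
print(local)  # [2, 6, 10]
```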

@mschmidt87 left a comment

Looks generally very good. I just added minor formatting and documentation requests.

This benchmark was originally developed for very large-scale simulations on
supercomputers with more than 1 million neurons in the network and
11.250 incoming synapses per neuron. For such large networks, input to a
single neuron will be little correlated and network activity will remain

@mschmidt87, Dec 11, 2017:

Correlated across neurons, right?
Perhaps change to "synaptic inputs will have low correlations across neurons"

@jakobj (Author, Contributor), Dec 11, 2017:

Yes, I think this means across neurons. This is the text from hpc_benchmark.sli. Should I also change it there?

@mschmidt87, Dec 11, 2017:

I would say, yes, though it is a minor point.

supercomputers with more than 1 million neurons in the network and
11.250 incoming synapses per neuron. For such large networks, input to a
single neuron will be little correlated and network activity will remain
stable for long periods of time.

@mschmidt87, Dec 11, 2017:

stable over long periods of time.

@jakobj (Author, Contributor), Dec 11, 2017:

(see above)


def convert_synapse_weight(tau_m, tau_syn, C_m):
'''Computes conversion factor for synapse weight from mV to pA. This
function is specific to the used neuron model Leaky

@mschmidt87, Dec 11, 2017:

to the used neuron model which is a leaky ...

'''


def build_network(logger):

@mschmidt87, Dec 11, 2017:

Add a brief description of the function incl. description of function argument here, something like

'''Build the network including setting of simulation and neuron parameters, creation of neurons and connections.

Parameters
----------
logger : Instance of Logger class
'''


nest.message(M_INFO, 'build_network', 'Creating inhibitory population.')
I_neurons = nest.Create('iaf_psc_alpha', NI) # subnet gets own gid

@mschmidt87, Dec 11, 2017:

What is the purpose of this comment? You are not creating a subnet here, are you?

logger.log(str(memory_thisjob()) + ' # virt_mem_after_edges')


def run_simulation():

@mschmidt87, Dec 11, 2017:

Same as above: Please add a brief description of the function.

def compute_rate():
'''compute approximation of average firing rate based on number of
local nodes, number of local spikes and total time
'''

@mschmidt87, Dec 11, 2017:

There should be one common format for docstrings of functions across the script.
Either like this:
'''...
'''

or
'''
...
'''

# convert rank to string, prepend 0 if necessary to make
# numbers equally wide for all ranks
rank = '{:0' + str(len(str(self.max_rank_log))) + '}'
fn = file_name + '_' + rank.format(nest.Rank()) + '.dat'

@mschmidt87, Dec 11, 2017:

It would be better to use proper string formatting here. For instance:

fn = "{}_{}.dat".format(file_name, rank.format(nest.Rank()))

The same for the log function below.
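For illustration, the two-step construction (derive the pad width from the largest rank, then a single format call) could look like this; `max_rank` and `rank` are hypothetical stand-ins for `self.max_rank_log` and `nest.Rank()`:

```python
file_name = 'logfile'
max_rank = 1024   # stand-in for self.max_rank_log
rank = 7          # stand-in for nest.Rank()

width = len(str(max_rank))  # widest rank number needs 4 digits here
# Nested format spec zero-pads every rank to the same width
fn = '{}_{:0{}d}.dat'.format(file_name, rank, width)
print(fn)  # logfile_0007.dat
```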

jakobj added some commits Dec 11, 2017

Change logger class to context manager in hpc benchmark
Since the __exit__ method is automatically called upon destruction of
the logger, partial log files are also written for aborted simulations.
@jakobj (Author, Contributor) commented Dec 11, 2017

Thank you for the quick review. I've now implemented almost all suggested changes. I've also changed the logger to a context manager, to make sure files are properly closed even for aborted simulations.
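The context-manager idea can be sketched as follows (class and method bodies are illustrative; the actual Logger in the PR differs). Because `__exit__` also runs when an exception propagates, each rank's partial log reaches disk even when the simulation aborts:

```python
import os
import tempfile

class Logger(object):
    """Minimal per-rank file logger usable as a context manager (sketch)."""

    def __init__(self, file_name):
        self.file_name = file_name
        self.f = None

    def __enter__(self):
        self.f = open(self.file_name, 'w')
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Called on normal exit AND when an exception aborts the block,
        # so partial logs still reach disk for aborted simulations.
        self.f.close()
        return False  # do not suppress the exception

    def log(self, value):
        self.f.write(str(value) + '\n')

# Demo: write one log line in the style hpc_benchmark uses for memory logging
path = os.path.join(tempfile.mkdtemp(), 'logfile_00.dat')
with Logger(path) as logger:
    logger.log('12345 # virt_mem_0')
with open(path) as f:
    content = f.read()
```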

@jakobj (Author, Contributor) commented Dec 20, 2017

Fixed the formatting and adapted doc according to @mschmidt87's suggestions.

@jakobj jakobj force-pushed the jakobj:enh/hpc_benchmark_pynest branch from ae9dbe9 to 4723333 Dec 20, 2017

@terhorstd terhorstd added this to the NEST 2.16 milestone Mar 5, 2018


params = {
'nvp': 1, # total number of virtual processes
'scale': 1., # scaling factor of the network size,

@heplesser (Contributor), Mar 6, 2018:

Fix indentation of comment, remove comma at end of comment

},

'randomize_Vm': True,
# Note that Kunkel et al. (2014) report different values. The values

@heplesser (Contributor), Mar 6, 2018:

On the following lines, code and comment are mixed in a strange way. Shouldn't all comments come first?

# b = 1.0 / tau_syn - 1.0 / tau_m
#
# Inverting this equation numerically leads to tau_syn =
# 0.32582722403722841 ms, as specified in model_params below.

@heplesser (Contributor), Mar 6, 2018:

@mschmidt87 I am a bit confused here, and since you inserted the corresponding comments/code in the SLI version of the benchmark a while ago, I hope you can clarify what is happening here. My first concern is that in the function PSP_rise_time() above a line seems to be missing after the two lines defining a and b---maybe the line defining t_max on line 111 above? As it is written now, there is nothing really to invert.

The other question is why we want to achieve the rise time of 1.700759 ms?

@mschmidt87, Mar 8, 2018:

I don't quite understand your first question. Which script are you referring to? Is your question whether there is a line missing in hpc_benchmark.sli or in hpc_benchmark.py? As far as I can see, this part is pretty much a one-to-one copy of the SLI script, so there doesn't seem to be a line missing in either of the scripts.
t_max is defined in line 98 of hpc_benchmark.sli.

The function PSP_rise_time, which is commented out here, is defined in hpc_benchmark.sli as well, but not used throughout the script. It rather serves as an explanation of how to arrive at the hard-coded value of tau_syn (strangely, tau_syn is defined twice in hpc_benchmark.sli).

Regarding your second question: I am not sure, to be honest. I assume that this is just a value that is kind of realistic (for instance, the neurons in the microcircuit model have a rise time of 1.6 ms) and leads to the desired firing rate in the network. Perhaps @suku248 knows more about this?

@heplesser (Contributor), Mar 8, 2018:

@mschmidt87 I am missing a line in both the sli and the py version of hpc_benchmark. The PSP_rise_time function, as far as I can see, only sets the local variables a and b, but does not compute any rise time and does not return any value. I assume that the rise time the function is supposed to compute is the t_max expression in function ConvertSynapseWeight. Probably someone planned to split that function up, forgot to copy a line, and then never quite followed through. If you agree with this interpretation, we could change both versions consistently.
The duplicate definition of /tau_syn looks like a code duplication introduced by a merge.

@mschmidt87, Mar 8, 2018:

Yes, you are absolutely right, I agree with your interpretation; sorry for being a bit slow-witted here. Line 111 should be moved inside PSP_rise_time, the code then commented back in, and lines 108 and 111 can then be replaced by a call to PSP_rise_time (b is still needed in the rest of the function). Also, t_max can be renamed to t_rise, or at least we should settle on either "time of maximum" or "rise time".
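For reference, the quantity under discussion can be checked numerically without any Lambert-W machinery. The sketch below (pure Python; function names are mine, not from the script) evaluates the closed-form PSP of an alpha-shaped PSC in a leaky integrate-and-fire membrane, locates its maximum by ternary search, and inverts the rise-time relation by bisection, assuming the benchmark's tau_m = 10 ms and C_m = 250 pF:

```python
import math

def psp(t, tau_m, tau_syn, C_m):
    """PSP at time t (ms) for a unit-amplitude alpha-shaped PSC."""
    b = 1.0 / tau_syn - 1.0 / tau_m   # requires tau_syn < tau_m
    return (math.e / (tau_syn * C_m * b)) * (
        (math.exp(-t / tau_m) - math.exp(-t / tau_syn)) / b
        - t * math.exp(-t / tau_syn))

def psp_rise_time(tau_m, tau_syn, C_m=250.0):
    """Time of the PSP maximum, via ternary search (the PSP is unimodal)."""
    lo, hi = 0.0, 10.0 * tau_m
    for _ in range(200):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if psp(m1, tau_m, tau_syn, C_m) < psp(m2, tau_m, tau_syn, C_m):
            lo = m1
        else:
            hi = m2
    return 0.5 * (lo + hi)

def tau_syn_for_rise_time(target, tau_m, C_m=250.0):
    """Invert the rise-time relation: rise time grows with tau_syn."""
    lo, hi = 1e-3, 0.9 * tau_m
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if psp_rise_time(tau_m, mid, C_m) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# tau_syn = 0.32582722403722841 ms yields a rise time of ~1.700759 ms...
print(psp_rise_time(10.0, 0.32582722403722841))
# ...and inverting 1.700759 ms recovers that tau_syn
print(tau_syn_for_rise_time(1.700759, 10.0))
```

This reproduces the hard-coded tau_syn from model_params, supporting the interpretation that the t_max expression in ConvertSynapseWeight is the body that PSP_rise_time was meant to contain.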

heplesser and others added some commits Mar 9, 2018

Modifications to be closer to the SLI implementation
Plus print function for Python 2.
Merge pull request #46 from heplesser/jakobj-hpc_benchmark_pynest
Polishing py version of hpc_benchmark.
@jakobj (Author, Contributor) commented Mar 23, 2018

With the PRs of @hakonsbm and @heplesser merged into my branch, this should be ready to go to master. Please have another look.

@heplesser (Contributor) commented Mar 23, 2018

@jakobj There seem to be a few PEP8 issues left, could you take a look?

@jakobj (Author, Contributor) commented Mar 23, 2018

Done

@heplesser heplesser merged commit 95fc302 into nest:master Mar 23, 2018

1 check passed: continuous-integration/travis-ci/pr (The Travis CI build passed)