
Record conductance from aeif_cond_beta_multisynapse model #812

Merged
merged 27 commits into nest:master on Mar 14, 2018

Conversation

@lucianopaz
Contributor

lucianopaz commented Aug 22, 2017

This pull request fixes issue #800. It implements the DynamicRecordablesMap and DynamicUniversalDataLogger classes. The only difference from the non-dynamic classes is that the DataAccessFct is a pointer to a functor instance. This functor is a public class named DataAccessFunctor, nested inside the HostNode, whose operator() returns a double. Each DataAccessFunctor holds an index, set at construction, that tells it where to find its recordable in the HostNode state.

The current implementation only covers the aeif_cond_beta_multisynapse model, but it could be extended to all multisynapse models. In the model's source, the recordablesMap is no longer static: it is updated when set_status is called on the HostNode, and it includes the receptor-port conductance channels "g_1", "g_2", ..., "g_n_receptors".
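
To make the design concrete, here is a minimal C++ sketch of the idea, with simplified and partly invented member names (state_, first_g_, update_recordables); it illustrates the mechanism and is not the PR's actual code:

#include <cstddef>
#include <map>
#include <string>
#include <vector>

class HostNode
{
public:
  // Functor that reads one state element; the index is fixed at
  // construction time.
  class DataAccessFunctor
  {
  public:
    DataAccessFunctor( HostNode& host, std::size_t elem )
      : host_( host )
      , elem_( elem )
    {
    }

    double operator()() const
    {
      return host_.state_[ elem_ ];
    }

  private:
    HostNode& host_;
    std::size_t elem_;
  };

  // Called when the node's status changes: register one "g_<i>" entry
  // per receptor port in this neuron's own recordables map.
  void
  update_recordables( std::size_t n_receptors )
  {
    recordables_.clear();
    for ( std::size_t i = 0; i < n_receptors; ++i )
    {
      recordables_.insert( std::make_pair( "g_" + std::to_string( i + 1 ),
        DataAccessFunctor( *this, first_g_ + i ) ) );
    }
  }

private:
  std::vector< double > state_;
  const std::size_t first_g_ = 2; // hypothetical offset of g_1 in the state
  std::map< std::string, DataAccessFunctor > recordables_;
};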

The unittests/test_aeif_cond_beta_multisynapse.sli test was also modified to check that the recordables were responding properly to state changes, and that the g channels had beta- or alpha-function dynamics.

lucianopaz added some commits Aug 7, 2017

First commit. Implemented most of the code but having problems with universal_data_logger.h and universal_data_logger_impl.h. They are not compatible with previously implemented models because of failures in template DynamicRecordablesMap substitution. In particular in node_access_dynamic_
Fixed DynamicRecordablesMap, DynamicUniversalLogger, aeif_cond_beta_multisynapse to be able to record from runtime-set synaptic conductances. Entire source compiles without warnings.
Merge branch 'iss800'
This branch has the full working implementation of the solution for issue #800. The fork is now ready to be pulled in.
Moved tests for issue #800 from regressiontests/issue-800.sli to unittests/aeif_cond_beta_multisynapse.sli
@heplesser

Contributor

heplesser commented Sep 4, 2017

@lucianopaz Thank you for your pull request! I have had a first look and it looks quite promising. Have you done any benchmarks to see whether there is a speed difference between the dynamic and plain Map/Logger? The dynamic approach needs more memory, since each neuron now has to have its own recordables map. But compared to the size of the ring buffer for each synapse, this is rather negligible.

I will get back to you with detailed comments soon. In the meantime, could you send me a signed Contributor License Agreement (http://nest.github.io/nest-simulator/index) by email?

@lucianopaz

Contributor Author

lucianopaz commented Sep 6, 2017

@heplesser I have sent the filled-in contributor license agreement to your email. Sorry, I did not run any benchmarks for the pull request. I will do some now and post the results here.

@lucianopaz

Contributor Author

lucianopaz commented Sep 7, 2017

I've run some benchmarks to test how this PR affects time performance and memory usage. I'm uploading a zip with the code I used to run the benchmarks and the results I got.

I compared this PR's performance against the 2.12.0 stable release. The experiment constructed a population of N aeif_cond_beta_multisynapse neurons with varying numbers of n_receptors. All the neurons were independent (i.e. not connected to each other) and received the same Poisson spike train, broadcast from a parrot neuron to different receptor ports. The receptor ports were drawn from a uniform integer distribution. I also connected a single multimeter to all neurons to record V_m and w (to be fair to the non-PR version, which cannot record from any g channel). The multimeter did not write the recordings to disk. I then simulated the network for a total time T.

I compared the build time, the simulation time and the total wall-clock time for different values of N and n_receptors. T was always 1000 ms and I used the same global RNG seed for all simulations.
For memory usage, I measured the peak Resident Set Size (RSS) and Virtual Memory Size (VSZ) using ps. Admittedly, these memory measurements could be improved. I did a single run for each parameter set to get quick results, but I will rerun the benchmarks averaging over 3 iterations and post those results here later.

I ran all experiments on Ubuntu 16.04 LTS 64-bit, kernel 4.10.0-33-generic, an Intel Core i7-5820K CPU at 3.30 GHz × 12, 31 GB RAM and 189 MB swap.

[Figure: benchmark1, plots of the single-run benchmark results]

My results show that simulation time is barely affected (a maximum difference of about 3%), which also leads to an almost negligible difference in total wall-clock time. However, build time is greatly affected. The PR's build time increases much more rapidly with N and n_receptors than the current release of NEST, leading to almost 14-times-slower builds for large networks with large numbers of receptors.

As for memory usage, both RSS and VSZ increase with the PR's code. However, the maximum increase is ~3% in RSS and ~0.13% in VSZ, so it does not seem to make a big impact.


Update

I ran the benchmarks iterating each condition 3 times with the same seed and got essentially the same results as before. I'm posting the plots from these tests below.

[Figure: benchmark2, plots of the 3-iteration benchmark results]

@heplesser

Contributor

heplesser commented Oct 30, 2017

@janhahne Would you be able to review this in the near future?

@janhahne

Contributor

janhahne commented Oct 30, 2017

@heplesser I can take a look on Thursday. I am, however, not particularly familiar with the code that is changed here.

@janhahne
Contributor

janhahne left a comment

@lucianopaz The PR looks fine to me. I am just a bit confused about the changes in libnestutil/ and to test_siegert_neuron.py. Those seem to be unrelated formatting issues, right? For the libnestutil/ files the changes look fine (previously there were lines with more than 80 characters. @heplesser Is there no style check for those files?), but for the test_siegert_neuron.py they should probably be reverted.

Otherwise the code is well written and the concept looks fine to me. I am, however, not that familiar with this part of the code, so if there are reasons why these changes might be problematic, I may not be able to see them.

@lucianopaz

Contributor Author

lucianopaz commented Nov 2, 2017

@janhahne Yes, the changes in libnestutil and test_siegert_neuron.py were just unrelated formatting issues that made the automatic style checks fail on my machine.

@janhahne
Contributor

janhahne left a comment

@lucianopaz Thank you for the clarification! I left a comment regarding the formatting of test_siegert_neuron.py, otherwise I am all happy 👍

@@ -57,7 +58,6 @@ def setUp(self):
nest.ResetKernel()
nest.SetKernelStatus(
{'resolution': self.dt, 'use_wfr': False, 'print_time': True})

@janhahne

janhahne Nov 2, 2017

Contributor

To me it seems odd to remove this line instead of (old) line 62. The same is true for line 75, where probably line 77 should be removed instead. Could you edit this manually and see if it still passes the style check?

Removed empty lines after set-up comments
Following @janhahne's suggestion, the empty lines after the "set up driven iaf" and "siegert neuron" comments were removed, not the ones before.
@lucianopaz

Contributor Author

lucianopaz commented Nov 6, 2017

@janhahne I ran the style check for the changed test_siegert_neuron.py file, and it passed successfully. For the initial file version in this PR, I had used pep8ify to make it comply with PEP 8 styling, but the automatic tool is clearly not perfect.

@heplesser
Contributor

heplesser left a comment

@lucianopaz Your code looks correct to me, but in several places you seem to repeat a lot of code where inheritance or templatization could have kept the code more compact. I have marked some locations in the text.

/**
* Here is where we must update the recordablesMap_ if new receptors
* are added!
*/

@heplesser

heplesser Nov 30, 2017

Contributor

Should be a normal comment, not a Doxygen comment.

@lucianopaz

lucianopaz Dec 4, 2017

Author Contributor

Done in eb41277

}
}
else if ( ptmp.E_rev.size()
< P_.E_rev.size() ) // Number of receptors decreased

@heplesser

heplesser Nov 30, 2017

Contributor

It would be better to have the comparison on one line and the comment on another.

@lucianopaz

lucianopaz Dec 4, 2017

Author Contributor

Done in eb41277

@@ -408,3 +408,12 @@ nest::NumericalInstability::message() const
<< "updating " << model_ << ".";
return msg.str();
}

std::string
nest::NoEntryToMap::message() const

@heplesser

heplesser Nov 30, 2017

Contributor

The name of the exception seems misleading. Isn't this a KeyError?

@lucianopaz

lucianopaz Dec 4, 2017

Author Contributor

Changed to KeyError and made the error message more generic in eb41277

nest::NoEntryToMap::message() const
{
std::ostringstream msg;
msg << "DynamicRecordablesMap::erase( const Name& n ): "

@heplesser

heplesser Nov 30, 2017

Contributor

With this text, this exception can only be thrown from DynamicRecordablesMap::erase(). It is usually not a good idea to constrain exceptions in this way. For greater flexibility, the function throwing the exception can pass its name to the exception.

@lucianopaz

lucianopaz Dec 4, 2017

Author Contributor

Changed to KeyError and made the error message more generic in eb41277
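
For illustration, a hedged sketch of the suggested pattern, in which the throwing site passes its own name to a generic key-lookup exception; the class and its signature here are hypothetical stand-ins, not the actual NEST KeyError:

#include <stdexcept>
#include <string>

// Hypothetical exception; the real nest::KeyError signature may differ.
class KeyError : public std::runtime_error
{
public:
  KeyError( const std::string& key,
    const std::string& map_type,
    const std::string& map_op )
    : std::runtime_error( "Key '" + key + "' not found in " + map_type
        + " during call to " + map_op + "." )
  {
  }
};

// A throwing site then identifies itself, so the message is not tied
// to any single function:
//   throw KeyError( name, "DynamicRecordablesMap", "erase" );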

/**
* Universal data-logging plug-in for neuron models.
*
* This class provides logging of universal data such as

@heplesser

heplesser Nov 30, 2017

Contributor

You should explain early on the difference between the normal and the dynamic logger.

@lucianopaz

lucianopaz Dec 7, 2017

Author Contributor

Done in 81e1178

@@ -454,9 +500,38 @@ aeif_cond_beta_multisynapse::set_status( const DictionaryDatum& d )
// consistent.
Archiving_Node::set_status( d );

/**

@heplesser

heplesser Nov 30, 2017

Contributor

This set_status() is so large it should not be inlined and should be moved to the cpp-file.

@lucianopaz

lucianopaz Dec 4, 2017

Author Contributor

Done in eb41277


// Utility function that inserts the synaptic conductances to the
// recordables map
inline void insert_conductance_recordables( size_t first = 0 );

@heplesser

heplesser Nov 30, 2017

Contributor

Remove inline! The function's operation is too complex to inline; plus, modern compilers are quite good at reading our code.

@lucianopaz

lucianopaz Dec 4, 2017

Author Contributor

Done in eb41277

@@ -208,14 +209,62 @@ class aeif_cond_beta_multisynapse : public Archiving_Node
void get_status( DictionaryDatum& ) const;
void set_status( const DictionaryDatum& );

static inline Name

@heplesser

heplesser Nov 30, 2017

Contributor

Do not inline, make private and move implementation to cpp-file.

@lucianopaz

lucianopaz Dec 4, 2017

Author Contributor

Done in eb41277

};

//! Class that reads out state vector elements, used by UniversalDataLogger
class DataAccessFunctor

@heplesser

heplesser Nov 30, 2017

Contributor

Does this class need to be public? I am in doubt if it is sensible to declare this class here, useful only for aeif_cond_beta_multisynapse. Could one use a template function instead? That would avoid writing a lot of boilerplate code in models.

@lucianopaz

lucianopaz Dec 4, 2017

Author Contributor

You're right. DataAccessFunctor should be a class template. I moved its implementation to recordables_map.h, because it is used by the DynamicRecordablesMap class template.
I also had to change the way DataAccessFunctor instances are created by aeif_cond_beta_multisynapse, and how the DataAccessFunctor queries the private state of the HostNode. Now aeif_cond_beta_multisynapse has two new functions, get_data_access_functor and get_state_element. The latter returns the value of the recorded state element. The former creates a DataAccessFunctor<aeif_cond_beta_multisynapse> that is intended as the supplier for the DynamicRecordablesMap.
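
A rough sketch of the resulting shape; get_data_access_functor and get_state_element are the names given above, while the bodies, the stand-in model class, and the state-vector indexing are illustrative only:

#include <cstddef>
#include <vector>

// Generic functor, now living in recordables_map.h, templated on the
// host node type.
template < typename HostNode >
class DataAccessFunctor
{
public:
  DataAccessFunctor( HostNode& n, std::size_t elem )
    : parent_( &n )
    , elem_( elem )
  {
  }

  double operator()() const
  {
    return parent_->get_state_element( elem_ );
  }

private:
  HostNode* parent_;
  std::size_t elem_;
};

// Minimal stand-in for the model, showing the two new functions.
class aeif_cond_beta_multisynapse
{
public:
  double
  get_state_element( std::size_t elem ) const
  {
    return y_[ elem ]; // y_ stands in for the real state vector
  }

  DataAccessFunctor< aeif_cond_beta_multisynapse >
  get_data_access_functor( std::size_t elem )
  {
    return DataAccessFunctor< aeif_cond_beta_multisynapse >( *this, elem );
  }

private:
  std::vector< double > y_;
};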

};

void
set_parent( aeif_cond_beta_multisynapse* parent )

@heplesser

heplesser Nov 30, 2017

Contributor

Drop this method and put the single statement into all places calling this function. And if it were to continue to exist, it ought to be private!

@lucianopaz

lucianopaz Dec 4, 2017

Author Contributor

Done in eb41277

Changes to aeif_cond_beta_multisynapse, exceptions and recordables_map. The most important change was to remove the DataAccessFunctor from the neuron model and put its implementation as a class template in recordables_map. Now all the DataAccessFunctor logic is located in the same place as the DynamicRecordablesMap class that handles it. The rest of the changes are based on @heplesser's comments.

lucianopaz added some commits Dec 7, 2017

Added a clearer explanation of the difference between the UniversalDataLogger and DynamicUniversalDataLogger classes.

@terhorstd terhorstd added this to the NEST 2.16 milestone Mar 5, 2018

@terhorstd terhorstd requested a review from jougs Mar 5, 2018

@jougs
Contributor

jougs left a comment

Can you please revert the formatting changes to the following files? They are external files which we just ship for convenience and want to keep in the original format.

  • libnestutil/compose.hpp
  • libnestutil/hashtable-common.h
  • libnestutil/type_traits.h
Reverted formatting of external files: libnestutil/compose.hpp, libnestutil/hashtable-common.h, libnestutil/type_traits.h
@heplesser

Contributor

heplesser commented Mar 8, 2018

@lucianopaz Your code looks very nice now. Apologies for the late reply. I have one additional request though: could you convert the remaining multisynapse models to the Dynamic logger as well? This would ensure a consistent user experience. With that in place, we would be happy to include the dynamic logger in the NEST 2.16 release.

@lucianopaz

Contributor Author

lucianopaz commented Mar 8, 2018

@heplesser, yes of course. I'll start converting the following conductance-based models (let me know if I missed one):

  • aeif_cond_alpha_multisynapse
  • gif_cond_exp_multisynapse

I wanted to ask what you prefer to do with the current-based models:

  • gif_psc_exp_multisynapse
  • iaf_psc_alpha_multisynapse
  • iaf_psc_exp_multisynapse

Should we make i_syn_ and the other analogue synaptic current state variables recordable?
@heplesser

Contributor

heplesser commented Mar 8, 2018

@lucianopaz Great! In order to be consistent with what can be recorded from the "plain" iaf_psc_[exp|alpha] models, it would be great to make S_.i_syn_[] recordable in the current-based multisynapse models. At the same time, I would suggest not to support recording "weighted spikes", since that would require an additional vector to buffer data; I believe that recording option was added to the "plain" versions mostly for debugging purposes.

lucianopaz added some commits Mar 8, 2018

Two main contributions. 1) Fixed a design inconsistency for multisynapse models, where the DataFunctor getter function was public. Made it private and befriended the DynamicUniversalDataLogger and DataAccessFunctor classes. 2) Made aeif_cond_alpha_multisynapse use the DynamicUniversalDataLogger and DynamicRecordablesMap.
Made gif_cond_exp_multisynapse use the DynamicRecordablesMapping and DynamicUniversalRecorder. In order to do so, the state members sfa_ and stc_ were moved into the vector y_.
Fixed bug in gif_cond_exp_multisynapse. The bug arose because the State_ vector y_ was expanded to include stc_ and sfa_, in order to use the same logging mechanism for multisynapse neurons. This led to an unexpected issue with gsl_ode. First, the dimension of the step function was wrong because the state y_ was larger than before, and second, the *_dynamics function had to provide the derivatives of sfa_ and stc_, which were now part of y_. If this was left unaltered, then f[ S::STC ] was undefined and could lead to non-convergence of the ODE, among other unexpected issues.
Made iaf_psc_alpha_multisynapse work with Dynamic loggers. Must review whether the y_ vector included in State_ can be used for the simulations without performance loss. As it stands now, the memory footprint increased only for the sake of logging.
@heplesser
Contributor

heplesser left a comment

@lucianopaz Thank you for the updates! For the iaf_psc... models, we should use a different approach: a smarter access functor, see my inline comment. Copying data just for the sake of recording makes the model too complex, and it wastes both space and runtime.

Concerning the gif models, I think we should leave them as they are, i.e., with the classical recordables map, since the number of recordables is fixed in these multisynapse models. I am sorry that I overlooked that previously. Adding values to the state vector, which is iterated by an ODE solver, is a bad idea, since it can confuse the solver and definitely complicates it (more dimensions). Please revert the gif models.

host.get_data_access_functor( iaf_psc_alpha_multisynapse::State_::V_M ) );

insert( names::currents,
host.get_data_access_functor( iaf_psc_alpha_multisynapse::State_::I ) );

@heplesser

heplesser Mar 10, 2018

Contributor

I think currents was never a good name and it is only used in this one model. I suggest replacing it with I_syn.

iaf_psc_alpha_multisynapse::get_i_syn_name( size_t elem )
{
std::stringstream i_syn_name;
i_syn_name << "i_syn_" << elem + 1;

@heplesser

heplesser Mar 10, 2018

Contributor

For consistency, we should stick to capital I, i.e., I_syn_x also for the individual currents.

static const size_t NUMBER_OF_FIXED_STATES_ELEMENTS = 2; // V_M, I
static const size_t NUM_STATE_ELEMENTS_PER_RECEPTOR = 1; // I_SYN

std::vector< double > y_; //!< neuron state

@heplesser

heplesser Mar 10, 2018

Contributor

Introducing this vector only for the sake of recording does not seem a good idea, as it costs both memory and runtime. I would suggest making the access functor smarter, along the lines of

if ( elem == V_M )
{
  return S_.V_m_;
}
else
{
  return S_.y2_syn_[ elem / 2 ];
}

NB: I am not quite sure if my index arithmetic above is correct, this is just meant as a suggestion.
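
Building on that suggestion, a self-contained, spelled-out version follows; y2_syn_ comes from the sketch above, current_ is an invented stand-in, and the indexing assumes one recordable element per receptor (NUM_STATE_ELEMENTS_PER_RECEPTOR == 1, as in the snippet quoted earlier):

#include <cstddef>
#include <vector>

// Stand-in State_ so the sketch compiles on its own; the real member
// names in the model may differ.
struct State_
{
  enum StateVecElems
  {
    V_M = 0,
    I,
    NUMBER_OF_FIXED_STATES_ELEMENTS
  };
  double V_m_;
  double current_;
  std::vector< double > y2_syn_; // one recordable current per receptor
};

double
get_state_element( const State_& S_, std::size_t elem )
{
  if ( elem == State_::V_M )
  {
    return S_.V_m_;
  }
  else if ( elem == State_::I )
  {
    return S_.current_;
  }
  // Per-receptor elements start right after the fixed ones.
  return S_.y2_syn_[ elem - State_::NUMBER_OF_FIXED_STATES_ELEMENTS ];
}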

Reverted gif_*_multisynapse models. Changed iaf_*_multisynapse models' logging to use a smarter get_data function. Fixed testsuites for the reverted gif models. Added small tests for iaf_*_multisynapse, but it would be possible to add a more detailed test of the exp or alpha shape of the currents.
* This enum identifies the elements of the vector. The last element
* of this (I_SYN, ) will be repeated n times at the end of the
* state vector State_::y with n being the number of synapses.
* As a first step this vector will only be used for logging.

@heplesser

heplesser Mar 12, 2018

Contributor

I think the comment would be easier to understand if you gave the expression for the index of the synapses, i.e. (abbreviated), NUM_FIX_ELEMS + k * NUM_ELEMS_PER_RECEPTOR.

STATE_VECTOR_MIN_SIZE
};

static const size_t NUMBER_OF_FIXED_STATES_ELEMENTS = 2; // V_M, I

@heplesser

heplesser Mar 12, 2018

Contributor

Instead of initializing with the "magic" value 2 here, initialize with I_SYN to guarantee consistency. Alternatively, you could replace I_SYN in the enum above by NUM_FIXED_STATE_ELEMENTS; then you would not need the additional constant here. Maybe one could then also drop the STATE_VECTOR_MIN_SIZE.
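
A sketch of the first variant of this suggestion; the names are the ones quoted in this review, the layout is illustrative:

#include <cstddef>

enum StateVecElems
{
  V_M = 0,
  I,     // total input current
  I_SYN, // first per-receptor element, repeated once per receptor
  STATE_VECTOR_MIN_SIZE
};

// Initialized from the enum instead of the magic value 2, so the two
// can never drift apart:
static const std::size_t NUMBER_OF_FIXED_STATES_ELEMENTS = I_SYN;
static const std::size_t NUM_STATE_ELEMENTS_PER_RECEPTOR = 1;

// Index of receptor k's synaptic current in the state vector:
//   NUMBER_OF_FIXED_STATES_ELEMENTS + k * NUM_STATE_ELEMENTS_PER_RECEPTOR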

lucianopaz added some commits Mar 12, 2018

Changes in iaf_psc_*_multisynapse models. Fixed comments for enum StateVecElems, changed the recordable name from currents to I_syn, and set NUMBER_OF_FIXED_STATES_ELEMENTS to I_SYN.
Added tests to iaf_psc_*_multisynapse models. These additional tests verify that the post-synaptic current following a spike has an exponential or alpha functional dependence, depending on whether the model is exp or alpha.
@lucianopaz

Contributor Author

lucianopaz commented Mar 12, 2018

@heplesser, I finished the move to Dynamic logging for the iaf and aeif multisynapse models, and added some tests for the iaf models to assert that each channel's PSC has the correct functional form, in fb5210a. Let me know if you think any extra changes are necessary.

@heplesser

Contributor

heplesser commented Mar 14, 2018

@lucianopaz Thank you very much for your efforts! I will merge now.

@heplesser heplesser merged commit 8de91ad into nest:master Mar 14, 2018

1 check passed: continuous-integration/travis-ci/pr (The Travis CI build passed)
@lucianopaz

Contributor Author

lucianopaz commented Mar 14, 2018

@heplesser, it was a pleasure to contribute to the project.
