Extended rate-model framework #858

Merged: 47 commits merged into nest:master from janhahne:merge_grossberg on Feb 6, 2018
Conversation

@janhahne commented Nov 21, 2017:

This PR extends the functionality of the rate-model framework by several options that are needed for more complex rate models. All extensions and changes are implemented such that previous simulation scripts do not need to be adjusted, and all added models are actually needed in ongoing projects. The credit for this PR mostly belongs to @MSenden.

Quick overview:

  • the rate_neuron_ipn model now has a parameter lambda for the passive decay rate and the possibility to restrict the rate to positive values via a boolean parameter rectify_output.

  • the rate_neuron_ipn class is extended so that it allows building models with multiplicative coupling, with different factors for excitatory and inhibitory coupling. The template class now has three functions: nonlinearities_.input (previously operator()), nonlinearities_.mult_coupling_ex and nonlinearities_.mult_coupling_in, so the type of multiplicative coupling can be defined in the derived template classes. Returning 1 from the coupling functions yields non-multiplicative coupling. To keep the number of models low, the multiplicative-coupling version of a model can be implemented in the same NEST model as the non-multiplicative version and switched on via the boolean mult_coupling (see lin_rate.h). A sketch of such a nonlinearity class follows this list.

  • A new template class parrot_rate_neuron was added. The functionality is similar to that of parrot_neuron but for secondary events. See the documentation in the file for details.

  • the threshold_lin_rate model is now implemented in a more general way and allows further settings.

  • three additional models (gauss_rate, sigm_rate and sigmoid_rate) have been added.

  • a decision-making example that uses some of the new functionality was added

  • the rate model unit test was extended to check the lin_rate model with multiplicative coupling.
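
For readers new to the framework, here is a minimal sketch of what such a nonlinearity class could look like as the template argument of rate_neuron_ipn. The three member-function names follow the PR description; the class name, members and formulas are purely illustrative (the actual implementations live in lin_rate.h and the other model headers, including the NEST-specific get/set machinery that is omitted here):

```cpp
// Hypothetical nonlinearity class for rate_neuron_ipn< TNonlinearities >.
// Only the interface mirrors the PR description; everything else is invented
// for illustration.
class nonlinearities_example
{
private:
  double g_;    // gain applied in the input function
  double g_ex_; // factor for excitatory multiplicative coupling
  double g_in_; // factor for inhibitory multiplicative coupling

public:
  nonlinearities_example()
    : g_( 1.0 )
    , g_ex_( 1.0 )
    , g_in_( 1.0 )
  {
  }

  // applied to the (summed or individual) input; previously operator()
  double
  input( double h ) const
  {
    return g_ * h;
  }

  // multiplicative factor for excitatory inputs;
  // "return 1.;" here would give non-multiplicative coupling
  double
  mult_coupling_ex( double rate ) const
  {
    return g_ex_ * ( 1. - rate );
  }

  // multiplicative factor for inhibitory inputs
  double
  mult_coupling_in( double rate ) const
  {
    return g_in_ * rate;
  }
};
```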

jschuecker and others added 30 commits Nov 2, 2017

@ddahmen (Contributor) left a comment:

Very nice work @MSenden, @janhahne, @jschuecker! It is great to see that people have developed so many new features for rate models in NEST. I think you integrated them in a nice and concise way.

I have a few minor suggestions regarding naming of models and definitions of parameters and their defaults.

In addition, I have two main points concerning this PR (details: see comments in files):

  1. Is the new model 'parrot_rate_neuron' necessary as a separate class? Is the name maybe misleading as its output is a transformed version of its input? I would also like to see what @heplesser thinks about this issue.

  2. I have the feeling that the splitting into excitatory and inhibitory inputs in rate_neuron_ipn/opn changes the neuron behavior for linear_summation=True with respect to the implementation in nest::master. Please check whether my intuition is correct. If so, you would need to find another way of implementing this, as your proposal is a bit unintuitive because excitatory and inhibitory inputs would not be summed linearly for linear_summation=True. This actually would mean that many neuron models would not be captured anymore by the template class. I hope that I am wrong ;)

+ V_.input_noise_factor_ * S_.noise_;

if ( called_from_wfr_update ) // use get_value_wfr_update to keep values in
// buffer
{
if ( P_.linear_summation_ )
{
S_.rate_ += V_.P2_ * gain_( B_.delayed_rates_.get_value_wfr_update(
lag ) + B_.instant_rates_[ lag ] );
S_.rate_ += V_.P2_

@ddahmen (Contributor), Jan 12, 2018:

You changed the behavior of the model here by splitting excitatory and inhibitory inputs:

Before, it was f(exc_input + inh_input); now it is f(exc_input) + f(inh_input), which is of course different for a nonlinear function f.
Therefore, even if linear_summation is set to True, there is no linear summation between excitatory and inhibitory inputs.
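
A quick numerical illustration of the point, using tanh as a stand-in for a nonlinear f:

```cpp
#include <cmath>
#include <cstdio>

int
main()
{
  const double exc = 2.0;
  const double inh = -1.0;
  // summing first: f(exc + inh) = tanh(1) ~ 0.762
  std::printf( "f(exc + inh)    = %.3f\n", std::tanh( exc + inh ) );
  // applying f per pathway: f(exc) + f(inh) = tanh(2) + tanh(-1) ~ 0.202
  std::printf( "f(exc) + f(inh) = %.3f\n", std::tanh( exc ) + std::tanh( inh ) );
  return 0;
}
```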

@janhahne (Author), Jan 15, 2018:

@ddahmen That is a really good catch! Of course you are right about this: the new version as it stands would be inconsistent with NEST 2.14 for linear_summation = True in a nonlinear model with both inhibitory and excitatory inputs, and it even rules out a "joint linear summation".

@jschuecker @MSenden We will need to think about how to resolve this in more detail.

@janhahne (Author), Jan 17, 2018:

@ddahmen Mario and I resolved this issue by moving the check for multiplicative coupling to the base class in ff74e56. Now everything should be consistent with NEST 2.14 again.

The diff of ff74e56 might look a bit confusing, since I rearranged some parts of the code to avoid code duplication (it is easier to follow the changes by looking at the new file).
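
Schematically, the idea behind the fix can be sketched as follows; this is a hedged illustration of the update rule only, with assumed names and none of the per-lag buffer or propagator handling of the real NEST code:

```cpp
// Sketch only: how the base class can support both coupling modes while
// staying consistent with NEST 2.14 for non-multiplicative coupling.
template < typename TNonlinearities >
double
rate_input_term( const TNonlinearities& nl,
  bool mult_coupling,
  double rate,
  double exc_input,
  double inh_input )
{
  if ( mult_coupling )
  {
    // separate excitatory and inhibitory terms, each with its own factor
    return nl.mult_coupling_ex( rate ) * nl.input( exc_input )
      + nl.mult_coupling_in( rate ) * nl.input( inh_input );
  }
  // non-multiplicative coupling: sum the inputs before the nonlinearity,
  // as in NEST 2.14 (for linear_summation = True)
  return nl.input( exc_input + inh_input );
}
```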

S_.rate_ += V_.P2_
* nonlinearities_.mult_coupling_in( new_rates[ lag ] )
* nonlinearities_.input( B_.delayed_rates_in_.get_value( lag )
+ B_.instant_rates_in_[ lag ] );
}

@ddahmen (Contributor), Jan 12, 2018:

see comment above with respect to changed behavior of linear summation

@janhahne (Author), Jan 17, 2018:

should be resolved with ff74e56

instant_rates_ex_; //!< buffer for rate vector received
// by RateConnectionInstantaneous from excitatory neurons
std::vector< double >
instant_rates_in_; //!< buffer for rate vector received
std::vector< double > instant_rates_; //!< buffer for rate vector received

@ddahmen (Contributor), Jan 12, 2018:

The variable 'instant_rates_' is not needed anymore. You probably just forgot to delete the line.

@janhahne (Author), Jan 15, 2018:

Oh, good catch! Corrected with 6ecc124

S_.rate_ += V_.P2_
* nonlinearities_.mult_coupling_in( new_rates[ lag ] )
* nonlinearities_.input( B_.delayed_rates_in_.get_value_wfr_update(
lag ) + B_.instant_rates_in_[ lag ] );
}

@ddahmen (Contributor), Jan 12, 2018:

see comment for rate_neuron_ipn with respect to changed behavior of linear_summation=True

@janhahne (Author), Jan 17, 2018:

should be resolved with ff74e56

@ddahmen (Contributor), Jan 18, 2018:

Nice fix. This makes the code consistent with the old behavior. The only issue I still have is the following: if I think of the simplest nonlinear rate model with multiplicative coupling, I would write down a term like

f(rate) * g(input)

where f = mult_coupling_ex = mult_coupling_in and input = exc_input + inh_input.
This model is not captured by your code, which is OK. But I think the documentation of the models should clarify this. In particular, it should become clear to the user that in the case of multiplicative coupling with mult_coupling_ex = mult_coupling_in, linear summation still does not sum excitatory and inhibitory inputs linearly.
This might also suggest renaming the boolean linear_summation to make it less confusing.
What do you think?
Sorry for being so picky...

@janhahne (Author), Jan 22, 2018:

@ddahmen You are right that this simplest multiplicative coupling can only be achieved with a trick: connect all incoming inhibitory connections to a lin_rate_parrot neuron with the desired weights and then connect this helper neuron to the target neuron with weight 1.0 - then mult_coupling_ex is the joint factor for the multiplicative coupling.
We can include this in the documentation somewhere; however, at the moment I don't know where to put it: the only model that has multiplicative coupling implemented is the lin_rate neuron, for which g(input_exc) + g(input_inh) = g(input_exc + input_inh) holds anyway.

I would also suggest keeping the name linear_summation, for consistency with 2.14 and since - given two different multiplicative coupling factors for exc and inh coupling - it should be clear that the linear summation can only happen within the two subsets (or is there some way that I do not think of at the moment?).

@ddahmen (Contributor), Jan 23, 2018:

You are right that the naming had better be consistent with NEST 2.14. I also agree that for lin_rate the problem mentioned above is irrelevant. The only risk is that in the future somebody will create another neuron model from the templates and not keep this issue in mind. One could help future developers by adding the following sentence to the documentation of rate_neuron_opn_impl and rate_neuron_ipn_impl:

"The boolean parameter linear_summation
determines whether the input function is applied to the summed up incoming
connections (= True, default value) or to each input individually (= False). In case of multiplicative coupling the nonlinearity is applied separately to the summed excitatory and inhibitory inputs if linear_summation=True."

@janhahne (Author), Jan 26, 2018:

I added the suggested additional comment

S_.rate_ += V_.P2_
* nonlinearities_.mult_coupling_in( new_rates[ lag ] )
* nonlinearities_.input( B_.delayed_rates_in_.get_value( lag )
+ B_.instant_rates_in_[ lag ] );
}

@ddahmen (Contributor), Jan 12, 2018:

see comment for rate_neuron_ipn with respect to changed behavior of linear_summation=True

@janhahne (Author), Jan 17, 2018:

should be resolved with ff74e56

insert_( names::rate, &nest::sigmoid_rate_ipn::get_rate_ );
insert_( names::noise, &nest::sigmoid_rate_ipn::get_noise_ );
}

@ddahmen (Contributor), Jan 12, 2018:

no parrot for this neuron model?


template <>
void RecordablesMap< sigmoid_rate_ipn >::create();

@ddahmen (Contributor), Jan 12, 2018:

no parrot for this neuron model?

}

typedef rate_neuron_ipn< nest::nonlinearities_tanh_rate > tanh_rate_ipn;
typedef rate_neuron_opn< nest::nonlinearities_tanh_rate > tanh_rate_opn;

@ddahmen (Contributor), Jan 12, 2018:

no parrot for this neuron model?

with Heaviside function H.
threshold_lin_rate is an implementation of a nonlinear rate model with input
function input(h) = min( max( g * ( h - theta ), 0 ), alpha ). Here H denotes
the Heaviside function. The boolean parameter linear_summation determines

@ddahmen (Contributor), Jan 12, 2018:

Delete sentence with Heaviside function as formula does not contain it anymore.

@janhahne (Author), Jan 15, 2018:

Corrected with 6ecc124

@@ -0,0 +1,287 @@
/*
* parrot_rate_neuron.h

@ddahmen (Contributor), Jan 12, 2018:

I find the name 'parrot_rate_neuron' a bit misleading.
In NEST, a parrot neuron is one that exactly repeats what it gets in the input.
Here you apply a nonlinear transformation. So I would rather think of it as a rate neuron with time constant tau=0.
tau=0 is currently not possible in the implementation of rate_neuron_ipn etc.
But would it not be possible to insert some if statements in the definitions of the propagators of rate_neuron_ipn to enable tau=0? Then you could spare this new model. Although it might also be good to keep these two cases, tau>0 and tau=0, separate, since in the former case the neuron has internal dynamics and in the latter it does not.
If you decide to keep this new model, I would suggest renaming it to something like 'rate_nonlinearity'.

Also keep in mind that the behavior for linear_summation=True is different here from rate_neuron_ipn/opn because here you do not keep excitatory and inhibitory inputs separate.

@janhahne (Author), Jan 15, 2018:

You are right that the parrot_rate_neuron only repeats its input in the linear version, i.e. lin_rate_parrot. All other versions do indeed apply the corresponding nonlinearity to the input.
I like the name as it gives a hint what the neuron is used for, which is basically the buffering (and here also the modification) of its received input. Anyway, if @heplesser also agrees that the name is misleading or misplaced here, we could indeed change it.

You are also right that the class basically represents and behaves like a rate_neuron_ipn with tau=0. As far as I remember, the main reason why we implemented this in a separate class (besides the fact that the integration into rate_neuron_ipn would require a couple of new if-statements, which would make the code even more complex) is that it makes simulation scripts much easier to read: it is much more intuitive to think of those neurons as buffer or helper neurons than to classify them as rate_neuron_ipn neurons with tau=0. Did I get this right @MSenden?

@MSenden (Contributor), Jan 16, 2018:

@janhahne yes, you are right. In the model whose implementation drove my choice of rate neuron types, some neurons apply different nonlinearities to inputs originating from different sources. To handle this, the outputs of the different sources are passed through parrot neurons applying the desired nonlinearities before they reach a linear rate neuron. @ddahmen You are also right that this is equivalent to using a rate neuron with the desired nonlinearity and setting tau to zero, but when building a model I was under the impression that the code is easier to follow if a source's output is passed through a helper unit explicitly dedicated to instantaneously applying a nonlinearity, rather than having to recognize another neuron as doing precisely this based on its parameter settings.

However, there is indeed no use case for lin_rate_parrot as it does not apply a nonlinearity.

@ddahmen (Contributor), Jan 18, 2018:

I agree that a model is much easier to understand if the 'parrot_rate_neuron' is not merged with the other types of neurons. So only the naming problem remains. What do you think @heplesser ?

@heplesser (Contributor), Jan 25, 2018:

@janhahne @MSenden @ddahmen Thank you for discussing this point in detail! If I understand the discussion correctly, then you use the "parrots" to transform rate signals on their way from a sender neuron to a target neuron. You implement this transformation as a separate network node to avoid complexities in input handling in the target neuron. This seems sensible, but I agree with @ddahmen that the naming is unfortunate, as a parrot should indeed repeat its input perfectly. I am also a bit confused by the use of the term "buffer" in the discussion, as my understanding is that we are here talking about instantaneous transformations.

I would suggest rate_transformer_<func> as the name, where <func> describes the transformation function applied. If the linear version does nothing, I would drop it; otherwise I would agree to provide all possible transformers if adding them only requires an additional template instantiation.
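
To illustrate the "one more template instantiation" point, a self-contained toy sketch; all names here are illustrative, not the final NEST identifiers:

```cpp
#include <cmath>

// Toy stand-in for the proposed transformer node: it only forwards its input
// through the nonlinearity supplied as template parameter.
template < typename TNonlinearities >
class rate_transformer_node
{
public:
  double
  transform( double input ) const
  {
    return nonlinearities_.input( input );
  }

private:
  TNonlinearities nonlinearities_;
};

struct nonlinearities_tanh_rate
{
  double
  input( double h ) const
  {
    return std::tanh( h );
  }
};

// adding a new transformer variant is just one more typedef
typedef rate_transformer_node< nonlinearities_tanh_rate > rate_transformer_tanh;
```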

@janhahne (Author), Jan 26, 2018:

I like your suggestion and implemented the change of the model name(s). Using the term "buffer" in the discussion was indeed unfortunate - I meant a "helper" neuron.

@janhahne (Author) commented Jan 15, 2018:

@ddahmen Thank you for the careful review. I resolved some of your requests with the two new commits. Regarding some other points, I replied to your comments and will wait for further input from you, the second reviewer @heplesser, or @MSenden.

I will think about how to resolve the inconsistency/loss of functionality due to the input splitting/multiplicative coupling.


@ddahmen (Contributor) left a comment:

I approve this pull request. The remaining open issues concern naming and whether all rate neurons should have parrot versions already integrated. These issues should be discussed with @heplesser.


@heplesser (Contributor) left a comment:

@janhahne Nice work, apologies for my late response. My comments mostly concern names and documentation. But @ddahmen remarked somewhere that the separate handling of ex and in changed behavior relative to earlier versions. That is problematic. Could you comment?

@@ -36,6 +36,7 @@ set( models_sources
dc_generator.h dc_generator.cpp
diffusion_connection.h
gamma_sup_generator.h gamma_sup_generator.cpp
gauss_rate.h gauss_rate.cpp

@heplesser (Contributor), Jan 25, 2018:

Please keep file names in alphabetical order, gauss after gap

@janhahne (Author), Jan 26, 2018:

Good catch!

@@ -73,6 +74,7 @@ set( models_sources
music_message_in_proxy.h music_message_in_proxy.cpp
noise_generator.h noise_generator.cpp
parrot_neuron.h parrot_neuron.cpp
parrot_rate_neuron.h parrot_rate_neuron_impl.h

@heplesser (Contributor), Jan 25, 2018:

Should become something like rate_transformer_node.h, see my comment in the "parrot" discussion.

gauss_rate is an implementation of a nonlinear rate model with input
function input(h) = g * exp( -( x - mu )^2 / ( 2 * sigma^2 ) ).
The boolean parameter linear_summation determines whether the input
function is applied to the summed up incoming connections

@heplesser (Contributor), Jan 25, 2018:

Delete "up"

function input(h) = g * exp( -( x - mu )^2 / ( 2 * sigma^2 ) ).
The boolean parameter linear_summation determines whether the input
function is applied to the summed up incoming connections
(= True, default value) or to each input individually (= False).

@heplesser (Contributor), Jan 25, 2018:

Since you explain linear summation in a note below, you can keep the text here shorter, along the lines of "Input transformation can either be applied to individual inputs or to the sum of all inputs."

@janhahne (Author), Jan 26, 2018:

Agreed. Applied this for all models

mean double - Mean of Gaussian white noise.
std double - Standard deviation of Gaussian white noise.
g double - Gain parameter
g double - Linear factor in input function.

@heplesser (Contributor), Jan 25, 2018:

Why not call it gain here as in other models?

@janhahne (Author), Jan 26, 2018:

Reasonable question - I changed it back to avoid confusion.

Sends: InstantaneousRateConnectionEvent, DelayedRateConnectionEvent
Parameters:
No parameters to be set in the status dictionary.

@heplesser (Contributor), Jan 25, 2018:

But if the "parrot" can be instantiated with different nonlinearities to result in different transformations, don't those transformations have parameters, making this comment misleading?

@janhahne (Author), Jan 26, 2018:

Yes, good catch. I updated the documentation.

Author: Mario Senden, Jan Hahne, Jannis Schuecker
FirstVersion: November 2017
*/
template < class TNonlinearities >

@heplesser (Contributor), Jan 25, 2018:

typename instead of class is preferred for template arguments.

@janhahne (Author), Jan 26, 2018:

class seems to be used in many templates at the moment (I looked, for example, at rate_neuron_ipn/opn and the binary neuron). Maybe this should rather be fixed in a separate PR for all NEST files?

}
}

if ( not called_from_wfr_update )

@heplesser (Contributor), Jan 25, 2018:

Why not else, isn't this exactly the negation of the previous if condition?

@janhahne (Author), Jan 26, 2018:

You probably missed the end of the for-loop in between ;)
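
Schematically, the structure referred to here could look as follows (an illustrative skeleton, not the actual NEST code): the second check sits after the loop over lags, so it is not simply the else branch of the inner condition.

```cpp
// Illustrative skeleton only; names follow the quoted diff.
void
update_sketch( bool called_from_wfr_update, long from, long to )
{
  for ( long lag = from; lag < to; ++lag )
  {
    if ( called_from_wfr_update )
    {
      // read inputs with get_value_wfr_update() to keep buffer values
    }
    else
    {
      // read inputs with get_value(), consuming buffer values
    }
  }

  if ( not called_from_wfr_update )
  {
    // per-slice bookkeeping, done once after the loop over lags
  }
}
```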

function input(h) = g / ( 1. + exp( -beta * ( h - theta ) ) ).
The boolean parameter linear_summation determines whether the input
function is applied to the summed up incoming connections
(= True, default value) or to each input individually (= False).

@heplesser (Contributor), Jan 25, 2018:

See comment on documentation above.

@@ -0,0 +1,166 @@
# -*- coding: utf-8 -*-

@heplesser (Contributor), Jan 25, 2018:

I would go for lowercase _dm in the file name.

@janhahne (Author), Jan 26, 2018:

I agree, looks better with lowercase

@janhahne (Author) commented Jan 26, 2018:

@heplesser Thank you for the careful review! I addressed all your comments, please have another look.

The inconsistency was removed with one of the commits, following David's suggestion: the behavior is now completely consistent with NEST 2.14.0 (in the case of non-multiplicative coupling - the only coupling available in 2.14.0 - the exc+inh input is summed before the nonlinearity is applied).

@janhahne force-pushed the janhahne:merge_grossberg branch from ff4c0e1 to 73c45aa on Jan 27, 2018
@heplesser (Contributor) commented Feb 6, 2018:

@janhahne @MSenden @jschuecker @ddahmen Thanks a lot for your efforts! This looks fine now and I will merge it right away.

@heplesser merged commit 7d4b03c into nest:master on Feb 6, 2018.
1 check passed: continuous-integration/travis-ci/pr - The Travis CI build passed.
@janhahne deleted the janhahne:merge_grossberg branch on Feb 7, 2018.