
Spike management proposal #330

Closed
Silmathoron opened this issue Dec 9, 2016 · 18 comments

Comments

@Silmathoron
Member

Silmathoron commented Dec 9, 2016

This proposal tries to minimize the amount of code necessary to manage spike arrival and spike effects, while also providing a single interface for both the ODE and the shape-function definitions.

This would only involve one apply_spikes function taking as parameters:

  • the recordable (e.g. I_syn_ex)
  • the spike buffer (e.g. spikesEx)
  • an optional shape function if we use the function definition.

In the update block, you would then call:
apply_spikes(I_syn_ex, spikesEx) or apply_spikes(I_syn_ex, spikesEx, I_shape)
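To make the intent concrete, here is a rough Python sketch of the semantics (everything here -- the apply_spikes signature, the dict spike buffer, the toy model -- is hypothetical and purely illustrative, not an actual implementation): at each update step, the summed weight of the spikes arriving in that step is added to the target state variable, on top of the ODE update.

```python
def apply_spikes(state_value, spike_buffer, step, normalization=1.0):
    """Return state_value plus the normalized spike weight for this step."""
    return state_value + normalization * spike_buffer.get(step, 0.0)

# Toy usage: an exponential synapse I' = -I/tau, integrated with forward Euler.
tau, h = 2.0, 0.1              # time constant and step size (ms)
spikes = {10: 1.0, 30: 0.5}    # step index -> summed weight arriving then
I = 0.0
for step in range(50):
    I += h * (-I / tau)                # ODE part of the update
    I = apply_spikes(I, spikes, step)  # spike part of the update

assert 0.2 < I < 0.5   # decayed remnant of the two spikes
```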

Examples are attached in the zip file.
spike_proposal.zip

@DimitriPlotnikov
Contributor

@ingablundell What is your opinion?

@ingablundell
Contributor

@Silmathoron Thank you for this! I am still not sure this is more intuitive. Your problem with the "curr_sum" notation was, if I remember correctly, that it is unintuitive to write down a convolution with a function that you don't know. But when you write down the equation plus the initial values, the solution will be unique (you won't be able to state anything weird with a computer). So from a mathematical point of view it makes a lot of sense to define the convolution. And I think it is nice to be able to see exactly which equation we are solving just by looking at the ODE.

@Silmathoron
Member Author

Silmathoron commented Dec 12, 2016

@ingablundell that was not really my argument: I was saying that if the user provides an ODE and its initial conditions in the equations block, they expect NESTML to solve this ODE step by step, which means they do not expect NESTML to use a step function; hence the use of the convolution.

My objective was:

  1. to reduce the number of necessary lines of code (only two in addition to the function definition -- shape or ODE)
  2. to avoid apparent code redundancy (if you do not know the internals of NESTML, how can you understand the necessity of both L74 and L151 for the implicit definition?)
  3. to make the implicit and explicit cases identical in terms of function calls (only the apply_spikes(...) function)
  4. to make it more intuitive

The last point is arguably not guaranteed since it is based only on my gut feeling and a discussion with Sam during the workshop, which makes it a rather small user panel...

EDIT: maybe the best option would be to send a form to NEST_USER to directly get the opinion of the people who might use NESTML...

@ingablundell
Contributor

@Silmathoron @jougs

  1. yes, that would reduce the code a bit.
  2. We don't actually need L151 at all, and we are in the process of getting rid of it.
  3. In which way do they differ in our notation at the moment?
  4. Maybe you are right, and we should ask on NEST_USER. I also think that if your suggestion is more intuitive to you, it might well be to others. So, in my opinion, we should at least implement both ways. What do you think @jougs? But I am not sure I want to get rid of the convolution notation at the moment, because to me it is a nice, clean way of writing it down.

Also: Am I right in assuming that you want no explicit initial values for I_syn_ex'?

@Silmathoron
Member Author

Silmathoron commented Dec 12, 2016

@ingablundell ok, removing L151 would at the very least make both implementations more coherent! (which also addresses point 3)

I think asking the users is best... you could implement both, but I think this could easily lead to confusing code constructs...
So either you should make both statements (*_sum and apply_spikes) incompatible, or find another way to make sure they are not both involved in several places. I must say I'm a priori against this.

In the end, if you decide to keep these *_sum functions, maybe they should be merged into a single convolve function whose return value has the same type as the shape function...

EDIT: and no, I do want to be able to set initial values for I_syn_ex' (we need them to solve the ODE anyway; it might even be better to make them compulsory, or at least to raise a warning when one is missing, saying it was initialized to zero)

@Silmathoron
Member Author

Silmathoron commented Dec 12, 2016

I just realized that my proposal might not be as simple to implement as I first thought...
There are two problems:

  • can we always assume (in the implicit case) that the spikes are applied to the highest derivative? I think so, but there could be exceptions, so I'll have to look at other existing synaptic shapes (if there are any)
  • for these custom shapes, we cannot necessarily find a formula for the peak value, so a third "normalization" parameter might be needed in the apply_spikes function for the implicit declaration.

Any thoughts/comments on that?

@DimitriPlotnikov
Contributor

@Silmathoron
a) This problem is solved by stating all initial values (including those equal to 0). Basically, the problem statement is then the same as with the curr_/cond_sum notation.
b) Can you elaborate on the problem?

@Silmathoron
Member Author

Silmathoron commented Dec 13, 2016

I'm sorry, I did not get your a) point... which issue were you referring to?

Maybe to clarify:

  • for alpha synapses, the spikes must be applied to the first derivative of the current or conductance, and to normalize the peak value to either 1 pA or 1 nS, the spikes are weighted by e / tau_syn
  • for exp synapses, the spikes are applied directly to the current/conductance, so this normalization constant is not needed.

This means that:

  • as long as we are dealing with the standard alpha and exp shapes, we can recognize them and add the spikes to the correct time derivative
  • however, we cannot be sure that the spikes will always have to be added to the highest derivative in the GSL system, so we need to make it clear to the user that they should specify the right derivative in apply_spikes
  • for the normalization constant, however, we have a problem because we do not know which parameter should be used as the synaptic time constant; therefore, I think the "normalization" parameter is necessary, which would give apply_spikes(I_syn_ex', spikes_ex, normalization=e/tau_syn_ex)

To summarize, it would lead to:

  • psc_exp implicit --> apply_spikes(I_syn_ex, spikes_ex) (normalization is not necessary)
  • cond_exp implicit --> apply_spikes(g_ex, spikes_ex) (normalization is not necessary)
  • psc_alpha implicit --> apply_spikes(I_syn_ex', spikes_ex, normalization=1pA*e/tau_syn_ex) (spikes applied to the 1st derivative)
  • cond_alpha implicit --> apply_spikes(g_ex', spikes_ex, normalization=1nS*e/tau_syn_ex) (spikes applied to the 1st derivative)
  • psc_exp explicit --> apply_spikes(I_syn_ex, spikes_ex, shape=I_shape_ex)
  • cond_exp explicit --> apply_spikes(g_ex, spikes_ex, shape=g_shape_ex)
  • psc_alpha explicit --> apply_spikes(I_syn_ex, spikes_ex, shape=I_shape_ex) (normalization is included in the shape function)
  • cond_alpha explicit --> apply_spikes(g_ex, spikes_ex, shape=g_shape_ex) (normalization is included in the shape function)
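The normalization claim for the alpha case can be checked numerically. The following is a standalone Python sketch (not NESTML code; forward Euler is used only for illustration): applying a unit spike to the first derivative with weight e/tau should make the current peak at 1 at time tau after the spike.

```python
import math

# Alpha synapse with a spike at t = 0 applied to the first derivative:
#   I'' = -I/tau**2 - 2*I'/tau,  I(0) = 0,  I'(0) = e/tau
# The analytic solution is I(t) = (e/tau) * t * exp(-t/tau), peaking at 1.
tau = 2.0      # synaptic time constant (ms)
h = 0.0005     # Euler step (ms)
I, dI = 0.0, math.e / tau   # spike weight e/tau added to the 1st derivative
peak, t_peak, t = 0.0, 0.0, 0.0
for _ in range(int(10 * tau / h)):
    I, dI = I + h * dI, dI + h * (-I / tau**2 - 2 * dI / tau)
    t += h
    if I > peak:
        peak, t_peak = I, t

assert abs(peak - 1.0) < 0.01      # peak normalized to ~1
assert abs(t_peak - tau) < 0.05    # reached ~tau after the spike
```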

@ingablundell
Contributor

@Silmathoron

  1. I prefer "convolve" too. We chose *_sum because a lot of people from our institute thought that users might not know what that means.

  2. This is what I was trying to say when you were in Juelich. Either you state
    a) all initial values up to the order n-1 (n being the order of the highest derivative), or
    b) you state which equations the sums of delta functions are added to, i.e. which derivatives the spikes are "applied to".

To me b) is a bit weird (maybe because I'm neither a modeler nor a physicist), and we would want it to be possible to apply spikes not only to the derivative of maximum order (i.e. n-1 here). This:

psc_alpha implicit --> apply_spikes(I_syn_ex', spikes_ex, normalization=1pA*e/tau_syn_ex) (spikes applied to the 1st derivative)

is definitely a possibility but we would also have to allow calling apply_spikes several times for one "shape", right? E.g. we might want I(0)=0; I'(0)=a; I''(0)=b; a and b being some constants. Then we would have to call apply_spikes for I and I'.

@Silmathoron
Member Author

@ingablundell

  1. Ok... well, I don't know about that, but anyway, even if we keep a name with _sum, I think it is weird to have two functions that do the same thing.
  2. I don't understand your "either"... you have to state both a) and b), don't you? I don't see how stating only one of the two would allow you to skip the other.

I think you try to answer this question already in your last sentence... but I really don't understand what you mean...
Why on earth would spikes apply on both I and I'?

@ingablundell
Contributor

@Silmathoron

  1. Yes, I agree. Two functions for the same thing is confusing.

  2. In your file, you don't state any initial values for I', right? And if I understood your answer above correctly, you don't want or intend to? The reason this is not necessary is that apply_spikes(I', spikes, normalization=1pA*e/tau) implies:

I'' = ... + \sum_{t_i \le t} \sum_{k \in \text{spikes}} w_k \frac{e}{\tau} \delta(t - t_i)
I' = ...

Therefore we don't need to state any initial values for I'.

About applying spikes to both I and I': no idea. What I meant was just that I could imagine people stating a number of initial values that don't equal zero. But maybe that wouldn't happen.

@Silmathoron
Member Author

I completely agree with you until the "Therefore": we still need to state the initial value of I' otherwise the problem is not closed. Initial values are always necessary for all variables defined in the state block.

I'm not sure I understand your sentence

I could conceive people stating a number of initial values that don't equal zero.

Could you elaborate on that? Do you mean "initialize a state variable with a non-zero value" or "initialize a finite number of state variables"?
Anyway, as I said before, I think all variables in the state block must be initialized.

@ingablundell
Contributor

What do you mean by

otherwise the problem is not closed?

Did you possibly overlook the

\frac{e}{\tau}

in the equation? By adding the deltas we implicitly use initial values 0 for I and e/tau for I'.
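In other words (spelling out a standard step for clarity): integrating the equation for I'' over a vanishing interval around a single spike of weight w_i at time t_i gives

\int_{t_i - \epsilon}^{t_i + \epsilon} I''(t) \, dt = I'(t_i^+) - I'(t_i^-) = w_i \frac{e}{\tau}

so a unit spike at t = 0 hitting a quiescent synapse is equivalent to starting the spike-free ODE with I(0) = 0 and I'(0) = e/\tau.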

I'm a bit confused. Did you forget to state the initial values for I' in your test.nestml file? I am not actually trying to advocate not stating initial values.

@Silmathoron
Member Author

I meant that you're missing initial conditions and therefore that your system is not uniquely determined: Dirac deltas do not provide the equivalent of initial conditions, so I do not understand your

By adding the deltas we implicitly use initial values 0 for I and e/tau for I'.

Do you mean that in NESTML you set I = 0 and I' = e/tau if the user provides the deltas? If so, I must say I find it surprising to implicitly initialize variables with missing initial conditions to a non-zero value...
But since the problem is mathematically not uniquely defined without all initial conditions, I think that at least a warning should be raised, stating that, by default, the values are initialized (to zero, if possible).

@DimitriPlotnikov
Contributor

DimitriPlotnikov commented Jul 11, 2017

I'll try to summarize the current state of the discussion regarding spike handling (#368 is related to the current issue). The idea is to change the current NESTML syntax with respect to initial values as follows:

a) For every differential equation from the equations block, all initial values (i.e. from order 0 to the highest order minus 1) must be explicitly stated.

b) Buffers store the type, cf. #368

c) The shape is associated with a buffer: the return value of the convolve function is determined by the buffer that is used as the second parameter.

initial_values:
  # ok, it is just a shape. it should/can be unitless
  g_in real   = 0
  g_in' 1/ms = e / tau_syn_in
  ...
end
equations:
  g_in'' = (-1)/(tau_syn_in)**(2)*g_in+(-2)/tau_syn_in*g_in'
  ...
  function I_syn_in pA = convolve(g_in, spikeInh) * (V_m - E_in) 
end
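As a numerical sanity check on the proposed initial values (a Python sketch, independent of NESTML): with g(0) = 0 and g'(0) = e/tau, the ODE above reduces to the standard unit-peak alpha kernel, which is exactly the kernel that convolve(g_in, spikeInh) would apply to each spike.

```python
import math

# Consistency check: with g(0) = 0 and g'(0) = e/tau, the ODE
#   g'' = -g/tau**2 - 2*g'/tau
# integrates to the alpha kernel g(t) = (e/tau) * t * exp(-t/tau).
# Forward Euler is used only for illustration.
tau, h = 2.0, 0.0005
g, dg = 0.0, math.e / tau
max_err, t = 0.0, 0.0
for _ in range(int(5 * tau / h)):
    g, dg = g + h * dg, dg + h * (-g / tau**2 - 2 * dg / tau)
    t += h
    analytic = (math.e / tau) * t * math.exp(-t / tau)
    max_err = max(max_err, abs(g - analytic))

assert max_err < 1e-2   # the numerical solution tracks the alpha kernel
```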

@ingablundell @Silmathoron What do you think?

@Silmathoron
Member Author

Silmathoron commented Jul 11, 2017

@DimitriPlotnikov this looks nice to me, except for the unitless part, which I'm not sure is a good idea.
Moreover, in your example, g_in'' is not a shape but the actual conductance and is given by a differential equation, so it should definitely be in nS/ms^2; hence it would make more sense to have g_in in nS

@DimitriPlotnikov
Contributor

DimitriPlotnikov commented Jul 12, 2017

@Silmathoron
It is sort of a discussion point. Actually, you get the conductance in the convolve(g_in, spikeInh) call (since you know the type of the buffer). Beforehand, it is actually just a different notation for an alpha function. But I'm open to any reasonable interpretation (the current state of the NESTML models corresponds to your suggestion).

I'm even starting to think that it would be a good idea to mark shape functions explicitly, since it could avoid confusion and would align with the functional notation of the shape function. E.g.:

initial_values:
  # ok, it is just a shape. it should/can be unitless
  g_in real   = 0
  g_in' 1/ms = e / tau_syn_in
  ...
end
equations:
  shape g_in'' = (-1)/(tau_syn_in)**(2)*g_in+(-2)/tau_syn_in*g_in'
  ...
  function I_syn_in pA = convolve(g_in, spikeInh) * (V_m - E_in) 
end

@DimitriPlotnikov
Contributor

Done in #382
