
Discussion of possible types for buffers #368

Closed
PTraeder opened this issue Jun 30, 2017 · 18 comments

Comments

@PTraeder
Contributor

PTraeder commented Jun 30, 2017

We are planning to add typing to buffers.
Buffers for currents are intended to carry the type pA.
Concerning spike buffers, we would like feedback on the types you consider probable/useful.
For example, would you like to see just a specific subset of units? All of the "native" NEST units?
Compound units such as mV/ms? Primitive types?

Additionally, your ideas about possible syntax for these types would be welcome.

Currently, buffers are defined as follows:

bufferName <- (inhibitory | excitatory)* (spike | current)

My proposition, in accordance with suggestions made by @heplesser, would be to simply add the respective type after the buffer name, similarly to variable definitions:

someBuffer1 mV  <- excitatory spike
someBuffer2 nS  <- excitatory spike

giving buffers of type mV and nS, respectively.

@heplesser

I assume that the buffers you mention here are the RingBuffers for spikes and currents in the models.

  1. Exactly what these buffers store depends on the model, so the user needs to be able to choose the unit freely.
  2. For currents, pA is the most likely unit; I cannot immediately think of another unit that would be reasonable here.
  3. For spikes, it depends a bit on how NESTML interprets these buffers.
    1. If they are interpreted as pure spike storage, then they contain spike counts in bins of a given width, the simulation resolution, and the buffers would be unitless.
    2. If they are interpreted as the buffers are implemented in NEST, i.e., a summed input (at each point in time) weighted with synaptic weights, then the buffers have the same unit as the synaptic weights.
    3. The unit of the synaptic weight is determined by the model:
      • For _psc_ models, it is pA
      • Except for _psc_delta models, where it is mV
      • For _cond_ models it is nS
      • Other units may apply to other models, especially binary neurons.
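To make point 3.ii concrete, here is a minimal Python sketch (class and method names are hypothetical, not NEST's actual API) of a ring buffer that stores, per time bin, the sum of the weights of the spikes arriving in that bin, so that the stored value carries the unit of the synaptic weight:

```python
class SpikeRingBuffer:
    """Illustrative sketch of a NEST-style spike ring buffer.

    Each slot holds the sum of synaptic weights of spikes delivered in
    that time bin, so the stored value has the unit of the weight
    (pA for _psc_ models, mV for _psc_delta, nS for _cond_ models).
    """

    def __init__(self, n_slots):
        self.slots = [0.0] * n_slots
        self.t = 0  # index of the current time bin

    def add_spike(self, delay_steps, weight):
        # Accumulate the weight into the bin in which the spike is delivered.
        self.slots[(self.t + delay_steps) % len(self.slots)] += weight

    def get_value(self):
        # Read and clear the current bin, then advance to the next one.
        v = self.slots[self.t]
        self.slots[self.t] = 0.0
        self.t = (self.t + 1) % len(self.slots)
        return v
```

Reading a slot returns the accumulated weighted input for that time step and clears it for reuse.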

@Silmathoron
Member

Silmathoron commented Jul 1, 2017

Just in case, I'm also referencing the post that summarizes a long discussion on how we might simplify the interaction between spikes and state variables, so that the unit discussion takes into account the way the buffer might be used.

Because of this, I would be in favour of dimensionless buffers where the unit is added by the convolution function depending on the target variable that will be affected (this includes the possible existence of a normalization factor).

@DimitriPlotnikov
Contributor

DimitriPlotnikov commented Jul 11, 2017

@heplesser NESTML is a modeling language for NEST and, therefore, I would prefer the NEST interpretation. Buffers for currents are at the moment fixed to pA. The idea is to let the modeler state the unit on the buffer, e.g. for the iaf_cond_alpha:

neuron iaf_cond_alpha_nestml:
  equations:
    # g_in, g_ex model an alpha shape
    function I_syn_exc pA = convolve(g_ex, spikeExc) * ( V_m - E_ex )
    function I_syn_inh pA = convolve(g_in, spikeInh) * ( V_m - E_in )
  end
  input:
    spikeInh  nS <- inhibitory spike
    spikeExc  nS <- excitatory spike
  end
end

@Silmathoron: I contributed in #330

@Silmathoron
Member

@DimitriPlotnikov why are your spikes buffers in nS in the example?

@DimitriPlotnikov
Contributor

@Silmathoron it is a cond-model. See iii) in @heplesser's comment.

@Silmathoron
Member

Yeah, but given the method you propose for the synaptic current (using convolve with a shape which should possess a dimension; cf. #330), I think the spike buffer should be dimensionless, corresponding to @heplesser's point 3.i.

I think any other choice would lead to non-general solutions where we would have to deal with special cases.

Therefore, I propose spike buffers as pure spike storage which are coupled to the other variables through the convolve function which is in charge of normalizing them and giving the right unit.

@DimitriPlotnikov
Contributor

DimitriPlotnikov commented Jul 12, 2017

@Silmathoron What do you mean by normalization factors? Initial values?
@heplesser what is your opinion?

@jougs
Contributor

jougs commented Jul 12, 2017

We had a discussion about this internally on Tuesday and came to the conclusion that the shape should actually be dimensionless and the buffer should carry the unit. There are a number of reasons why we think this is the right way to go:

  • Spike buffers are in fact not just plain storage for pure spikes but contain the number of spikes scaled with their weights. And the weights have a unit that depends on the model (cf. points 3.ii and 3.iii in @heplesser's comment above).
  • Unitless shapes are also important if we want to re-use shapes. If they had units, we could not re-use the same shape definition in a conductance- and a current-based synapse due to mismatching units.
  • For auto-generating the documentation, we want to extract the unit of the weight for the different synapses a neuron can have. For this to work, the buffer declaration requires a unit.

@Silmathoron: I think that this is the most general solution we could have. If we did it the other way around, there would be no straightforward way for the convolve function to know the right unit to add in the context it is used in.
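A quick dimensional sanity check of this arrangement (illustrative Python, not part of any NESTML machinery): with a unitless shape, the convolution inherits the buffer's unit, and a conductance-based current of the form convolve(g, spikes) * (V_m - E) comes out in pA when the buffer is declared in nS, since nS · mV = pA.

```python
# Represent a unit as a power-of-ten prefix plus exponents of the SI
# base units (kg, m, s, A).
def unit(prefix, **exps):
    return (prefix, tuple(sorted(exps.items())))

def mul(u1, u2):
    # Multiply two units: add prefixes and base-unit exponents.
    p1, e1 = u1
    p2, e2 = u2
    combined = dict(e1)
    for k, v in e2:
        combined[k] = combined.get(k, 0) + v
    combined = {k: v for k, v in combined.items() if v != 0}
    return (p1 + p2, tuple(sorted(combined.items())))

nS = unit(-9, kg=-1, m=-2, s=3, A=2)   # siemens = kg^-1 m^-2 s^3 A^2
mV = unit(-3, kg=1, m=2, s=-3, A=-1)   # volt    = kg m^2 s^-3 A^-1
pA = unit(-12, A=1)

# convolve(unitless shape, buffer in nS) -> nS; times (V_m - E) in mV:
assert mul(nS, mV) == pA
```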

@Silmathoron
Member

Silmathoron commented Jul 12, 2017

@DimitriPlotnikov normalization for alpha shapes, for example (so that the synaptic weight is actually the maximum value of the PSC).
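For the alpha shapes used in the examples in this thread, that normalization is the e/tau prefactor: a quick check (illustrative Python only) that (e/tau) * t * exp(-t/tau) peaks at exactly 1 at t = tau, so the synaptic weight equals the maximum of the PSC.

```python
import math

def alpha(t, tau):
    # The alpha shape from the examples: (e/tau) * t * exp(-t/tau).
    return (math.e / tau) * t * math.exp(-t / tau)

tau = 2.0
peak = alpha(tau, tau)  # (e/tau) * tau * e^-1 = 1
assert abs(peak - 1.0) < 1e-12
# Values away from t = tau are strictly smaller:
assert alpha(0.9 * tau, tau) < peak and alpha(1.1 * tau, tau) < peak
```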

@jougs You mean that the buffer gathers the weighted inputs from the different synapses that arrive at a single neuron, right? But the synaptic weight is just a factor and is interpreted in terms of a dimensioned PSC only once the model is known, so I'm not sure I understand why this would be a problem.
I do understand the potential issue for using the shapes across models, though, even if I think we should be able to find a solution to that...

And I didn't get the last auto-documentation part :s

I must say I'm still unsure about having the same object (a spike buffer) get different units depending on the model... are you really sure there is no other way?
Could we organize a small meeting to discuss this in detail? It's a little complicated to explain everything on GitHub, so for now I don't really understand what the problems you mentioned are...

So for example how would you handle the dimensions in the model @DimitriPlotnikov started to write in #330?

@DimitriPlotnikov
Contributor

DimitriPlotnikov commented Jul 17, 2017

Proposal for the discussion:
The first pair shows the introduction of types on the buffers and the introduction of the initial_values block that captures variables which are modeled by an ODE.

neuron iaf_psc_alpha_old:
  state:
    V_m mV
  end

  equations:
    shape I_shape_in = (e/tau_syn_in) * t * exp(-1/tau_syn_in*t) # shape is "unitless"
    shape I_shape_ex = (e/tau_syn_ex) * t * exp(-1/tau_syn_ex*t)
    function I pA = curr_sum(I_shape_in, in_spikes) + curr_sum(I_shape_ex, ex_spikes)
    V_m' = -1/Tau * (V_m - E_L) + 1/C_m * I
  end

  input:
    ex_spikes   <- excitatory spike
    in_spikes   <- inhibitory spike
    currents    <- current
  end

  output: spike

  update:
    # spikes are applied automatically. computed initial values are used.
  end
end
neuron iaf_psc_alpha_new:
  initial_values:
    V_m = 0mV # <- CHANGE
  end

  equations:
    shape I_shape_in = (e/tau_syn_in) * t * exp(-1/tau_syn_in*t) # shape is "unitless"
    shape I_shape_ex = (e/tau_syn_ex) * t * exp(-1/tau_syn_ex*t)
    function I pA = convolve(I_shape_in, in_spikes) + convolve(I_shape_ex, ex_spikes) # <- CHANGE
    V_m' = -1/Tau * (V_m - E_L) + 1/C_m * I
  end

  input:
    ex_spikes pA  <- excitatory spike # <- CHANGE
    in_spikes pA  <- inhibitory spike # <- CHANGE
    currents    <- current
  end

  output: spike

  update:
    # spikes are applied automatically. computed initial values are used.
  end
end

New syntax for the shapes as ODEs

neuron iaf_cond_alpha_implicit_old:

  state:
    V_m mV = E_L     ## membrane potential

    g_in nS = 0nS    ## inputs from the inh conductance
    g_ex nS = 0nS    ## inputs from the exc conductance

    g_in' nS/ms = 0    ## inputs from the inh conductance
    g_ex' nS/ms = 0    ## inputs from the exc conductance
  end

  equations:
    g_in'' = (-1)/(tau_syn_in)**(2)*g_in+(-2)/tau_syn_in*g_in'
    g_in' = g_in'
    g_ex'' = (-1)/(tau_syn_ex)**(2)*g_ex+(-2)/tau_syn_ex*g_ex'
    g_ex' = g_ex'

    function I_syn_exc pA = cond_sum(g_ex, spikeExc) * ( V_m - E_ex )
    function I_syn_inh pA = cond_sum(g_in, spikeInh) * ( V_m - E_in )
    function I_leak pA = g_L * ( V_m - E_L )

    V_m' = ( -I_leak - I_syn_exc - I_syn_inh + currents + I_e ) / C_m
  end

  internals:
    PSConInit_E nS/ms = nS * e / tau_syn_ex
    PSConInit_I nS/ms = nS * e / tau_syn_in
  end

  input:
    spikeInh   <- inhibitory spike
    spikeExc   <- excitatory spike
    currents <- current
  end

  output: spike

  update:
    # ...
    g_ex' += spikeExc * PSConInit_E
    g_in' += spikeInh * PSConInit_I
  end
end
neuron iaf_cond_alpha_implicit_new:

  initial_values:  # <- CHANGE
    V_m mV = E_L     
    g_in real = 0nS # <- CHANGE   
    g_ex real = 0nS # <- CHANGE   
    g_in' 1/ms = e / tau_syn_in # <- CHANGE 
    g_ex' 1/ms = e / tau_syn_ex  # <- CHANGE
  end

  equations:
    shape g_in'' = (-1)/(tau_syn_in)**(2)*g_in+(-2)/tau_syn_in*g_in' # <- CHANGE
    shape g_in' = g_in'                                              # <- CHANGE
    shape g_ex'' = (-1)/(tau_syn_ex)**(2)*g_ex+(-2)/tau_syn_ex*g_ex' # <- CHANGE
    shape g_ex' = g_ex'                                              # <- CHANGE

    function I_syn_exc pA = convolve(g_ex, spikeExc) * ( V_m - E_ex )
    function I_syn_inh pA = convolve(g_in, spikeInh) * ( V_m - E_in )
    function I_leak pA = g_L * ( V_m - E_L )

    V_m' = ( -I_leak - I_syn_exc - I_syn_inh + currents + I_e ) / C_m
  end

  input:
    spikeInh  nS <- inhibitory spike
    spikeExc  nS <- excitatory spike
    currents <- current
  end

  output: spike

  update:
    # variables responsible for modeling of `convolve(g_ex, spikeExc)`
    # are updated automatically here 
  end
end

@DimitriPlotnikov
Contributor

@PTraeder Any progress?

@PTraeder
Contributor Author

PTraeder commented Jul 27, 2017

It is going slower than I expected. I have to fight the type system at every step.

And sorry for the long response time, the notification was buried in MC spam.

@PTraeder
Contributor Author

PTraeder commented Jul 30, 2017

The problems I ran into stemmed from my wanting to adapt the type system to still make a distinction between BUFFER, PRIMITIVE and UNIT TypeSymbol kinds. This proved painful since, among other problems, the type system is often queried with nothing but the name of the type, resulting in an ambiguity between a type such as ("integer", PRIMITIVE) and ("integer", BUFFER).
I have since reworked my approach to completely get rid of the BUFFER kind for TypeSymbol objects.
Since there is one instance where we have to use the TypeSymbol to infer whether it denotes a buffer or a different type (namely when generating NEST code), the TypeSymbol class has been extended with a boolean field denoting its origin in a buffer declaration, if applicable.
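A minimal Python sketch of that reworked design (class and field names are illustrative, not the actual NESTML implementation):

```python
from dataclasses import dataclass
from enum import Enum, auto

class TypeKind(Enum):
    # The BUFFER kind is gone; only these remain.
    PRIMITIVE = auto()
    UNIT = auto()

@dataclass(frozen=True)
class TypeSymbol:
    name: str
    kind: TypeKind
    # Records that the type originated in a buffer declaration, used
    # only where needed (e.g. when generating NEST code).
    is_buffer: bool = False

# A name-only lookup now resolves to a single type; buffer origin is
# just a flag on it, not a competing kind.
integer = TypeSymbol("integer", TypeKind.PRIMITIVE)
buffered = TypeSymbol("integer", TypeKind.PRIMITIVE, is_buffer=True)
assert integer.name == buffered.name  # same type name,
assert integer != buffered            # distinguished only by origin
```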

@DimitriPlotnikov
Contributor

Sounds reasonable.

@heplesser

I support @jougs' suggestion above and find the _new examples by @DimitriPlotnikov much cleaner than the _old ones.

Just one question: I am a bit confused by g_ex' = g_ex' and similar as both sides of the equation are identical.

@DimitriPlotnikov
Contributor

@heplesser
Equations like g_ex' = g_ex' are needed for the integration with GSL. However, these equations can be inferred automatically and will be inferred in the future.

DimitriPlotnikov pushed a commit that referenced this issue Oct 4, 2017
Introduces Initial values block (cf. #368 for the discussion)
DimitriPlotnikov added a commit that referenced this issue Oct 6, 2017
Introduced typed buffers as discussed in #368
@DimitriPlotnikov
Contributor

Done in #385.
