Expose capacitive current in SpatialNeuron #677

Closed
romainbrette opened this Issue Apr 15, 2016 · 12 comments

romainbrette commented Apr 15, 2016

To calculate extracellular fields (LFP), we need to know the capacitive current C*dv/dt. There's an example (lfp.py) that does that using run_regularly to store v from the previous step, etc., but it's not elegant. So it would be nicer to have a state variable Ic in the group, standing for the capacitive current (in amp/meter**2), which would be updated in the template when the equation is integrated.
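For illustration, here is a minimal sketch of the run_regularly workaround described above (parameter values and the variable name I_cap are made up for this sketch, not copied from lfp.py):

    from brian2 import *

    # Store v from the previous time step with run_regularly and reconstruct the
    # capacitive current by hand.
    morpho = Cylinder(length=100*um, diameter=1*um, n=10)
    eqs = Equations('''
    Im = gl*(El - v) : amp/meter**2
    v_previous : volt                              # written by run_regularly below
    I_cap = Cm*(v - v_previous)/dt : amp/meter**2  # the capacitive current C*dv/dt
    ''', gl=0.1*msiemens/cm**2, El=-70*mV)
    neuron = SpatialNeuron(morphology=morpho, model=eqs, Cm=1*uF/cm**2, Ri=150*ohm*cm,
                           method='exponential_euler')
    neuron.v = -70*mV
    neuron.run_regularly('v_previous = v', when='start')          # remember v before the update
    mon = StateMonitor(neuron, 'I_cap', record=True, when='end')  # read I_cap after the update
    run(10*ms)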

phreeza commented Mar 23, 2017

I would be interested in having this feature. Here are some thoughts:

  • I have tried obtaining the capacitive currents as explained in the LFP example, in a different script. I was able to get it to work after modifying the example slightly, adding a different order parameter to the run_regularly function. However, when I sum up the currents across compartments (i.e. Im + Icap), they do not sum to zero, which, as far as I understand it, should not be possible according to Kirchhoff's circuit laws.

  • I find the nomenclature a bit misleading. The documentation says Im is the total membrane current, when in fact it is only the ionic part, to which the capacitive part must be added to obtain the total current. Maybe this also means that I have misunderstood something.

  • I have looked into implementing the feature directly, but @mstimberg correctly removed the easy tag, as this appears to be no trivial matter. The voltages are updated directly in the spatialstateupdate codegen module, which means that most likely all of the templates would have to be updated to make this possible.

romainbrette commented Mar 23, 2017

mstimberg commented Mar 23, 2017

I have tried obtaining the capacitive currents as explained in the LFP example, in a different script. I was able to get it to work after modifying the example slightly, adding a different order parameter to the run_regularly function.

Hmm, I'm surprised that this makes a difference. In the example, the run_regularly operation is performed in the start slot and the summed variable is updated in after_groups -- the order parameter only affects operations happening within the same slot, so I don't see how it should matter. Either way, I think we all agree that juggling with when and order is not what a user should be forced to do for this use case.
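For orientation (this snippet is not part of the original example): the slots mentioned here come from the network's schedule, and order only breaks ties between objects in the same slot.

    from brian2 import Network

    # The default Brian 2 schedule; each named slot also has implicit
    # before_... / after_... companions (e.g. after_groups).
    print(Network().schedule)
    # ['start', 'groups', 'thresholds', 'synapses', 'resets', 'end']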

However, when I sum up the currents across compartments (ie Im+Icap), they do not sum up to zero, which as far as I understand it should not be possible according to Kirchhoff's circuit laws.

Are you sure this is not just a numerical artefact? I did a quick check with the LFP example, and the two currents (summed across compartments) look identical, with small deviations during the spike. Using a smaller time step makes the deviations go away (or at least get very small).

I find the nomenclature a bit misleading. The documentation says Im is the total membrane current, when in fact it is only the ionic part, to which the capacitive part must be added to obtain the total current. Maybe this also means that I have misunderstood something.

We can certainly make this clearer in the documentation, but I think it is standard nomenclature, e.g. the Dayan & Abbott book says: "The total current flowing across the membrane through all of its ion channels is called the membrane current of the neuron." Actually, while the user normally only defines the ionic currents as part of Im, all electrode currents (using the point current mechanism) will also end up in Im, so we should be careful here.

I have looked into implementing the feature directly, but @mstimberg correctly removed the easy tag, as this appears to be no trivial matter. The voltages are updated directly in the spatialstateupdate codegen module, which means that most likely all of the templates would have to be updated to make this possible.

I had another look and while this is certainly not trivial for someone not familiar with the details of Brian's code generation mechanism, it should not be that hard either. We need a new variable, say Ic, in the equations (similar to the way we add volume, area, and other constants) and then indeed change all the spatialstateupdate templates. The necessary change is relatively straightforward though:

  1. Add Ic to the list of variables used in the template (at the top in the USES_VARIABLES block)
  2. Before the last block of code in the template (the block that updates the value of {{v}}), copy the current value of {{v}}, i.e. for numpy something like _v_previous = {{v}}.copy().
  3. After the update, calculate the difference between the new and the old values, and update Ic accordingly. For numpy: {{Ic}}[:] = {{Cm}}*({{v}} - _v_previous)/{{dt}}

Note that in the above, all names except _v_previous are template variables, e.g. {{v}} will be replaced by something like _array_neurongroup_v in the final code. For _v_previous, this is not necessary, since it is a temporary variable that is not stored. A rough sketch of what this could look like in the numpy template is shown below.
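(Illustrative sketch only: the surrounding code of the real numpy spatialstateupdate template is omitted and the exact placement may differ.)

    {# Step 1: make Ic (and Cm, dt) available to the template #}
    {# USES_VARIABLES { Ic, Cm, dt, v, ... } #}

    # ... existing state update code ...

    # Step 2: remember v before it is overwritten
    _v_previous = {{v}}.copy()

    # ... existing block that writes the new values into {{v}} ...

    # Step 3: reconstruct the capacitive current from the change in v
    {{Ic}}[:] = {{Cm}}*({{v}} - _v_previous)/{{dt}}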

If you are interested in working on this, please go ahead! Try it first for one single code generation target (numpy should be the easiest), converting the same approach to Cython and C++ should then be straightforward.

phreeza commented Mar 23, 2017

Are you sure this is not just a numerical artefact? I did a quick check with the LFP example, and the two currents (summed across compartments) look identical, with small deviations during the spike. Using a smaller time step makes the deviations go away (or at least get very small).

I do think it is a numerical artefact; it decreases when I choose a smaller timestep. However, it does not go away to a sufficient degree with a reasonable timestep. It is actually quite important to get this right when calculating LFPs, because this sort of error leads to monopoles, which decay in space as 1/r and can quickly overwhelm the "true" currents, which are always dipoles or of higher order and decay as 1/r^2 or faster.
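To illustrate why this matters (a generic point-source sketch with made-up numbers, not code from this thread): in the usual point-source approximation the extracellular potential is V(r) = (1/(4*pi*sigma)) * sum_n I_n/|r - r_n|, so any residual net current acts like a monopole whose contribution falls off only as 1/r.

    import numpy as np

    sigma = 0.3  # extracellular conductivity (S/m), a typical textbook value
    # two compartments 100 um apart along z
    r_n = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 100e-6]])
    I_dipole = np.array([1e-9, -1e-9])   # balanced currents (A): a pure dipole
    I_leaky = np.array([1e-9, -0.9e-9])  # 10% imbalance, e.g. from numerical error

    def v_ext(r, currents):
        """Point-source approximation of the extracellular potential at r."""
        dist = np.linalg.norm(r - r_n, axis=1)
        return np.sum(currents / dist) / (4 * np.pi * sigma)

    for d in (1e-3, 1e-2, 1e-1):  # 1 mm, 1 cm, 10 cm from the compartments
        r = np.array([0.0, 0.0, d])
        print(d, v_ext(r, I_dipole), v_ext(r, I_leaky))
    # The balanced (dipole) potential falls off roughly as 1/r^2, the imbalanced
    # one as 1/r, so the spurious monopole term dominates at larger distances.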

For this reason I would prefer to figure out how to make the current balance work out correctly, which means I have to understand the numerical integration scheme used in the spatialstateupdate, and hopefully construct the correct total membrane current from the intermediate variables.

My rough guess is that the method in spatialstateupdate is something like a Crank-Nicolson update, while the Ic = Cm*(v - _v_previous)/dt trick is implicitly based on an Euler update, and that is where the mismatch comes from. Does that sound plausible?

Is the variable naming taken from a particular more human-readable derivation of the scheme which I could read and try to figure out?

phreeza commented Mar 23, 2017

Hmm, I'm surprised that this makes a difference. In the example, the run_regularly operation is performed in the start slot and the summed variable is updated in after_groups -- the order parameter only affects operations happening within the same slot, so I don't see how it should matter. Either way, I think we all agree that juggling with when and order is not what a user should be forced to do for this use case.

I am calculating a new variable I_cap = Cm*(v-previous_v)/dt right in the model equations and reading that with a StateMonitor. Without the order parameter, v and previous_v were always equal, and the current was calculated as 0.

phreeza commented Mar 24, 2017

To make sure that the issue does not come from the scheduling being different from the one in the example, I have also implemented the version with the modified codegen template, as you suggested. The problem is exactly the same. Here is an example of what happens; this is actually done with a single compartment, just to narrow it down.
[Attached image: figure_1, showing the mismatch between the capacitive and ionic currents]

mstimberg commented Mar 24, 2017

I have to think about all this (@romainbrette is the expert on these topics).

For clarification: in your example that produced the figure, did you use the SpatialNeuron class with a single compartment or did you use a simple NeuronGroup? Could you maybe share the source code? In your plot, it looks as if there's a shift (of one time step?) between the capacitive and the ionic current, maybe it is still a scheduling issue?

I am calculating a new variable I_cap = Cm*(v-previous_v)/dt right in the model equations and reading that with a StateMonitor. Without the order parameter, v and previous_v were always equal, and the current was calculated as 0.

I see, that explains it. If you define a subexpression like I_cap in your equations, then it will be evaluated whenever it is used. The StateMonitor records by default in the start slot, i.e. in the same slot as the run_regularly operation. You could also define it with the constant over dt flag; it would then be updated only once per time step, in the before_start slot (which means that the best place for the run_regularly operation would probably be end).

Is the variable naming taken from a particular more human-readable derivation of the scheme which I could read and try to figure out?

Unfortunately we do not have a nice and clear documentation of this at this point, only a bunch of notes and "internal documents". Our approach basically follows two papers by Mascagni (i.e., backward Euler and domain decomposition):
http://oadoi.org/10.1137/0727054
http://oadoi.org/10.1016/0165-0270(91)90143-N

phreeza commented Mar 24, 2017

Could you maybe share the source code? In your plot, it looks as if there's a shift (of one time step?) between the capacitive and the ionic current, maybe it is still a scheduling issue?

Here is a simplified example that shows the same behaviour. I also compared the output of this version to using the modified spatialstateupdate, and it is exactly the same. So I think it is not a scheduling issue.

def test_runregularly_trick():
    from brian2.units import cm, um, msiemens, mV, ohm, uF, uA, ms
    from brian2 import Cylinder, SpatialNeuron, Equations, defaultclock, StateMonitor, Network
    from matplotlib import pyplot as plt

    morpho = Cylinder(x=[0, 10] * cm, diameter=2 * 238 * um, n=1, type='axon')

    defaultclock.dt = 0.1 * ms
    # Only leak current; the capacitive current is reconstructed by hand from v
    eqs = '''
    Im = gl * (El - v) : amp/meter**2
    I : amp (point current)  # applied current
    v_previous : volt
    I_C = Cm*(v - v_previous)/dt : amp/meter**2
    I_total = I_C - Im : amp/meter**2
    '''
    model = Equations(eqs,
                      El=10.613 * mV,
                      gl=0.3 * msiemens / cm ** 2)
    neuron = SpatialNeuron(morphology=morpho, model=model, Cm=1 * uF / cm ** 2,
                           Ri=35.4 * ohm * cm, method="exponential_euler")
    # store v before it gets updated in each time step
    neuron.run_regularly('v_previous = v', when='start', order=1)

    neuron.v = 0.0 * mV
    neuron.I = 0

    mon = StateMonitor(neuron, ['Im', 'I_C', 'I_total', 'v', 'v_previous'], record=True)
    network = Network(neuron, mon)
    network.run(50 * ms)
    neuron.I[0] = 1 * uA  # current injection at one end
    network.run(3 * ms)
    neuron.I = 0 * uA
    network.run(100 * ms)

    plt.plot(mon.t, mon.Im.sum(axis=0), label='ionic')
    plt.plot(mon.t, mon.I_C.sum(axis=0), label='capacitive')
    plt.plot(mon.t, mon.I_total.sum(axis=0), label='total')
    plt.legend()
    plt.title('previous_v trick')
    plt.show()

I still don't quite understand how the full update is calculated. The Mascagni paper mentions iterating on calculating V and n, m, h until the solution converges for each timestep. Is this done here, or is it just a single pass? Anyway, in the example above there are no gating variables to update, so even a single pass should be precise. Could it be that the Im that is exported is a different one than the one implied in spatialstateupdate?

phreeza commented Mar 24, 2017

I have now managed to get the above example to work by telling the StateMonitor to record at end. However, when I reintroduce the gating variables, there are still discrepancies.

mstimberg commented Mar 24, 2017

I still don't quite understand how the full update is calculated. The Mascagni paper mentions iterating on calculating V and n, m, h until the solution converges for each timestep. Is this done here, or is it just a single pass?

Sorry, we don't actually use backward Euler for the gating variables. We do a staggered update, where we first update the gating variables (normally using exponential Euler) and then the membrane potential (this is based on backward Euler, but it has an exact solution, so no iteration is needed).
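Schematically (single compartment, axial terms omitted, generic notation rather than Brian's internal variable names), one step of such a staggered scheme is:

    % gating variables first, by exponential Euler (for dx/dt = (x_inf(v) - x)/tau_x(v)):
    x_{t+\Delta t} = x_\infty(v_t) + \left(x_t - x_\infty(v_t)\right) e^{-\Delta t / \tau_x(v_t)}
    % then the membrane potential, by backward Euler with the updated gating variables:
    C_m \frac{v_{t+\Delta t} - v_t}{\Delta t} = I_m\left(v_{t+\Delta t},\, x_{t+\Delta t}\right)
    % for conductance-based currents, I_m is linear in v once the gating variables are
    % fixed, so the second equation can be solved for v_{t+\Delta t} in closed form.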

I have now managed to get the above example to work by telling the StateMonitor to record at end. However, when I reintroduce the gating variables, there are still discrepancies.

Right, this is subtle and I am not 100% sure we are not doing something slightly incorrect in Brian here 🤔 . The problem is that the Im that you are comparing to I_C is not exactly the Im that was used to update v: by default, during a time step we first update v and then the gating variables. If you record Im and I_C at the end of the time step, you will get an I_C that is based on the previous time step's gating variables (because that's what was used to update v), but an Im based on the updated gating variables. This difference will normally be small, but not when the gating variables change quickly. Now you could record Im after the update of v and before the update of the gating variables. But I wonder whether we don't get the order wrong and should rather first update the gating variables and then v; this seems more intuitive. You can try it out by changing the schedule in your code:

neuron.diffusion_state_updater.order = 1

If I do this in your example (with added active channels, and with 1 or more compartments), then the Im and I_C seem to match pretty well.

phreeza commented Mar 25, 2017

neuron.diffusion_state_updater.order = 1
If I do this in your example (with added active channels, and with 1 or more compartments), then the Im and I_C seem to match pretty well.

That does seem to work perfectly, thanks! So to recap, the steps that were necessary were (a consolidated sketch follows the list):

  1. Add a variable Ic = Cm*(v-v_previous)/dt to the model. (Either explicitly or in spatialstateupdate)
  2. Update the variable v_previous before v gets updated. (Either by setting up a run_regularly or modifying spatialstateupdate)
  3. Make v be updated after gating variables to ensure Im is calculated consistently with Ic. This is achieved by setting neuron.diffusion_state_updater.order = 1.
  4. Make Im and Ic be recorded after these steps, by setting when='end' in the StateMonitor.
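A rough end-to-end sketch of these four steps on the user side (passive membrane and made-up parameter values, so not the exact script used above; the capacitive current is called I_cap here):

    from brian2 import *

    morpho = Cylinder(length=100*um, diameter=1*um, n=5)
    eqs = Equations('''
    Im = gl*(El - v) : amp/meter**2
    v_previous : volt
    I_cap = Cm*(v - v_previous)/dt : amp/meter**2  # step 1: the capacitive current (Ic above)
    ''', gl=0.1*msiemens/cm**2, El=-70*mV)
    neuron = SpatialNeuron(morphology=morpho, model=eqs, Cm=1*uF/cm**2, Ri=150*ohm*cm,
                           method='exponential_euler')
    neuron.v = -70*mV
    neuron.run_regularly('v_previous = v', when='start')  # step 2: store v before it is updated
    neuron.diffusion_state_updater.order = 1              # step 3: update v after the gating variables
    mon = StateMonitor(neuron, ['Im', 'I_cap'], record=True, when='end')  # step 4: record at the end
    run(20*ms)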

I will send a PR with my modified spatialstateupdate that exports an Ic. However, I wonder whether it is prudent to provide this current if steps 3 and 4 still have to be done by anyone wanting to use it and have 'self-consistent' currents Im and Ic which sum to 0. What do you think?

mstimberg commented Mar 27, 2017

I will send a PR with my modified spatialstateupdate that exports an Ic. However I wonder if it is prudent to provide this current, if steps 3 and 4 will still have to be done by anyone wanting to use it and have 'self-consistent' currents Im and Ic which sum to 0. What do you think?

Step 4 will not be necessary when Ic is set in spatialstateupdate; recording in start or end will only make the difference between getting the values from before or after the update that occurs during the time step -- this is true for all variables, so it is not a problem.

I do still wonder whether the opposite order of operations, i.e. the one you get when setting neuron.diffusion_state_updater.order = 1, does not make more sense in general anyway. I'll have a closer look into this. If this is the case, then we could just change the order, and you wouldn't have to do anything extra to get consistent Ic and Im results.

mstimberg closed this in #828 Apr 1, 2017
