Spikegenerator precise time patch #327

Merged: 9 commits merged into nest:master on May 4, 2016

Conversation

4 participants
@jgarridoalcazar (Contributor) commented Apr 29, 2016

The following pyNN code produces an issue when running with NEST backend:

import pyNN.nest as sim
sim.setup(timestep=0.1)
exc_stim = sim.Population(1, sim.SpikeSourceArray, cellparams={'spike_times': [353538.4]})

Internally, pyNN sets the 'precise_times' property of the spike_generator model to true, and the assertion fails when large spike times are used.

Scaling the machine epsilon according to the magnitude of the values involved solves this issue. This is also recommended in the documentation of the epsilon function (http://en.cppreference.com/w/cpp/types/numeric_limits/epsilon).
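A minimal Python sketch (my own illustration, not the C++ patch itself) of why an unscaled machine epsilon is too small a tolerance at this magnitude:

```python
import math
import sys

eps = sys.float_info.epsilon  # spacing of doubles near 1.0, ~2.2e-16
t = 353538.4                  # the spike time from the bug report

# An absolute tolerance of eps is invisible at this magnitude:
# adding it does not even change the value.
assert t + eps == t

# The spacing between adjacent doubles near t is several orders of
# magnitude larger than eps (math.ulp requires Python 3.9+).
spacing = math.ulp(t)
assert spacing > eps

# Scaling eps by |t|, as cppreference recommends, yields a tolerance of
# the same order as the local spacing, so a rounding residue of a few
# ulp is tolerated rather than tripping the assertion.
scaled_tol = eps * abs(t)
assert scaled_tol >= spacing / 2
assert t + scaled_tol != t
```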

jgarridoalcazar and others added some commits Dec 10, 2015

Scaling of the computer epsilon to solve pyNN bug
The following pyNN code produces an issue when running with NEST backend:
>> import pyNN.nest as sim
>> sim.setup(spike_precision='on_grid',timestep=0.1)
>> exc_stim = sim.Population(1, sim.SpikeSourceArray, cellparams={'spike_times': [353538.4]})

Scaling the machine epsilon according to the magnitude of the values involved solves this issue. This is also recommended in the documentation of the epsilon function (http://en.cppreference.com/w/cpp/types/numeric_limits/epsilon).
jgarridoalcazar
Improved code format
The format of the modified code has been improved according to the
Travis comments.
jgarridoalcazar
Improved code format (II)
spike_generator.cpp has been adapted by clang-format according to the
developer rules.
@abigailm (Contributor) commented May 2, 2016

Is this even a NEST issue, or exclusively a NEST issue? As the default parameter in NEST is set to False, shouldn't PyNN do likewise?

@apdavison commented May 2, 2016

@jgarridoalcazar can you post the error message which appears?

@abigailm it does seem to be a NEST issue; all PyNN is doing here, I think, is calling SetStatus on a built-in NEST model.

@abigailm (Contributor) commented May 2, 2016

@apdavison yes, but if it is calling SetStatus and setting a parameter that is by default False in NEST to True, without the PyNN user explicitly stating this, then this is a surprising side effect, isn't it?

@abigailm (Contributor) commented May 2, 2016

@apdavison Obviously assertions failing is not good. So it is also a NEST problem. But I still don't think PyNN should be quietly adjusting default values.

@apdavison commented May 2, 2016

@abigailm So you think that PyNN should always use the same default values as NEST? The PyNN default is to always prioritize accuracy / cross-simulator compatibility over performance.

I may well be missing something here, but I don't see that this is a PyNN problem at all. As long as "True" is a valid value for the parameter, any user could set it to this value in PyNEST or SLI.

@jgarridoalcazar (Contributor) commented May 2, 2016

@apdavison The exact error I get when I use that code is the following:

Assertion failed: (offset >= 0.0), function assert_valid_spike_time_and_insert_, file ../../nest-2.10.0/models/spike_generator.cpp, line 159.

The same issue is also open on the pyNN GitHub (NeuralEnsemble/PyNN#399) with no solution so far.

I can also say that if we try to reproduce the error using more or less equivalent pyNEST code, it runs with no issue at all, so I guess that pyNN internally rounds/approximates/calculates a similar value (but with a different internal representation):

import nest
import numpy
nest.SetDefaults('spike_generator', {'precise_times': True})
nest.Create('spike_generator', 1, params={'spike_times': numpy.array([353538.4])})

But in any case, the value pyNN produces should be valid for NEST even though we cannot reproduce that exact value (or the internal representation) without using pyNN, so I guess the bug is related to NEST.

@heplesser (Contributor) commented May 3, 2016

@apdavison @abigailm @jgarridoalcazar I had a look at the code and this is a NEST issue related to how we handle rounding errors. It just never surfaced so far, I suppose, since no one used such large spike times. I will create a fix shortly.

@jgarridoalcazar (Contributor) commented May 3, 2016

Thanks @heplesser. At least for my use case, the code in this pull request seems to solve the issue.

@abigailm (Contributor) commented May 3, 2016

@apdavison Unless the PyNN default is to select the precise versions of the neuron models (which have different names), I don't think changing the parameter default to True enhances either accuracy or cross-simulator compatibility, as the grid-based versions will ignore any off-grid component of the received spikes. So in this case, it seems that the principle of least surprise would suggest that PyNN should not re-set this default, and instead let users opt in to off-grid computations. But I also may be missing something here. And I reiterate that this is not as big a deal as NEST generating an error for what should be a valid spike time assignment.

@apdavison commented May 3, 2016

@abigailm Sorry, I wasn't clear. PyNN does indeed select the precise versions of the models where these are available.

@heplesser (Contributor) commented May 3, 2016

@jgarridoalcazar I hadn't realized you had filed this as a PR; I thought it was an issue. I have now created a PR towards your PR that adds a regression test. Once you merge my PR, it will appear here and we can start the code review.

In order to trigger the assertion from NEST without PyNN, I had to specify the spike time as 353538.4 0.1 sub (SLI postfix notation for 353538.4 - 0.1) to create a number with "bad numerics".
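The same "bad numerics" can be seen in plain Python (my own check, not from the thread): subtracting 0.1 from 353538.4 in double precision does not yield the double nearest to 353538.3, and it is this kind of sub-ulp residue that the offset assertion tripped over.

```python
import math

# 353538.4 - 0.1 is correctly rounded, but the result is not the same
# double as the literal 353538.3: the two roundings land on adjacent
# doubles, exactly one spacing (ulp) apart at this magnitude.
t = 353538.4 - 0.1
assert t != 353538.3
assert t > 353538.3
assert t - 353538.3 == math.ulp(353538.3)
```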

heplesser and others added some commits May 3, 2016

@jgarridoalcazar (Contributor) commented May 3, 2016

@heplesser I have already merged your additions. Thanks!

@abigailm (Contributor) commented May 3, 2016

@apdavison then it completely makes sense for PyNN to set the precise parameter to True by default, and I withdraw all assertions to the contrary :)

@heplesser (Contributor) commented May 3, 2016

👍 from me. If @abigailm is also happy with the fix, we are ready to merge.

@abigailm (Contributor) commented May 4, 2016

yup 👍

@abigailm abigailm merged commit 74e67b9 into nest:master May 4, 2016

1 check passed

continuous-integration/travis-ci/pr The Travis CI build passed

@jgarridoalcazar jgarridoalcazar deleted the jgarridoalcazar:spikegenerator-precise-time-patch branch May 4, 2016
