Avoid divisions/calculation of steps #685

Merged
heplesser merged 6 commits into nest:master from jakobj:enh/avoid_divisions on Jun 14, 2017

4 participants
@jakobj
Contributor

jakobj commented Mar 20, 2017

This PR implements the changes suggested by Harald Servat to reduce run time by avoiding divisions and by adding additional members to the Event and Time objects that store frequently used values instead of recalculating them from the other members on every invocation. Preliminary benchmarks show a run-time improvement of ~15% on JUQUEEN. Plots will follow.
I suggest @heplesser @apeyser @terhorstd as reviewers. If Harald Servat could also have a look, that would be great; unfortunately, I don't know his GitHub handle.
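For readers skimming the diff, the core trick looks roughly like the sketch below. This is a simplified illustration, not the actual NEST code: STEPS_PER_MS and MS_PER_STEP are names from NEST's struct Range mentioned later in this thread, while the helper function and surrounding structure are invented here.

// Simplified sketch of "avoid divisions": precompute reciprocals of constants
// once (e.g. when the resolution is set) and multiply in hot code paths.
struct RangeSketch
{
  static double MS_PER_STEP;  // set when the resolution changes
  static double STEPS_PER_MS; // cached reciprocal of MS_PER_STEP
};

double RangeSketch::MS_PER_STEP = 0.1;
double RangeSketch::STEPS_PER_MS = 1.0 / RangeSketch::MS_PER_STEP;

inline double
ms_to_steps( double ms )
{
  // before: return ms / RangeSketch::MS_PER_STEP;  (a division on every call)
  return ms * RangeSketch::STEPS_PER_MS; // one multiplication instead
}

Note that when the result is subsequently truncated to an integer step count, the rounding behaviour of the multiplication can differ from that of the division, which is the precision concern raised below.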

@apeyser
Contributor

apeyser commented Mar 20, 2017

Please see my poster from two years ago with @wschenck on why you shouldn't cache values like this.

On the other hand, inverting constants or statics and using them as multiplicative values is good -- as long as you aren't pushing the envelope of the floating point ranges and losing precision.
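As a side note on the precision caveat, a tiny standalone check (illustrative only, not NEST code) shows that multiplying by a precomputed reciprocal is not always bit-identical to dividing, because the reciprocal itself has been rounded:

#include <cstdio>

int main()
{
  const double d = 3.0;       // an arbitrary divisor
  const double inv = 1.0 / d; // precomputed reciprocal, rounded once
  int mismatches = 0;
  for ( int i = 1; i <= 1000000; ++i )
  {
    const double x = static_cast< double >( i );
    if ( x / d != x * inv ) // two roundings instead of one can change the result
    {
      ++mismatches;
    }
  }
  std::printf( "%d of 1000000 integers give x/d != x*inv\n", mismatches );
  return 0;
}

For well-scaled constants any difference is tiny (on the order of one unit in the last place), which is why the concern is mainly about values near the edges of the floating-point range.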

@jakobj
Contributor

jakobj commented Mar 20, 2017

Thanks for the input @apeyser. If I understand you correctly, I should remove the steps member in the Time class, the rest can stay?

@heplesser
Contributor

heplesser commented Mar 21, 2017

@apeyser Is your poster available online somewhere? It would be very useful for reference.

@apeyser
Contributor

apeyser commented Mar 21, 2017

@heplesser https://juser.fz-juelich.de/record/256297/files/poster.pdf , or just search alexander peyser sfn 2015 on Google Scholar.

@jakobj Yes -- I think storing the inversions is a very good idea -- someone should check the boundary conditions though, whether you lose precision for extreme values (probably not, but it needs to be checked). For the statics, it's particularly good -- I didn't do it originally only because the changes were already so big and I didn't want to confuse matters (if you look, some of it is that way already).

@jakobj
Contributor

jakobj commented Mar 27, 2017

Here are some benchmarks of the proposed changes. Sorry for the labels: div3 = all changes as proposed, div4 = all changes as proposed without the extra member in the Time objects. These seem to differ from your data @apeyser, as the additional member in Time objects leads to a decrease in runtime, not an increase. Something we should discuss in today's meeting.
JUQUEEN_master_jemalloc_git92d53f9_master_jemalloc_nodiv3_git0bd90b7_master_jemalloc_nodiv4_gitc2bd0b4_bympiprocesses.pdf

@apeyser
Contributor

apeyser commented Mar 28, 2017

@jakobj What's the difference between A & B?

@jakobj
Contributor

jakobj commented Mar 28, 2017

Right, sorry. (A) is run time, (B) is build time.

@apeyser
Contributor

apeyser commented Apr 5, 2017

@jakobj
So, to summarize: there's a small but significant (< 20%) improvement in run time from removing divisions and adding a caching member, but the difference between adding and not adding the caching of the step time is insignificant. In build time, there is a small improvement with the current code (10%?), and the difference between div3 and div4 is insignificant.

Is this correct? If so, I'd say it's worth removing the division but not adding the complexity of the caching. I don't think this conflicts with my results -- in my results, we removed two extra members (both steps and ms), and Wolfram had eliminated altogether the Time object construction for event dispatch. (That was the single biggest problem: the fat Time object was being passed, constructed, extracted, reconstructed with an integer + floating point division, and then repeated.)

These changes wouldn't affect the event dispatch, because now that code path is completely gone, and the conversion from int tics to int steps is a fraction of the computational cost of converting from int tics to int steps and float ms. Thus the two biggest problems are unaffected here.

I'm curious, though, about B and why the build time increases at all. That shouldn't happen from what I know of the functionality. Something's fishy. Maybe during build time there are a lot of operations on Time, so most steps_ computations are thrown away, while during run time it's a large number of identical times.

@@ -473,6 +485,7 @@ class Time
{

@apeyser

apeyser Apr 5, 2017

Contributor

This is where you're getting extra complexity that will also cost performance: anyplace where you have a loop around time += n secs you're going to compute unneeded steps. It's better to cache as in event.h, where you know you need to keep the value, and not everywhere in the code where, if you're lucky, it all balances out to zero, or gives you sometimes a gain, and sometimes a loss.
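To illustrate the point with hypothetical code (not the PR's actual Time class): if every mutating operation eagerly refreshes a steps_ cache, a loop that only accumulates time pays for conversions whose results are thrown away.

// Hypothetical eager-caching Time, only to show the cost pattern.
class EagerTime
{
public:
  explicit EagerTime( long tics )
    : tics_( tics )
  {
    calculate_steps(); // cache filled even if steps are never queried
  }

  EagerTime&
  operator+=( const EagerTime& other )
  {
    tics_ += other.tics_;
    calculate_steps(); // recomputed on every +=, wasted work in pure accumulation
    return *this;
  }

  long
  get_steps() const
  {
    return steps_;
  }

private:
  void
  calculate_steps()
  {
    steps_ = tics_ / 100; // stands in for the real tics-to-steps conversion
  }

  long tics_;
  long steps_;
};

// e.g. for ( int i = 0; i < n; ++i ) { t += dt; }  -> n unused step conversions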

@heplesser

heplesser Apr 7, 2017

Contributor

@apeyser I am a bit confused---did this comment end up in the wrong place?

@apeyser

apeyser Apr 7, 2017

Contributor

In commit d05335a it appears in the right place... but not in the PR discussion, it seems. To reduce the confusion I created: I was referring to calculate_steps() in Time(tic_t) and Time(). By caching there, under some usage patterns you'll have a huge number of cached steps that are never used -- for example, in vector<Time> or in temporaries created in computations.

But I believe downstream we've already agreed to just not do this, and to do lazy evaluation in the Event class.


@apeyser apeyser requested a review from gtrensch Apr 5, 2017

@heplesser

@jakobj Mostly fine, but some improvements are required; see the comments in the text. And it would be good to understand why building becomes slower.

@@ -162,7 +165,7 @@ Time::fromstamp( Time::ms_stamp t )
// intended ones.
tic_t n = static_cast< tic_t >( t.t * Range::TICS_PER_MS );
n -= ( n % Range::TICS_PER_STEP );
long s = n / Range::TICS_PER_STEP;
long s = n * Range::TICS_PER_STEP_INV;
double ms = s * Range::MS_PER_STEP;

@heplesser

heplesser Apr 5, 2017

Contributor

s and ms could both be const.

@apeyser

apeyser Apr 6, 2017

Contributor

What we need here is just MS_PER_TIC (MS_PER_STEP * STEP_PER_TIC).

This still seems overcomplex; as per the comment on lines 160 + 161, a computation with ceil should be sufficient, but there are corner cases I haven't checked.

@gtrensch

gtrensch Apr 7, 2017

Contributor

I agree with Alex. To convert milliseconds into tics at a step boundary, something like this
ceil( ms * STEPS_PER_MS ) * TICS_PER_STEP
will do the same, I believe. Preserving infinity is also an issue here. However, I think this goes beyond this PR and would be a refactoring task.
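Spelled out, the suggested conversion would look roughly like this (untested sketch; the constant values are placeholders for the definitions in struct Range, and the infinity handling and rounding corner cases mentioned above are deliberately ignored):

#include <cmath>

// Placeholder values standing in for the definitions in struct Range.
constexpr double STEPS_PER_MS = 10.0;
constexpr long TICS_PER_STEP = 100;

// Milliseconds -> tics, aligned to a step boundary via a single ceil.
inline long
ms_to_tics_on_step_boundary( double ms )
{
  return static_cast< long >( std::ceil( ms * STEPS_PER_MS ) ) * TICS_PER_STEP;
}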

@apeyser

apeyser Apr 7, 2017

Contributor

@heplesser @jakobj @gtrensch Agreed with Guido -- I suggested later on that we make another PR with the changes in Time that aren't absolutely connected with the caching in Events or inverting the div operations.

@heplesser

heplesser Apr 7, 2017

Contributor

Agreed.

@heplesser
Contributor

heplesser commented Apr 5, 2017

@apeyser @jakobj I think I understand now why we see the slight increase in build times: for each connection created, sender->send_test_event() creates an Event object, which creates a Time object stamp_. In the current version, this requires computing the step_ value of the Time object, which is never used in connectivity testing. Since this is done once for every single synapse, it hurts.

Furthermore, the current solution caches the step count twice in each Event: once inside the Time stamp_ and once explicitly as steps_. This makes little sense.

The reason that we save time by caching during event delivery is that the spike time is set once and the event is then sent to all spike targets on the local process. I believe this is the only case in which the stamp's value will be re-used many times and where caching thus makes sense.

Therefore, I would propose the following approach (see the sketch after this list):

  • The Time class does not cache the steps value; it just replaces division by multiplication. This pays off, since the scaling factor needs to be recomputed only when the resolution changes.
  • The Event class caches the step value of the time stamp using lazy evaluation.
  • In the Event class, we can exploit the fact that events only occur for t > 0, so we can use steps_ == 0 as the invalid marker (or -1 to be even more on the safe side).

This will avoid the unnecessary computation during the build phase and still ensure gains during the simulation phase.
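A minimal sketch of what the proposed lazy evaluation could look like (simplified classes, not the real NEST interfaces; the member name cached_steps_ and the getter are illustrative, and -1 is used as the invalid marker as suggested above):

class TimeSketch
{
public:
  explicit TimeSketch( long tics )
    : tics_( tics )
  {
  }

  long
  get_steps() const
  {
    return tics_ / 100; // stands in for the real tics-to-steps conversion
  }

private:
  long tics_;
};

class EventSketch
{
public:
  explicit EventSketch( const TimeSketch& stamp )
    : stamp_( stamp )
    , cached_steps_( -1 ) // -1 == "not computed yet"
  {
  }

  long
  get_stamp_steps() const
  {
    // Lazy evaluation: the conversion runs only on first use, then the
    // cached value is reused for every target the event is delivered to.
    if ( cached_steps_ < 0 )
    {
      cached_steps_ = stamp_.get_steps();
    }
    return cached_steps_;
  }

private:
  TimeSketch stamp_;
  mutable long cached_steps_; // mutable so a const getter can fill the cache
};

During the build phase the cache is simply never filled, so connection testing pays nothing extra; during simulation the conversion happens once per spike rather than once per delivery.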
@heplesser
Contributor

heplesser commented Apr 5, 2017

@jakobj Which benchmark did you use?

@heplesser heplesser removed the request for review from gtrensch Apr 5, 2017

@@ -77,7 +78,7 @@ Time::compute_max()
const tic_t tmax = std::numeric_limits< tic_t >::max();
tic_t tics;
if ( lmax < tmax / Range::TICS_PER_STEP ) // step size is limiting factor
if ( lmax < tmax * Range::TICS_PER_STEP_INV ) // step size is limiting factor

@apeyser

apeyser Apr 6, 2017

Contributor

And we probably don't need to bother with changing these, or inside limit, or inside set_resolution and so on... just in the code that gets deeply looped, which is Time constructors and operators on Time objects.

@heplesser

heplesser Apr 7, 2017

Contributor

@apeyser This does not matter for performance here, but isn't it a good idea to do the calculations consistently all over? So I think multiplication instead of division is fine here.

@gtrensch

gtrensch Apr 7, 2017

Contributor

@apeyser @heplesser I'd vote for readability. In struct range{} there are, e.g., definitions for STEPS_PER_MS and MS_PER_STEP. Wouldn't it be consistent to have STEPS_PER_TIC?

@apeyser

apeyser Apr 7, 2017

Contributor

@gtrensch Agreed -- in general the "INV" of "X_PER_Y" would be clearer as "Y_PER_X".

@apeyser

apeyser Apr 7, 2017

Contributor

@heplesser This is also a question of taste. On the one hand, not changing anything that doesn't need to be changed is "clearer" since we have a minimal change; on the other hand, using all * instead of / is also "clearer".

@jakobj
Contributor

jakobj commented Apr 7, 2017

@apeyser @heplesser Thanks for all the helpful comments. I am now removing the caching in the Time objects and implementing lazy evaluation in Event. Should I nevertheless implement your suggested changes in the Time class (like if ( tics == LIM_POS_INF.tics ) -> if ( tics >= LIM_POS_INF.tics ))?
For benchmarks I am using hpc_benchmark.sli with 12500 neurons per node and 8 threads/process. I've added memory consumption for completeness (and because my benchmarking pipeline produces that panel anyway ;) ).

@apeyser
Contributor

apeyser commented Apr 7, 2017

@jakobj My tendency is to make a minimal change in a change set. Maybe open another pull request with just the fixes to the Time class?

jakobj added a commit to jakobj/nest-simulator that referenced this pull request Apr 12, 2017

@jakobj
Contributor

jakobj commented Apr 12, 2017

I have now addressed the comments as far as I can see, and created a separate PR (#706) to implement all changes in the Time class that are not directly related to this PR. Please have another look. I wasn't sure about the change of X_PER_Y_INV to Y_PER_X since I am fine with either.

@jakobj
Contributor

jakobj commented Jun 12, 2017

@heplesser @apeyser I just fixed the conflicts, so this is ready to be checked again.

jakobj added a commit to jakobj/nest-simulator that referenced this pull request Jun 12, 2017

@@ -99,7 +103,7 @@ nest::Archiving_Node::get_K_value( double t )
if ( t > history_[ i ].t_ )
{
return ( history_[ i ].Kminus_
* std::exp( ( history_[ i ].t_ - t ) / tau_minus_ ) );
* std::exp( ( history_[ i ].t_ - t ) * tau_minus_inv_ ) );

@apeyser

apeyser Jun 12, 2017

Contributor

I'd be curious if there's a way to state this in a more numerically stable way -- just to keep in mind for the future

@heplesser

heplesser Jun 13, 2017

Contributor

@apeyser What kind of numerical stability problems do you see here?

@apeyser

apeyser Jun 13, 2017

Contributor

We'd have to run the numbers -- but we're combining root(tau, e^t1/e^t2), so it's worth looking at the relative scales of all three numbers (how big can t1, t2 and tau get?) to see whether we can blow up in different ways. This formulation looks reasonable -- but I don't know the cases, so I can't say off the top of my head -- just asking whether it's been thought about.
@heplesser
Contributor

heplesser commented Jun 13, 2017

@jakobj It seems something may have gone wrong during the merge, since NEST does not build on Travis:

/home/travis/build/nest/nest-simulator/nestkernel/nest_time.cpp: In function ‘std::ostream& operator<<(std::ostream&, const nest::Time&)’:
/home/travis/build/nest/nest-simulator/nestkernel/nest_time.cpp:220:1: error: expected ‘}’ at end of input
@heplesser

@jakobj One comment and one missing curly brace left.

@jakobj
Contributor

jakobj commented Jun 13, 2017

Thanks for the input. I'm done implementing your proposed changes. I didn't know about mutable before -- a nice feature for this type of situation.

jakobj added a commit to jakobj/nest-simulator that referenced this pull request Jun 13, 2017

@heplesser
Contributor

heplesser commented Jun 13, 2017

@jakobj It seems clang-format is unhappy.

@heplesser

I approve; now only Travis/clang-format needs to become happy as well.

@heplesser
Contributor

heplesser commented Jun 14, 2017

Release notes: Performance improvements through more efficient numerics in Time

@heplesser heplesser merged commit 18bb2cc into nest:master Jun 14, 2017

1 check passed

continuous-integration/travis-ci/pr The Travis CI build passed

jakobj added a commit to jakobj/nest-simulator that referenced this pull request Jun 20, 2017

apeyser added a commit that referenced this pull request Jun 29, 2017

Merge pull request #706 from jakobj/enh/cleanup_time
Minor refactoring in Time as discussed in PR #685

aserenko added a commit to aserenko/nest-simulator that referenced this pull request May 15, 2018

Conform with PR #685
Make the variable nearest_neighbor_K_value conformant to PR #685: multiply by tau_minus_inv_ instead of dividing by tau_minus_.

@jakobj jakobj deleted the jakobj:enh/avoid_divisions branch Sep 13, 2018

@jakobj jakobj restored the jakobj:enh/avoid_divisions branch Sep 13, 2018

@jakobj jakobj deleted the jakobj:enh/avoid_divisions branch Sep 13, 2018
