Merge latest #13

Merged: 20 commits merged into pgleeson:master on Feb 9, 2018
Conversation

@pgleeson (Owner) commented Feb 9, 2018

No description provided.

apdavison and others added 20 commits December 4, 2017 17:27
Minor changes to improve export of inputs in PyNN to NeuroML
* Proposed fix for #512

* Removed print statements

* Add test case for #512

* Updates test for #512

* Implements suggestions for PR#517

* Implements suggestions for PR#517 - Update

* Implements suggestions for PR#517 - Update

* Updates _check_step_times() for nest
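
The commits above concern how PyNN input sources are handled when a script is exported to NeuroML. As a rough illustration of the kind of code they affect, the sketch below injects a `DCSource` into a population; the choice of `pyNN.nest` as the backend and all parameter values are illustrative assumptions, not taken from this PR (the NeuroML export backend is used in the same way).

```python
# Hedged sketch: injecting a current input into a PyNN population.
import pyNN.nest as sim  # assumed backend for illustration

sim.setup(timestep=0.1)

# Small population of conductance-based IF cells.
cells = sim.Population(5, sim.IF_cond_exp())

# Step-current input: 0.5 nA between 20 ms and 80 ms.
pulse = sim.DCSource(amplitude=0.5, start=20.0, stop=80.0)
pulse.inject_into(cells)

cells.record("v")
sim.run(100.0)
sim.end()
```
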
More small changes for nml2 export of inputs from PyNN
There are two changes here:
(1) Previously, a random number generator with `parallel_safe=False` would always draw a reduced number of values when run with more than one MPI process (scaled down according to the number of processes), unless the `mask_local` parameter was set to `False`. Now a mask must be provided explicitly if you want to draw a reduced number of values (i.e. only the values consumed on that node): `mask` (renamed from `mask_local`) is either an array or `None`, and can no longer take the value `False` (see the sketch after this list).
(2) Moved the lazyarray behaviour for `RandomDistribution` from the lazyarray package to PyNN, to keep all the RNG-related logic in one place.
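
A minimal sketch of the new `mask` usage described in (1), assuming the `next()` signature matches what the commit message describes (`mask` replaces `mask_local` and takes an array or `None`); the distribution, seed, and mask values are illustrative only:

```python
import numpy as np
from pyNN.random import NumpyRNG, RandomDistribution

# parallel_safe=False: each MPI process uses its own stream of random numbers.
rng = NumpyRNG(seed=42, parallel_safe=False)
rd = RandomDistribution("normal", mu=0.0, sigma=1.0, rng=rng)

# Without a mask, every process now draws the full set of 10 values.
all_values = rd.next(10)

# To draw only the values consumed on this node, pass an explicit mask
# (an array, per the commit message); mask=False is no longer accepted.
local_mask = np.arange(10) % 2 == 0   # assumed example: even-indexed cells are local
local_values = rd.next(10, mask=local_mask)
```
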
Add `ArrayParameter` class, and use it to simplify GIF_cond_exp model
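
A hedged sketch of what this commit enables, assuming `ArrayParameter` lives in `pyNN.parameters` and that `GIF_cond_exp` exposes array-valued adaptation parameters such as `tau_eta` and `a_eta`; the parameter names and values here are assumptions, not taken from the diff:

```python
import pyNN.nest as sim                      # assumed backend
from pyNN.parameters import ArrayParameter   # class added in this batch of commits

sim.setup(timestep=0.1)

# ArrayParameter wraps a fixed-length array so it can be passed as a single
# parameter value, e.g. one entry per adaptation exponential of the GIF model.
gif_cells = sim.Population(
    3,
    sim.GIF_cond_exp(
        tau_eta=ArrayParameter([1.0, 10.0, 100.0]),  # ms (assumed values)
        a_eta=ArrayParameter([0.1, 0.1, 0.1]),       # nA (assumed values)
    ),
)
```
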
Simplify interaction of random number generators and lazyarray
@pgleeson merged commit 647dded into pgleeson:master on Feb 9, 2018