[MRG] MAINT: refactor distal/proximal stuff #146
Conversation
Codecov Report
@@ Coverage Diff @@
## master #146 +/- ##
==========================================
- Coverage 70.31% 69.38% -0.94%
==========================================
Files 19 19
Lines 1984 1904 -80
==========================================
- Hits 1395 1321 -74
+ Misses 589 583 -6
Continue to review full report at Codecov.
@blakecaldwell @rythorpe one round of review here would be appreciated before I go too far down this path. The idea is the following: have a high-level method
Now the hanging / loose thread is
hnn_core/pyramidal.py
Outdated
loc='proximal', receptor='ampa',
gid_src=gid_extpois,
nc_dict=nc_dict,
nc_list=self.ncfrom_extpois)

if p_ext[self.celltype][1] > 0.0:
this if condition is awkward, but I didn't want to break things, so I left it there
Agreed. We'll need to fix this when we remove the loops that iterate through the different feed types from the cell classes. I'm not sure why this was put here in the first place (maybe network build time?) since an NMDA receptor conductance of zero shouldn't have any effect on the simulation.
isn't this related to your PRs about turning off inputs based on the weights? or was that different?
maybe network build time?
I hope not. "Premature optimization is the root of all evil." (Knuth) Another related aphorism is "make it work, make it nice, make it fast".
Network build times are really small compared to simulation time and developer time (most important in my opinion). Maybe for GUI application, some of these times matter but I'd rather refactor first, then profile and worry about timing.
isn't this related to your PRs about turning off inputs based on the weights? or was that different?
It's related in principle, but I was unaware that there was a check for synaptic weight >0 within a cell class method. There is probably more to the story that we are unaware of, but we'll need to handle this more consistently in the future.
hnn_core/pyramidal.py
Outdated
@@ -219,6 +219,43 @@ def _synapse_create(self, p_syn):
        self.dends['apical_tuft'](0.5), **p_syn['gabaa'])

    def _connect_at_loc(self, loc, receptor, gid_src, nc_dict, nc_list):
Will this function only be used for connecting feeds (via artificial cells) to the given Pyr instance? If so, we may want to reflect the specificity of this method in the name and description.
Also, am I correct in assuming that the reason for having different methods that connect real cells -> real cells vs. artificial cells -> real cells is that the latter connects a point neuron with no physical morphology to a specific synapse on a dendrite?
Let me get back to you on that in a day or two. I need to mess around with the code to figure out what's possible and what's not. I think we can definitely extend this to Basket cells. But I'd leave the refactoring of the _connect method to another PR. So something like _connect_feed_at_loc might do ...
_connect_feed_at_loc sounds good to me! We can always rename it in a later PR while refactoring the cell classes.
hnn_core/pyramidal.py
Outdated
for dend in dends:
    postsyns.append(getattr(self, f'{dend}_{receptor}'))

for postsyn in postsyns:
    nc = self.parconnect_from_src(gid_src, nc_dict, postsyn)
    nc_list.append(nc)

return postsyns
So the main point here is to simplify the parreceive() and parreceive_ext() methods, correct? If so, I definitely like how this PR focuses on consolidating redundant code.
yes, but also to make the code more readable, not just more concise. We'll probably need a couple more iterations / PRs to get it absolutely on point.
Co-authored-by: Ryan Thorpe <ryvthorpe@gmail.com>
okay I think I'm done here. Couldn't see much scope for improvement in the
hnn_core/basket.py
Outdated
@@ -149,20 +149,14 @@ def parreceive_ext(self, type, gid, gid_dict, pos_dict, p_ext):

    # connections depend on location of input - why only
    # for L2 basket and not L5 basket?
-   if p_ext['loc'] == 'proximal':
+   if p_ext['loc'] in ['proximal', 'distal']:
This was a good catch. I do think we should make a BasketSingle._connect_feed_at_loc() (or, even better, a _Cell._connect_feed_at_loc()) method, though, for consistency's sake. One of my biggest frustrations with the feed-related code is that each case takes a different code path to accomplish a common goal. This would require a few additional changes:

- The synapse attributes defined in BasketSingle._synapse_create() and Pyramidal._synapse_create() should instead be appended to a single attribute BasketSingle.synapses (or Pyramidal.synapses) of type dict.
- Instead of hardcoding dends in the if/else statement in _connect_feed_at_loc() (see comment below), just access the self.synapses property according to the correct receptor type and input loc.

This way, one method can be created in _Cell that can be implemented uniformly in all cell types that receive an input.
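As a rough sketch of what such a uniform method could look like (everything below is a mock for illustration: real hnn-core cells hold NEURON synapse objects, and parconnect_from_src wraps NetCon creation):

```python
# Hedged sketch of a uniform _connect_feed_at_loc(); plain Python stand-ins
# replace the NEURON objects used in the real code.
class MockCell:
    def __init__(self):
        # single dict of synapses, keyed '<section>_<receptor>'
        self.synapses = {'soma_ampa': 'ampa_syn', 'soma_gabaa': 'gabaa_syn'}
        # which sections a feed at a given location targets
        self.sect_loc = {'proximal': ['soma'], 'distal': []}

    def parconnect_from_src(self, gid_src, nc_dict, postsyn):
        # stand-in for the NetCon created in the real code
        return (gid_src, postsyn)

    def _connect_feed_at_loc(self, loc, receptor, gid_src, nc_dict, nc_list):
        """Connect a feed to every `receptor` synapse at location `loc`."""
        postsyns = [self.synapses[f'{sec}_{receptor}']
                    for sec in self.sect_loc[loc]]
        for postsyn in postsyns:
            nc_list.append(self.parconnect_from_src(gid_src, nc_dict, postsyn))
        return postsyns

cell, ncs = MockCell(), []
cell._connect_feed_at_loc(loc='proximal', receptor='ampa',
                          gid_src=7, nc_dict={}, nc_list=ncs)
```

Because both the synapses and the location-to-section mapping live on the cell, the same method body works for pyramidal and basket cells alike; only the contents of `synapses` and `sect_loc` differ per class.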
This is fair; in fact, I was looking for something like this that already existed but couldn't find it. Are we meeting tomorrow afternoon? Should we try to implement this exact thing with pair/triplet programming? I want to keep the discussion more concrete and less "meta".

I also tagged you in a follow-up PR I planned. Should I just append that PR onto this one (it's a fairly straightforward change)? It will make it easier for me to make changes on top without worrying about rebase conflicts. Or would you prefer to review it separately?
I see, so jasmainak#1 is basically just moving the parreceive_ methods up to their parent classes. I think this is manageable to merge and review in this PR.
What time are we meeting tomorrow? I'm down for another programming session, though I'm also happy to attempt implementing my proposed changes asynchronously if that would be helpful.
okay I copied what was in jasmainak#1 into this PR then.
hnn_core/pyramidal.py
Outdated
postsyns = list()
for dend in dends:
    postsyns.append(getattr(self, f'{dend}_{receptor}'))
- postsyns = list()
- for dend in dends:
-     postsyns.append(getattr(self, f'{dend}_{receptor}'))
+ [self.synapses[syn_key] for syn_key in self.synapses.keys() if receptor in syn_key]
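A minimal, self-contained demo of what that filtering does (synapse objects replaced by strings; the keys follow the '<dend>_<receptor>' naming used above):

```python
# Toy illustration of the suggested comprehension; real values would be
# NEURON synapse objects, not strings.
synapses = {
    'apicaltuft_ampa': 'syn_a',
    'apicaltuft_nmda': 'syn_b',
    'basal2_ampa': 'syn_c',
}
receptor = 'ampa'
# keep every synapse whose key mentions the receptor
postsyns = [synapses[syn_key] for syn_key in synapses if receptor in syn_key]
```

One caveat worth noting: substring matching like this would pick up any key that merely contains the receptor name, so exact suffix matching (e.g. `syn_key.endswith(f'_{receptor}')`) may be safer in practice.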
okay found the problem. It's a problem with NEURON: it should raise an error if a
Anyhow, now I have monkeypatched it in f5c8911

Sorry I couldn't be of much help here. I briefly looked things over but arrived at the conclusion that some digging was necessary. So the problem was that the

yeah so apparently

I'm pretty much done here btw
I can tell from the old parconnect/parreceive code that effort was taken to make sure the model build would parallelize nicely when run with MPI. Can you verify that building the full network model (NeuronNetwork._build()) with MPIBackend doesn't slow down after this change?
hnn_core/basket.py
Outdated
    self.soma(0.5), e=0., tau1=0.5, tau2=5.)
self.synapses['soma_gabaa'] = self.syn_create(
    self.soma(0.5), e=-80, tau1=0.5, tau2=5.)
# this is a pretty fast NMDA, no?
Should this be clarified rather than having a comment buried in the code?
umm ... what would you like me to clarify here? I actually just copy-pasted the old comment
Comments that ask unanswered questions aren't that useful. Who were you asking in the first place? Are you comparing to other NMDA time constants somewhere?
It's not my comment, cruft from old code: 41c8774#diff-8a21b8909c8d5e3dc73c03688313de0eL351 :) I can remove it if you want to.
Basically my strategy was to trash any comment that was redundant with the code or other comments. But this one seemed a bit unique ...
Oh, I see. Makes complete sense why it was copied forward. Since so much effort went into tracing where the comment came from, I went ahead and verified that the time constant is correct. These are the values that have been used since at least 2009; see the top of page 5:
https://journals.physiology.org/doi/pdf/10.1152/jn.00535.2009
This comment can be removed.
awesome, thank you so much for digging into this. I removed the comment in the last commit!
def _connect_feed_at_loc(self, feed_loc, receptor, gid_src, nc_dict,
                         nc_list):
What about a test for this new function in test_cell.py? Can a connection be made between the somas of two _Cell objects?
nc_dict = {
    'pos_src': pos_dict['extgauss'][gid],
    # index 0 is ampa weight
    'A_weight': p_ext[self.celltype][0],
-   'A_delay': p_ext[self.celltype][1],  # index 2 is delay
+   'A_delay': p_ext[self.celltype][2],  # index 2 is delay
So was this a bug?
interesting, I didn't see this myself. I guess it popped out in the diff when I consolidated the methods from the two classes. It's why we sorely need tests, thanks for pushing me on that. I'll try to make a test for that method.
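To make the off-by-one concrete: judging from the comments in the diff above (index 0 is the AMPA weight, index 2 the delay) and the `p_ext[self.celltype][1] > 0.0` NMDA guard elsewhere in this PR, index 1 appears to be a weight, so the old line silently used a weight as a delay. A tiny illustration with hypothetical values:

```python
# Hypothetical (ampa_weight, nmda_weight, delay) tuple, as inferred from
# the comments in the surrounding diff -- not actual model parameters.
params = (4e-5, 3e-5, 1.0)

buggy_delay = params[1]  # old code: reads the nmda weight by mistake
fixed_delay = params[2]  # fixed code: index 2 really is the delay
```

Positional tuples like this are exactly the kind of structure where a named container (a dict or namedtuple) would have made the bug impossible to write.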
@blakecaldwell you might want to update it in HNN as well: https://github.com/jonescompneurolab/hnn/blob/master/L2_basket.py#L187
Thanks @jasmainak
@blakecaldwell test added 96c266a
The time it takes to build the network is approx. the same after this change. Rough benchmarking with:
diff --git a/hnn_core/neuron.py b/hnn_core/neuron.py
index 6d65afe0..18038dd3 100644
--- a/hnn_core/neuron.py
+++ b/hnn_core/neuron.py
@@ -265,7 +265,10 @@ class NeuronNetwork(object):
         # the NEURON hoc objects and the corresonding python references
         # initialized by _ArtificialCell()
         self._feed_cells = []
+        import time
+        start = time.process_time()
         self._build()
+        print("Build took %s" % (time.process_time() - start))

     def _build(self):
         """Building the network in NEURON."""
Before change (building over 16 processes):
In [4]: from hnn_core import MPIBackend
...:
...: with MPIBackend(n_procs=16, mpi_cmd='mpiexec'):
...: simulate_dipole(net, n_trials=1)
...:
MPI will run over 16 processes
Running 1 trials...
numprocs=16
Loading custom mechanism files from /Users/blake/repos/hnn-core/hnn_core/mod/x86_64/.libs/libnrnmech.so
Building the NEURON model
(the "Loading custom mechanism files" line repeats once per process, 16 times in total)
[Done]
Build took 0.8585520000000001
Build took 0.856422
Build took 0.871434
Build took 0.8631059999999999
Build took 0.8637900000000001
Build took 0.8640939999999999
Build took 0.868628
Build took 0.8646100000000001
Build took 0.88896
Build took 0.89165
Build took 0.8964800000000002
Build took 0.925384
Build took 0.928272
Build took 0.923608
Build took 0.908498
Build took 0.9217559999999998
running trial 1 on 16 cores
Simulation time: 0.03 ms...
Simulation time: 10.0 ms...
Simulation time: 20.0 ms...
Simulation time: 30.0 ms...
Simulation time: 40.0 ms...
Simulation time: 50.0 ms...
Simulation time: 60.0 ms...
Simulation time: 70.0 ms...
Simulation time: 80.0 ms...
Simulation time: 90.0 ms...
Simulation time: 100.0 ms...
Simulation time: 110.0 ms...
Simulation time: 120.0 ms...
Simulation time: 130.0 ms...
Simulation time: 140.0 ms...
Simulation time: 150.0 ms...
Simulation time: 160.0 ms...
After change:
MPI will run over 16 processes
Running 1 trials...
numprocs=16
Loading custom mechanism files from /Users/blake/repos/hnn-core/hnn_core/mod/x86_64/.libs/libnrnmech.so
Building the NEURON model
(the "Loading custom mechanism files" line repeats once per process, 16 times in total)
[Done]
Build took 0.887724
Build took 0.86086
Build took 0.867226
Build took 0.873936
Build took 0.879658
Build took 0.8607040000000001
Build took 0.9342299999999999
Build took 0.8838279999999998
Build took 0.885464
Build took 0.8951979999999999
Build took 0.9003779999999999
Build took 0.9074359999999999
Build took 0.895248
Build took 0.921478
Build took 0.935244
Build took 0.9342900000000001
running trial 1 on 16 cores
Simulation time: 0.03 ms...
Simulation time: 10.0 ms...
Simulation time: 20.0 ms...
Simulation time: 30.0 ms...
Simulation time: 40.0 ms...
Simulation time: 50.0 ms...
Simulation time: 60.0 ms...
Simulation time: 70.0 ms...
Simulation time: 80.0 ms...
Simulation time: 90.0 ms...
Simulation time: 100.0 ms...
Simulation time: 110.0 ms...
Simulation time: 120.0 ms...
Simulation time: 130.0 ms...
Simulation time: 140.0 ms...
Simulation time: 150.0 ms...
Simulation time: 160.0 ms...
oops sorry I forgot to check that 🙈. If build times are critical (for the GUI, I guess?), perhaps we could add a test to ensure that build time is always under x seconds? @rythorpe merge if you are happy! more PRs on the way :)
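One possible shape for such a regression test, sketched here under stated assumptions: build_network is a placeholder for the real NeuronNetwork build, and the 2-second budget is an arbitrary number that would need tuning per machine/CI runner.

```python
import time

def build_network():
    """Placeholder for the real network build step."""
    time.sleep(0.01)  # simulate some work

def timed_build(budget=2.0):
    # wall-clock time; note that process_time (used in the benchmark
    # above) would exclude time spent sleeping or waiting on I/O
    start = time.perf_counter()
    build_network()
    elapsed = time.perf_counter() - start
    assert elapsed < budget, f"build took {elapsed:.2f}s (budget {budget}s)"
    return elapsed
```

Wall-clock budgets are notoriously flaky on shared CI hardware, so a generous budget (or a relative comparison against a baseline) tends to be more robust than a tight absolute threshold.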
hnn_core/basket.py
Outdated
- self.ncfrom_common.append(self.parconnect_from_src(
-     gid_src, nc_dict_nmda, self.soma_nmda))
+ self._connect_feed_at_loc(
+     feed_loc='proximal', receptor='nmda',
@jasmainak why does feed_loc='proximal' appear in all of the BasketSingle._connect_feed_at_loc() calls? I feel like I'm missing something really obvious...
This is a very good point. So looking at the documentation of HNN, it says this for proximal:
The left schematic here shows proximal inputs which target basal dendrites of layer 2/3 and layer 5 pyramidal neurons, and somata of layer 2/3 and layer 5 interneurons.
and this for distal:
The right schematic shows distal inputs which target the distal apical dendrites of layer 5 and layer 2/3 pyramidal neurons and the somata of layer 2/3 interneurons
So, for layer 2/3 there is no difference between proximal and distal. For layer 5, you have an input drive to somata for proximal but not distal
Now if you go to the file L5_basket.py (I'm using the HNN repository since that's our "gold standard" implementation) and look for 'loc', there is no reference to it. If you go to L2_basket.py, there is an if-else clause, but both branches have exactly the same lines (right??), with a cryptic comment asking "why only L2 basket and not L5 basket". Am I missing something?
I agree with everything you said @jasmainak. It appears that there is no difference between distal and proximal in the code. It looks like the code changed from having only combined excitatory inputs to separate ampa/nmda inputs over the following two commits (in which the comment was added). Commit 317d3324 explains why there was initially an if/else clause. Then the branches became the same in commit 3eee0bd3.
I think the comment is just reflecting what you observed @jasmainak.
Additionally, a similar comment in L5_basket.py noted the difference, but the code still does the same thing for proximal and distal inputs. The desired effect comes from there not being an 'L5_basket' key in p_ext for distal evoked inputs (in paramrw.py).
Yes, we should expect L2/3 basket cells to receive both proximal and distal inputs and L5 basket cells to receive only proximal inputs.
If you go to L2_basket.py, there is an if-else clause but both branches have exactly the same lines (right??)
Correct, the if/else statement is unnecessary. It looks like this was corrected in L5_basket.py.
with a cryptic comment which asks "why only L2 basket and not L5 basket". Am I missing something?
This comment is resolved by noting that the HNN code does control for proximal vs. distal inputs in both the L2 and L5 basket cell cases. Both L5_basket and L2_basket have an 'if' statement that filters feed location-specific connections according to the parameter keys passed via p_ext. In particular, note that p_unique['evdist*'] items don't contain a dict key for L5_basket.
Sorry @blakecaldwell your previous post just loaded for me.
wow! this sounds like a Sherlock Holmes detective story ... thanks for the sleuthing both of you.
Let me see what I can do to improve the readability.
hnn_core/basket.py
Outdated
@@ -22,6 +31,8 @@ def __init__(self, gid, pos, cell_name='Basket'):
    # for height and xz plane for depth. This is opposite for model as a
    # whole, but convention is followed in this function ease use of gui.
    self.shape_soma()
+   self.synapses = dict()
+   self.sect_loc = dict(proximal=['soma'], distal=[])
self.sect_loc shouldn't be defined at the BasketSingle level, but rather separately for L2Basket and L5Basket respectively.
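Concretely, given that L2/3 basket cells receive both proximal and distal drive at the soma while L5 basket cells receive only proximal drive (per the discussion above), the layer-specific attribute could look like this (a sketch only; the __init__ bodies are reduced to just these attributes):

```python
class BasketSingle:
    def __init__(self):
        self.synapses = dict()

class L2Basket(BasketSingle):
    def __init__(self):
        super().__init__()
        # L2/3 baskets: proximal and distal feeds both target the soma
        self.sect_loc = dict(proximal=['soma'], distal=['soma'])

class L5Basket(BasketSingle):
    def __init__(self):
        super().__init__()
        # L5 baskets: only proximal feeds target the soma
        self.sect_loc = dict(proximal=['soma'], distal=[])
```

With this in place, a shared _connect_feed_at_loc() can simply loop over self.sect_loc[feed_loc] and do nothing for L5 distal feeds, with no per-class if/else required.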
I think the reason this is working without throwing any test errors is that, by forcing all feed_loc variables to be proximal, we're forcing parreceive_ext to attempt running self._connect_feed_at_loc() whenever the layer-specific basket cell is defined in p_src.
Done. Moving forward, we may want to reduce the if-else clauses. They make the code really hard to follow and prone to bugs when changing. Thanks for carefully reviewing it.
# Check if NMDA params are defined in p_src
- if 'L2Basket_nmda' in p_src.keys():
+ if f'{self.name}_nmda' in p_src.keys():
      nc_dict_nmda = {
As in here (reference to #146 (comment)).
I pushed 1c71219. Let me know what you guys think. It doesn't change anything functionally, it just helps with the readability. Because the functionality is taken care of through the
Also, before you merge, do confirm that I got this right:

It's how the code in the
I like the idea of a performance regression test. It can be easy to change something that is undetectable on a single core but blows up during parallelization. I'd actually like to do this for both network building and simulations.
This looks really good, thanks @jasmainak.
Yes.
This is true per
Just a tiny adjustment to the code, inspired by #129.
We need to develop more high-level interfaces, but this is a start. The challenge in this PR would be to keep it under control and not touch everything :)
will add a similar method to L2Pyr and then basket cells. Then it's good to go.