
[MRG] MAINT: refactor distal/proximal stuff #146

Merged
merged 18 commits into from Aug 28, 2020

Conversation

jasmainak
Collaborator

@jasmainak jasmainak commented Aug 23, 2020

Just a tiny adjustment to the code inspired by #129.

We need to develop more high level interfaces but this is a start. The challenge in this PR would be to keep it under control and not touch everything :)

Will add a similar method to L2Pyr and then the basket cells. Then it's good to go.

@codecov-commenter

codecov-commenter commented Aug 23, 2020

Codecov Report

Merging #146 into master will decrease coverage by 0.93%.
The diff coverage is 93.57%.


@@            Coverage Diff             @@
##           master     #146      +/-   ##
==========================================
- Coverage   70.31%   69.38%   -0.94%     
==========================================
  Files          19       19              
  Lines        1984     1904      -80     
==========================================
- Hits         1395     1321      -74     
+ Misses        589      583       -6     
Impacted Files Coverage Δ
hnn_core/cell.py 72.58% <76.92%> (+0.15%) ⬆️
hnn_core/pyramidal.py 97.67% <94.00%> (+1.03%) ⬆️
hnn_core/basket.py 95.52% <96.42%> (+1.29%) ⬆️
hnn_core/tests/test_cell.py 100.00% <100.00%> (ø)
hnn_core/dipole.py 70.31% <0.00%> (ø)

Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 60b6015...1c71219.

@jasmainak
Collaborator Author

@blakecaldwell @rythorpe one round of review here would be appreciated before I go too far down this path.

The idea is the following: have a high-level method _connect_at_loc that lets us connect at proximal or distal dendrites because we always connect in groups of dendrites. This method internally calls parconnect_from_src which in turn calls gid_connect.

Now the hanging / loose thread is _connect. It also calls parconnect_from_src but it's more for cell-to-cell connections. It should probably go away ... I can do it in this PR or another PR, don't want to overwhelm the review process.
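The call chain described above can be sketched as follows. This is a minimal, hypothetical illustration (class and attribute names are assumptions, not the actual hnn-core code); the NEURON NetCon machinery behind parconnect_from_src is stubbed out.

```python
class Pyr:
    def __init__(self):
        # one synapse object per dendrite/receptor pair (stub values)
        self.apicaloblique_ampa = 'syn_oblique'
        self.basal2_ampa = 'syn_basal2'
        self.basal3_ampa = 'syn_basal3'
        self.apicaltuft_ampa = 'syn_tuft'

    def parconnect_from_src(self, gid_src, nc_dict, postsyn):
        # stand-in for NetCon creation via pc.gid_connect
        return (gid_src, postsyn)

    def _connect_at_loc(self, loc, receptor, gid_src, nc_dict, nc_list):
        # dendrites are always connected in groups based on input location
        if loc == 'proximal':
            dends = ['apicaloblique', 'basal2', 'basal3']
        else:  # 'distal'
            dends = ['apicaltuft']
        for dend in dends:
            postsyn = getattr(self, f'{dend}_{receptor}')
            nc_list.append(self.parconnect_from_src(gid_src, nc_dict, postsyn))

cell = Pyr()
ncs = []
cell._connect_at_loc(loc='proximal', receptor='ampa', gid_src=3,
                     nc_dict={}, nc_list=ncs)
```

One call thus wires a whole dendrite group, instead of repeating the parconnect_from_src boilerplate per synapse.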

loc='proximal', receptor='ampa',
gid_src=gid_extpois,
nc_dict=nc_dict,
nc_list=self.ncfrom_extpois)

if p_ext[self.celltype][1] > 0.0:
Collaborator Author

this if condition is awkward but I didn't want to break things, so left it there

Contributor

Agreed. We'll need to fix this when we remove the loops that iterate through the different feed types from the cell classes. I'm not sure why this was put here in the first place (maybe network build time?) since an NMDA receptor conductance of zero shouldn't have any effect on the simulation.

Collaborator Author

isn't this related to your PRs about turning off inputs based on the weights? or was that different?

maybe network build time?

I hope not. "Premature optimization is the root of all evil." (Knuth) Another related aphorism is "make it work, make it nice, make it fast".

Network build times are really small compared to simulation time and developer time (most important in my opinion). Maybe for GUI application, some of these times matter but I'd rather refactor first, then profile and worry about timing.

Contributor

isn't this related to your PRs about turning off inputs based on the weights? or was that different?

It's related in principle, but I was unaware that there was a check for synaptic weight >0 within a cell class method. There is probably more to the story that we are unaware of, but we'll need to handle this more consistently in the future.

hnn_core/pyramidal.py (outdated)
@@ -219,6 +219,43 @@ def _synapse_create(self, p_syn):
self.dends['apical_tuft'](0.5), **p_syn['gabaa'])


def _connect_at_loc(self, loc, receptor, gid_src, nc_dict, nc_list):
Contributor

Will this function only be used for connecting feeds (via artificial cells) to the given Pyr instance? If so, we may want to reflect the specificity of this method in the name and description.

Also, am I correct in assuming that the reason for having different methods that connect real cells -> real cells vs. artificial cells -> real cells is that the latter connects a point neuron with no physical morphology to a specific synapse on a dendrite?

Collaborator Author

Let me get back to you on that in a day or two. I need to mess around with the code to figure out what's possible and what's not. I think we can definitely extend to Basket cells. But I'd leave the refactoring with _connect method to another PR.

So something like _connect_feed_at_loc might do ...

Contributor

_connect_feed_at_loc sounds good to me! We can always rename it in a later PR while refactoring the cell classes.

Comment on lines 249 to 256
for dend in dends:
postsyns.append(getattr(self, f'{dend}_{receptor}'))

for postsyn in postsyns:
nc = self.parconnect_from_src(gid_src, nc_dict, postsyn)
nc_list.append(nc)

return postsyns
Contributor

So the main point here is to simplify the parreceive() and parreceive_ext() methods, correct? If so, I definitely like how this PR focuses on consolidating redundant code.

Collaborator Author

yes, but also to make the code a bit more readable. Not just concise but more readable. We'll probably need to do a couple more iterations / PRs to get it absolutely on point.


@jasmainak jasmainak changed the title WIP MAINT: refactor distal/proximal stuff [MRG] MAINT: refactor distal/proximal stuff Aug 24, 2020
@jasmainak
Collaborator Author

okay I think I'm done here. Couldn't see much scope for improvement in the Basket class except this commit 6cd00a8. I can't believe that the if-else clause had duplicate statements. I visually verified thrice, but please check :-)

@@ -149,20 +149,14 @@ def parreceive_ext(self, type, gid, gid_dict, pos_dict, p_ext):

# connections depend on location of input - why only
# for L2 basket and not L5 basket?
if p_ext['loc'] == 'proximal':
if p_ext['loc'] in ['proximal', 'distal']:
Contributor

This was a good catch. I do think we should make a BasketSingle._connect_feed_at_loc() (or even better, a _Cell._connect_feed_at_loc()) method though for consistency's sake. One of my biggest frustrations with the feed-related code is that each case takes a different code path to accomplish a common goal. This would require a few additional changes:

  1. The synapse attributes defined in BasketSingle._synapse_create() and Pyramidal._synapse_create() should instead be appended to a single attribute BasketSingle.synapses (or Pyramidal.synapses) of type dict.

  2. Instead of hardcoding dends in the if/else statement in _connect_feed_at_loc() (see comment below), just access the self.synapses property according to the correct receptor type and input loc.

This way, one method can be created in _Cell that can be implemented uniformly in all cell types that receive an input.
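The consolidation proposed above can be sketched roughly as follows. All names here are assumptions drawn from the discussion, not the actual implementation: a single `synapses` dict plus a `sect_loc` map replace the per-class hardcoded dendrite lists, so one `_Cell` method serves every cell type.

```python
class _Cell:
    def __init__(self, sect_loc, synapses):
        self.sect_loc = sect_loc   # e.g. dict(proximal=['soma'], distal=[])
        self.synapses = synapses   # e.g. {'soma_ampa': <Exp2Syn>, ...}

    def parconnect_from_src(self, gid_src, nc_dict, postsyn):
        # stand-in for the real NetCon-creating method
        return (gid_src, postsyn)

    def _connect_feed_at_loc(self, feed_loc, receptor, gid_src, nc_dict,
                             nc_list):
        # look up synapses by section location and receptor; no per-class
        # if/else over dendrite names
        for sect in self.sect_loc[feed_loc]:
            postsyn = self.synapses[f'{sect}_{receptor}']
            nc_list.append(self.parconnect_from_src(gid_src, nc_dict, postsyn))

basket = _Cell(dict(proximal=['soma'], distal=[]),
               {'soma_ampa': 'ampa_syn', 'soma_nmda': 'nmda_syn'})
nc_list = []
basket._connect_feed_at_loc('proximal', 'ampa', gid_src=0, nc_dict={},
                            nc_list=nc_list)
```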

Collaborator Author
@jasmainak jasmainak Aug 25, 2020

This is fair, in fact I was looking for something that already existed but couldn't find it. Are we meeting tomorrow afternoon? Should we try to implement this exact thing with pair/triplet programming? I want to keep the discussion more concrete and less "meta".

I also tagged you in a follow-up PR I planned. Should I just append that PR onto this one (it's a fairly straightforward change)? It will make it easier for me to make changes on top without worrying about rebase conflicts. Or would you prefer to review it separately?

Contributor

I see, so jasmainak#1 is basically just moving parreceive_ methods up to their parent classes. I think this is manageable to merge and review in this PR.

What time are we meeting tomorrow? I'm down for another programming session, though I'm also happy to attempt implementing my proposed changes asynchronously if that would be helpful.

Collaborator Author

okay I copied what was in jasmainak#1 into this PR then.

Comment on lines 248 to 250
postsyns = list()
for dend in dends:
postsyns.append(getattr(self, f'{dend}_{receptor}'))
Contributor

Suggested change
postsyns = list()
for dend in dends:
postsyns.append(getattr(self, f'{dend}_{receptor}'))
postsyns = [self.synapses[syn_key] for syn_key in self.synapses if receptor in syn_key]

@jasmainak
Collaborator Author

jasmainak commented Aug 25, 2020

@rythorpe I implemented what you suggested in d4e2149

I'm a bit bamboozled that the test fails. Do you see anything in the diff that is off?

@jasmainak jasmainak force-pushed the improve_connect branch 4 times, most recently from f08100d to 55f2821 Compare August 25, 2020 22:35
@jasmainak
Collaborator Author

okay found the problem. It's a problem with Neuron, it should raise an error if a float is passed. Just lost 2 hours tracking down the issue.

Anyhow, now I have monkeypatched it in f5c8911

@rythorpe
Contributor

okay found the problem. It's a problem with Neuron, it should raise an error if a float is passed. Just lost 2 hours tracking down the issue.

Anyhow, now I have monkeypatched it in f5c8911

Sorry I couldn't be of much help here. I briefly looked things over but arrived at the conclusion that some digging was necessary. So the problem was that the secloc argument has to be an instance of nrn.Segment?

@jasmainak
Collaborator Author

yeah so apparently Exp2Syn also accepts floating point numbers. I passed 0.5 instead of soma(0.5) by mistake. It should have complained but it didn't.
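A minimal illustration of the kind of guard that catches this mistake (nrn.Segment and Exp2Syn are stubbed here since NEURON isn't imported; the actual fix is in commit f5c8911):

```python
class Segment:
    """Stand-in for nrn.Segment, i.e. what soma(0.5) returns."""

def syn_create(secloc, e, tau1, tau2):
    # NEURON's h.Exp2Syn silently accepts a bare float like 0.5, so
    # raise explicitly when 0.5 is passed instead of soma(0.5)
    if not isinstance(secloc, Segment):
        raise TypeError('secloc must be instance of nrn.Segment, '
                        f'got {type(secloc)}')
    return {'secloc': secloc, 'e': e, 'tau1': tau1, 'tau2': tau2}
```

With this check, passing a bare float fails loudly at build time instead of silently producing a mis-placed synapse.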

@jasmainak
Collaborator Author

I'm pretty much done here btw

Member
@blakecaldwell blakecaldwell left a comment

I can tell from the old parconnect/parrecieve code that effort was taken to make sure the model build would parallelize nicely when run with MPI. Can you verify that building the full network model (NeuronNetwork._build()) with MPIBackend doesn't slow down after this change?

self.soma(0.5), e=0., tau1=0.5, tau2=5.)
self.synapses['soma_gabaa'] = self.syn_create(
self.soma(0.5), e=-80, tau1=0.5, tau2=5.)
# this is a pretty fast NMDA, no?
Member

Should this be clarified rather than having a comment buried in the code?

Collaborator Author

umm ... what would you like me to clarify here? I actually just copy-pasted the old comment

Member

Comments that ask unanswered questions aren't that useful. Who were you asking in the first place? Are you comparing to other NMDA time constants somewhere?

41c8774#diff-faaeadb2c140c4e4e34881bb5f31ae87R52

Collaborator Author

It's not my comment, cruft from old code: 41c8774#diff-8a21b8909c8d5e3dc73c03688313de0eL351 :) I can remove it if you want to.

Collaborator Author
@jasmainak jasmainak Aug 27, 2020

Basically my strategy was to trash any comment that was redundant with the code or other comments. But this one seemed a bit unique ...

Member

Oh, I see. Makes complete sense why it was copied forward. However, with all the effort spent tracking down where the comment came from, I verified that the time constant is correct. These are the values that have been used since at least 2009. See top of page 5:
https://journals.physiology.org/doi/pdf/10.1152/jn.00535.2009

This comment can be removed.

Collaborator Author

awesome, thank you so much for digging into this. I removed the comment in the last commit!

Comment on lines +165 to +166
def _connect_feed_at_loc(self, feed_loc, receptor, gid_src, nc_dict,
nc_list):
Member

What about a test for this new function, in test_cell.py? Can a connection be made between the somas of two _Cell objects?

nc_dict = {
'pos_src': pos_dict['extgauss'][gid],
# index 0 is ampa weight
'A_weight': p_ext[self.celltype][0],
'A_delay': p_ext[self.celltype][1], # index 2 is delay
'A_delay': p_ext[self.celltype][2], # index 2 is delay
Member

So was this a bug?

Collaborator Author

interesting, I didn't see this myself. I guess it popped out in the diff when I consolidated the methods from the two classes. It's why we sorely need tests, thanks for pushing me on that. I'll try to make a test for that method.


Member

Thanks @jasmainak

@jasmainak
Collaborator Author

jasmainak commented Aug 27, 2020

@blakecaldwell test added 96c266a

Member
@blakecaldwell blakecaldwell left a comment

The time it takes to build the network is approx. the same after this change. Rough benchmarking with:

diff --git a/hnn_core/neuron.py b/hnn_core/neuron.py
index 6d65afe0..18038dd3 100644
--- a/hnn_core/neuron.py
+++ b/hnn_core/neuron.py
@@ -265,7 +265,10 @@ class NeuronNetwork(object):
         # the NEURON hoc objects and the corresonding python references
         # initialized by _ArtificialCell()
         self._feed_cells = []
+        import time
+        start = time.process_time()
         self._build()
+        print("Build took %s" % (time.process_time() - start))
 
     def _build(self):
         """Building the network in NEURON."""

Before change (building over 16 processes):

In [4]: from hnn_core import MPIBackend 
   ...:  
   ...: with MPIBackend(n_procs=16, mpi_cmd='mpiexec'): 
   ...:     simulate_dipole(net, n_trials=1) 
   ...:                                                                                                                                                                                                               
MPI will run over 16 processes
Running 1 trials...
numprocs=16
Loading custom mechanism files from /Users/blake/repos/hnn-core/hnn_core/mod/x86_64/.libs/libnrnmech.so
Building the NEURON model
[... "Loading custom mechanism files" line repeated for each of the 16 processes ...]
[Done]
Build took 0.8585520000000001
Build took 0.856422
Build took 0.871434
Build took 0.8631059999999999
Build took 0.8637900000000001
Build took 0.8640939999999999
Build took 0.868628
Build took 0.8646100000000001
Build took 0.88896
Build took 0.89165
Build took 0.8964800000000002
Build took 0.925384
Build took 0.928272
Build took 0.923608
Build took 0.908498
Build took 0.9217559999999998
running trial 1 on 16 cores
Simulation time: 0.03 ms...
Simulation time: 10.0 ms...
Simulation time: 20.0 ms...
Simulation time: 30.0 ms...
Simulation time: 40.0 ms...
Simulation time: 50.0 ms...
Simulation time: 60.0 ms...
Simulation time: 70.0 ms...
Simulation time: 80.0 ms...
Simulation time: 90.0 ms...
Simulation time: 100.0 ms...
Simulation time: 110.0 ms...
Simulation time: 120.0 ms...
Simulation time: 130.0 ms...
Simulation time: 140.0 ms...
Simulation time: 150.0 ms...
Simulation time: 160.0 ms...

After change:

MPI will run over 16 processes
Running 1 trials...
numprocs=16
Loading custom mechanism files from /Users/blake/repos/hnn-core/hnn_core/mod/x86_64/.libs/libnrnmech.so
Building the NEURON model
[... "Loading custom mechanism files" line repeated for each of the 16 processes ...]
[Done]
Build took 0.887724
Build took 0.86086
Build took 0.867226
Build took 0.873936
Build took 0.879658
Build took 0.8607040000000001
Build took 0.9342299999999999
Build took 0.8838279999999998
Build took 0.885464
Build took 0.8951979999999999
Build took 0.9003779999999999
Build took 0.9074359999999999
Build took 0.895248
Build took 0.921478
Build took 0.935244
Build took 0.9342900000000001
running trial 1 on 16 cores
Simulation time: 0.03 ms...
Simulation time: 10.0 ms...
Simulation time: 20.0 ms...
Simulation time: 30.0 ms...
Simulation time: 40.0 ms...
Simulation time: 50.0 ms...
Simulation time: 60.0 ms...
Simulation time: 70.0 ms...
Simulation time: 80.0 ms...
Simulation time: 90.0 ms...
Simulation time: 100.0 ms...
Simulation time: 110.0 ms...
Simulation time: 120.0 ms...
Simulation time: 130.0 ms...
Simulation time: 140.0 ms...
Simulation time: 150.0 ms...
Simulation time: 160.0 ms...

@jasmainak
Collaborator Author

oops sorry I forgot to check that 🙈. If build times are critical (for GUI I guess?), perhaps we could add a test to ensure that build time is always under x seconds?

@rythorpe merge if you are happy! more PRs on the way :)

self.ncfrom_common.append(self.parconnect_from_src(
gid_src, nc_dict_nmda, self.soma_nmda))
self._connect_feed_at_loc(
feed_loc='proximal', receptor='nmda',
Contributor
@rythorpe rythorpe Aug 27, 2020

@jasmainak why does feed_loc='proximal' in all of the BasketSingle._connect_feed_at_loc() calls? I feel like I'm missing something really obvious...

Collaborator Author

This is a very good point. So looking at the documentation of HNN, it says this for proximal:

The left schematic here shows proximal inputs which target basal dendrites of layer 2/3 and layer 5 pyramidal neurons, and somata of layer 2/3 and layer 5 interneurons.

and this for distal:

The right schematic shows distal inputs which target the distal apical dendrites of layer 5 and layer 2/3 pyramidal neurons and the somata of layer 2/3 interneurons

So, for layer 2/3 there is no difference between proximal and distal. For layer 5, you have an input drive to somata for proximal but not distal

Now if you go the file L5_basket.py (I'm using the HNN repository since that's our "gold standard" implementation) and look for 'loc' there is no reference to it. If you go to L2_basket.py, there is an if-else clause but both branches have exactly the same lines (right??) with a cryptic comment which asks "why only L2 basket and not L5 basket". Am I missing something?

Member

I agree with everything you said @jasmainak. It appears that there is no difference between distal and proximal in the code. It looks like the code changed from only having combined excitatory inputs to both ampa/nmda in the following two commits (in which the comment was added). Commit 317d3324 explains why there was initially an if/else clause. Then the branches became the same in commit 3eee0bd3.

I think the comment is just reflecting what you observed @jasmainak.

Additionally, a similar comment in L5_basket.py noted the difference, but the code still does the same thing for proximal and distal inputs. The desired effect comes from there not being a 'L5_basket' key in p_ext for distal evoked inputs (in paramrw.py).

Contributor

Yes, we should expect L2/3 basket cells to receive both proximal and distal inputs and L5 basket cells to receive only proximal inputs.

If you go to L2_basket.py, there is an if-else clause but both branches have exactly the same lines (right??)

Correct, the if/else statement is unnecessary. It looks like this was corrected in L5_basket.py.

with a cryptic comment which asks "why only L2 basket and not L5 basket". Am I missing something?

This comment is resolved by noting that HNN code does control for proximal vs. distal inputs for both the L2 and L5 basket cell cases. Both L5_basket and L2_basket have an 'if' statement that filters feed location-specific connections according to the parameter keys passed via p_ext. Particularly note that p_unique['evdist*'] items don't contain a dict key for L5_basket.
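The key filtering described above can be sketched like this (parameter values and the function body are made up for illustration): a cell only wires up a feed when its own name appears in p_ext, so L5_basket receives no distal evoked input simply because the 'evdist' parameter dict carries no 'L5_basket' key.

```python
def parreceive_ext(cell_name, p_ext, nc_list):
    # connection parameters are looked up per cell type; cells absent
    # from p_ext are filtered out here, regardless of p_ext['loc']
    if cell_name in p_ext:
        weight, delay = p_ext[cell_name][0], p_ext[cell_name][2]
        nc_list.append((cell_name, weight, delay))

p_evdist = {'loc': 'distal',
            'L2_basket': (0.01, 0.005, 0.1)}  # note: no 'L5_basket' key
nc_list = []
parreceive_ext('L5_basket', p_evdist, nc_list)  # silently skipped
parreceive_ext('L2_basket', p_evdist, nc_list)
```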

Contributor

Sorry @blakecaldwell your previous post just loaded for me.

Collaborator Author

wow! this sounds like a Sherlock Holmes detective story ... thanks for the sleuthing both of you.

Let me see what I can do to improve the readability.

@@ -22,6 +31,8 @@ def __init__(self, gid, pos, cell_name='Basket'):
# for height and xz plane for depth. This is opposite for model as a
# whole, but convention is followed in this function ease use of gui.
self.shape_soma()
self.synapses = dict()
self.sect_loc = dict(proximal=['soma'], distal=[])
Contributor

self.sect_loc shouldn't be defined at the BasketSingle level, but rather separately for L2Basket and L5Basket respectively.

Contributor

I think the reason this is working without throwing any test errors is because by forcing all feed_loc variables to be proximal, we're forcing parreceive_ext to attempt running self._connect_feed_at_loc() if the layer-specific basket cell is defined in p_src.

Collaborator Author

Done. Moving forward, we may want to reduce the if-else clauses. They make the code really hard to follow and prone to bugs when changing. Thanks for carefully reviewing it.


# Check if NMDA params are defined in p_src
if 'L2Basket_nmda' in p_src.keys():
nc_dict_nmda = {
if f'{self.name}_nmda' in p_src.keys():
Contributor

As in here (reference to #146 (comment)).

@jasmainak
Collaborator Author

I pushed 1c71219. Let me know what you guys think. It doesn't change anything functionally -- it just helps with readability, since the functionality is taken care of through the if-else clause; let's reserve consolidating that for another PR.

Also, before you merge, do confirm that I got this right:

  • common inputs and evoked inputs can be both proximal and distal
  • gaussian and poisson inputs are always proximal

It's how the code in the pyramidal.py file appears to be, and I passed feed_loc in basket.py accordingly -- so the two files match. Just helps with clarity ... functionality as we know is taken care of by the if-else clause. I can see more scope for code reduction/improvement but reserving that for later.

@blakecaldwell
Member

If build times are critical (for GUI I guess?), perhaps we could add a test to ensure that build time is always under x seconds?

I like the idea of a performance regression test. It can be easy to change something that is undetectable on a single core, but blows up during parallelization. I'd actually like to do this for network building and simulations.

@rythorpe
Contributor

I pushed 1c71219. Let me know what you guys think. It doesn't change anything functionally -- just helps with the readability. Because the functionality is taken care through the if-else clause but let's reserve consolidating that for another PR.

This looks really good, thanks @jasmainak.

Also, before you merge, do confirm that I got this right:

  • common inputs and evoked inputs can be both proximal and distal

Yes.

  • gaussian and poisson inputs are always proximal

This is true per create_pext() in both hnn.paramrw and hnn_core.params; however, we may want to rephrase this in future PRs. I'm not convinced the proximal vs. distal feed location was intended to apply to the gaussian or poisson input from a physiological model perspective.

@rythorpe rythorpe merged commit 9602bcc into jonescompneurolab:master Aug 28, 2020