
[MRG] Add functionality to record somatic voltage from all cells in simulation #190

Merged
merged 37 commits into master on Nov 13, 2020

Conversation

@ntolley (Contributor) commented Sep 25, 2020

closes #181. Starting this PR to discuss how this should be implemented. At the moment I am considering a dictionary indexed by gids, the values would be an array of the somatic voltage recorded. Alternatively it could be implemented as its own object like the Spikes class.

Another point to consider is if we want to record and save voltages by default, or have it enabled when specified by the user.
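A minimal sketch of the dictionary option under discussion (all names and sizes below are hypothetical placeholders, not hnn-core code):

```python
# Hypothetical shape of the dictionary option: one somatic voltage trace
# per cell, keyed by that cell's gid (sizes here are placeholders).
n_cells = 3
n_samples = 5
vsoma = {gid: [0.0] * n_samples for gid in range(n_cells)}

# looking up the recorded trace for one cell is then a plain dict access
trace = vsoma[2]
```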

@jasmainak (Collaborator) commented Sep 25, 2020

starting a PR after changing 3 lines, I think you're getting the hang of open source and open science now ;-)

umm ... the cleanest API would probably be to rename Spikes to something like Cell, then rename existing Cell to CellBuilder and then anything related to cells will be part of this Cell object: soma voltage, currents, spikes etc. In the future, it could also have geometry information and a plot function. Do you think it's doable by you or too much?

@jasmainak (Collaborator)

Just to elaborate a bit. You should be able to do:

```python
>>> net = Network(...)
>>> net.cells[50].spikes
>>> net.plot_spikes()
>>> net.cells[50].vsoma
>>> plt.plot(net.cells[50].vsoma)
```

@ntolley (Contributor) commented Sep 25, 2020

I'll give it a shot! I have a bit more free time in the coming weeks so ideally I can make decent progress.

@ntolley (Contributor) commented Sep 26, 2020

First pass at storing voltages in the Spikes class. Everything seems to be working OK, but I'll need to compare against the HNN-GUI recorded voltages. Here's an example of the voltages from each cell class during the default simulation:
[Image: somatic voltage traces for each cell class in the default simulation]

@ntolley (Contributor) commented Sep 27, 2020

Implemented some basic slicing functionality for the Spikes class (the next commit will allow for arbitrary lists/arrays). At the moment the slice operation returns a smaller Spikes object with the contents filtered to gids that match the slice.

One consideration is that indexing gids that do not exist will return an empty list. Considering the structure of spikes._times and similar attributes, I think this makes the most sense. However, I'm wondering if a warning should be raised if the slice produces a completely empty Spikes object.
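A rough sketch of the gid-based indexing being described, using a simplified stand-in for the Spikes class (the real class stores per-trial nested lists, so this is illustrative only):

```python
class SimpleSpikes:
    """Simplified stand-in for hnn-core's Spikes class (illustrative only)."""

    def __init__(self, times, gids):
        # parallel lists: one entry per spike event
        self._times = times
        self._gids = gids

    def __getitem__(self, gid_item):
        # accept a single gid, a slice of gids, or a list/array of gids
        if isinstance(gid_item, slice):
            stop = (max(self._gids) + 1) if self._gids else 0
            gids = set(range(*gid_item.indices(stop)))
        elif isinstance(gid_item, int):
            gids = {gid_item}
        else:
            gids = set(gid_item)
        keep = [i for i, g in enumerate(self._gids) if g in gids]
        # a gid with no spikes (or one that doesn't exist) simply yields
        # an empty result, matching the behavior weighed in the comment
        return SimpleSpikes([self._times[i] for i in keep],
                            [self._gids[i] for i in keep])
```

Indexing with a nonexistent gid, e.g. `spikes[999]`, returns an empty object rather than raising, which is exactly the behavior the comment weighs against emitting a warning.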

@codecov-commenter

Codecov Report

Merging #190 into master will increase coverage by 0.01%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##           master     #190      +/-   ##
==========================================
+ Coverage   68.07%   68.08%   +0.01%     
==========================================
  Files          19       19              
  Lines        2039     2040       +1     
==========================================
+ Hits         1388     1389       +1     
  Misses        651      651              
Impacted Files Coverage Δ
hnn_core/network.py 66.27% <100.00%> (+0.20%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 7097b52...a8c6e88.

@ntolley (Contributor) commented Oct 5, 2020

Well this is no fun, the test Travis is failing on seems to run fine on my machine. In any case I restructured storage of vsoma to mimic cell.current by setting up recordings when the cell is instantiated. In the future it may be worth moving cell.current data to the proposed CellResponse class, however the previous MPI errors may reflect an issue with creating h.Vector() recordings in the wrong scope.

@rythorpe @jasmainak If you have the chance can you try running test_compare_across_backends() from your machine? If there is an error it will likely read "MPI simulation didn't return any data"

In the meantime I may push some variants of handling somatic voltages to try to appease Travis/MPI.

@jasmainak (Collaborator)

Yep, I could replicate. The problem is that you don't have mpi4py installed, so you're not able to replicate. Can you do:

import mpi4py

in the IPython terminal? Also try putting a breakpoint (e.g., just add a line saying "1/0") in your test, run it with the "--pdb" flag, and try doing "import mpi4py". I think it won't work.

Now the issue is the following: we don't get a message that the test was skipped when mpi4py is not installed. I thought this was fixed, but apparently we didn't do the proper fix when I looked at it. We should do something like this so the entire test is skipped when mpi4py is not installed. @ntolley want to take a shot at fixing that first? Basically, copy these lines and then rebase this pull request once that's merged into master.
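The canonical pytest idiom for this is `pytest.importorskip('mpi4py')` at the top of the test (or module), which skips with a visible message when the import fails. The availability check it relies on can be sketched without pytest (function name hypothetical):

```python
def mpi4py_available():
    # mirrors what pytest.importorskip('mpi4py') checks before deciding
    # whether to skip: can the module actually be imported?
    try:
        import mpi4py  # noqa: F401
        return True
    except ImportError:
        return False
```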

@jasmainak (Collaborator)

also cc-ing @blakecaldwell since he's the MPI expert :-)

@jasmainak (Collaborator)

By the way, just for your info: when you have something finicky that you can't replicate on your box, there is also the option of ssh-ing into the Travis build and trying there. It's a pain (because the Travis box won't have a bunch of stuff installed), but you can use it as a last resort.

@blakecaldwell (Member)

> I thought this was fixed but apparently we didn't do the proper fix when I looked at it.

The way I did it preserves comparing results when the joblib backend is used. Maybe you want to split this test up like it was before e847137#diff-c85e4dc16a1586adb223901f0de74fdf

@blakecaldwell (Member)

@ntolley I found a problem in how the results are being passed between mpi_child.py and MPIBackend.simulate(). This is a failure of my code to correctly pad the data, so I'll come up with a fix in a new PR and you can rebase? It seems that the tests pass once I fix this.

@jasmainak since you have a specific plan in mind for how the tests should look, what about taking that on in a future PR?

@blakecaldwell (Member) commented Oct 6, 2020

While #192 is still open, you can work around this by:

```diff
diff --git a/hnn_core/mpi_child.py b/hnn_core/mpi_child.py
index 5d02ab85..ec5d34b7 100644
--- a/hnn_core/mpi_child.py
+++ b/hnn_core/mpi_child.py
@@ -92,6 +92,7 @@ def run_mpi_simulation():
                                         'base64')
 
         data_iostream.write(repickled_bytes + b"===")
+        data_iostream.write(repickled_bytes + b"====")
 
     # flush anything in stderr (still points to str_err) to stdout
     sys.stderr.flush()
```
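For context, the child process pickles the simulation data, base64-encodes it, and writes it to a stream followed by a delimiter; the parent then has to find where the payload ends. A minimal sketch of that framing (function names hypothetical, not hnn-core's actual code) shows why the delimiter choice interacts with base64's own `=` padding:

```python
import base64
import pickle

# base64 pads with at most two '=' characters, so a run of four '=' can
# only come from the delimiter, never from the payload itself
DELIM = b"===="

def frame(obj):
    # pickle then base64-encode, as mpi_child.py does, then mark the end
    payload = base64.b64encode(pickle.dumps(obj))
    return payload + DELIM

def unframe(stream):
    # split on the *last* delimiter occurrence so any '=' padding that
    # legitimately ends the base64 payload is preserved
    end = stream.rfind(DELIM)
    return pickle.loads(base64.b64decode(stream[:end]))
```

One plausible reading of why the delimiter length matters: if the reader naively splits on the *first* occurrence of a three-character `===` delimiter, a payload whose base64 ends in `==` padding produces a run of five `=` characters, and the split truncates the payload's own padding, breaking the decode.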

@ntolley (Contributor) commented Oct 6, 2020

Thanks @blakecaldwell!

@jasmainak to finish off this PR I'll work on cleaning up indexing the Spikes object. Depending on how long #192 takes to merge I may start work on storing somatic currents in the same format.

Keeping the integration timeline in mind, do you think the proposed test PR can be merged afterwards?

@rythorpe (Contributor) commented Oct 6, 2020

> I thought this was fixed but apparently we didn't do the proper fix when I looked at it.
>
> The way I did it preserves comparing results when the joblib backend is used. Maybe you want to split this test up like it was before e847137#diff-c85e4dc16a1586adb223901f0de74fdf

Just to provide more context: the commit that combined running both backends under a single test was designed to speed up and simplify the test. The idea was that each backend needed two simulations: one for comparing to the default 'ground truth' simulation and one for checking consistency with the other backends. It looks like we'll need to rethink this, however, given the mpi4py import issue. Maybe we can also consider shortening the default 'ground truth' simulation in order to speed up the tests that require simulations.

@jasmainak (Collaborator) commented Oct 7, 2020

We can use pytest fixtures to avoid duplicating the same test.
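A sketch of the fixture idea, with a placeholder in place of the real simulation (all names hypothetical): a module-scoped fixture runs the expensive reference simulation once, and each backend test reuses the shared result instead of recomputing it.

```python
import pytest

def run_reference_sim(tstop=40.0):
    # placeholder for the expensive 'ground truth' hnn-core simulation;
    # returns a fake summary so the sharing pattern is visible
    return {"tstop": tstop, "dipole": [0.0] * 5}

@pytest.fixture(scope="module")
def reference_result():
    # computed once per test module, shared by every backend test
    return run_reference_sim()

def test_backend_matches_reference(reference_result):
    # each backend's result would be compared against the shared reference
    assert run_reference_sim() == reference_result
```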

@jasmainak (Collaborator)

> Keeping the integration timeline in mind, do you think the proposed test PR can be merged afterwards?

I would say bring this PR to a mergeable state as soon as you can. I would also like to work on other PRs which could create merge conflicts and slow things down. Adding the currents should be easy once we have this working, no?

@ntolley (Contributor) commented Oct 9, 2020

Yeah, adding currents would work nearly identically to somatic voltage. A cleaner solution might actually be to combine the aggregate_currents() and aggregate_voltages() functions. This is more complicated due to how the dipole is calculated (the summation occurs implicitly in the parallel call), but it shouldn't be too difficult.
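The combined aggregation could look something like this sketch, where a single helper gathers any per-cell recording attribute (names and data layout are hypothetical; hnn-core's actual cells record into NEURON h.Vector objects):

```python
def aggregate_recordings(cells, attr):
    # one helper for any per-cell recording ('vsoma', 'isoma', ...),
    # replacing separate aggregate_voltages()/aggregate_currents()
    return {cell["gid"]: list(cell[attr]) for cell in cells}

# hypothetical cells carrying both voltage and current recordings
cells = [{"gid": 0, "vsoma": [-65.0, -64.8], "isoma": [0.0, 0.1]},
         {"gid": 1, "vsoma": [-65.0, -63.9], "isoma": [0.0, 0.2]}]
vsoma = aggregate_recordings(cells, "vsoma")
```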

@blakecaldwell (Member)

@ntolley I'd agree with making the code as simple as possible first; then let's see if it's necessary to add back any optimizations.

@ntolley (Contributor) commented Oct 15, 2020

Will dig into the Travis error tomorrow. The error message is actually being printed now, since the voltage data isn't passed through stderr, but I'm unsure whether it's related to how I'm storing voltage data or some other issue:
[Image: Travis CI error output]

In any case, I've fleshed out the __getitem__() method for the Spikes class, which allows indexing by gids. @jasmainak @rythorpe, as we discussed, some more thought will need to go into how we want users to interact with the fully fledged CellResponse class, but I think this is a good start.

Once the Travis error is resolved I'd be happy to merge, please feel free to suggest changes to the gid indexing or voltage recording.

@jasmainak (Collaborator)

should we remove this example: https://jonescompneurolab.github.io/hnn-core/auto_examples/plot_firing_pattern.html#sphx-glr-auto-examples-plot-firing-pattern-py

and instead use your function to plot the soma voltage?

@ntolley (Contributor) commented Nov 13, 2020

Finished adding tests for the dimensions of the vsoma data. The MPI fix seems to be running flawlessly; thanks for all the hard work @blakecaldwell!


```python
simulate_dipole(net, n_trials=2, record_vsoma=True)
assert len(net.spikes.vsoma) == 2
assert len(net.spikes.vsoma[0]) == 24
```
Collaborator

I'm going to merge this, but could you clarify where 24 comes from?

@ntolley (Contributor Author) Nov 13, 2020

It's just the length of the recording. I figure hard coding is preferable in this situation since the length of all the recording vectors is derived from spikes.times. For example, it'd be useful to know if a future commit somehow changes the length of the simulation/recording.

@jasmainak (Collaborator) Nov 13, 2020

I see ... you could have also used tstop and dt from params and recomputed the length, I guess. Anyway, no strong feelings either way
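The recomputation being suggested would look roughly like this. The values below are placeholders chosen only to produce 24 samples; the actual simulation parameters may differ, and counting the t=0 sample is an assumption about the recording convention:

```python
def expected_n_samples(tstop, dt):
    # samples recorded on [0, tstop] at step dt, counting the t=0 sample
    # (assumption; the real recording convention may differ)
    return int(round(tstop / dt)) + 1
```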

@jasmainak jasmainak merged commit 56f3565 into jonescompneurolab:master Nov 13, 2020
@jasmainak (Collaborator) commented Nov 13, 2020

Thanks a bunch @ntolley for the PR and for your patience ! 🍻 🍺

let's keep the ball rolling in #199

Comment on lines +53 to +58
```python
if record_vsoma is not None and isinstance(record_vsoma, bool):
    net.params['record_vsoma'] = record_vsoma
else:
    raise TypeError("record_vsoma must be bool, got %s"
                    % type(record_vsoma).__name__)
```

Member

Hi @ntolley @jasmainak! I had a followup on this PR. Enforcing a bool here causes an issue with how the HNN GUI displays parameter values. Normally, the user enters 0 or 1, which gets saved to the params dict. Here, it changes that value and the GUI shows "False", which is not suitable for a text box. I could change the GUI to automatically reconvert to 0 or 1 pretty easily, but this seems to be the first parameter hnn-core enforces. Is that the convention you are aiming for?

BTW, I realize a better GUI solution would be checkboxes but am kind of hoping hnn-core will allow 0 or 1 and I can leave the code as-is.

Contributor Author

I'd defer to @jasmainak's opinion on this one, but I believe this has come up in the past and the decision was to avoid ints for boolean values.

Collaborator

Yeah, we're aiming for this to be a boolean in hnn-core. Do you mind type-casting it in HNN for now? We can convert to checkboxes in the future!
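The type-cast on the HNN GUI side could be as small as this sketch (function name hypothetical):

```python
def to_bool(value):
    # accept real booleans plus the GUI's "0"/"1" text-box values
    # (hypothetical helper, not HNN's actual code)
    if isinstance(value, bool):
        return value
    return bool(int(value))
```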

Member

sure, thanks for the clarification

Successfully merging this pull request may close these issues.

Implement mechanism to record somatic voltages for all cells in a simulation