
postprocessing #188

Merged
merged 19 commits into from Nov 30, 2020

Conversation

@kohl-carmen (Contributor) commented Sep 24, 2020

Closes #183
Closes #172

@jasmainak (Collaborator)

@blakecaldwell do you mind taking a look to see if it addresses what you had in mind for #183 ?

@kohl-carmen one thing that remains to be checked is whether the GUI output written to rawdpl.txt is baseline renormalized or not.

@jasmainak (Collaborator)

another thing to do here is to add a test for the new postproc parameter.

@@ -19,7 +19,6 @@

class Pyr(_Cell):
"""Pyramidal neuron.

Collaborator review comment:

changes are unrelated to the PR, so best to undo them as we discussed!

@blakecaldwell (Member)

Just getting around to reviewing this. Sorry for the delay. Yes, this is looking very good and should resolve #183 with some minor updates. We briefly touched on this on Monday, but I have some specific suggestions now.

Since HNN GUI needs to write both rawdpl.txt and dpl.txt, I think this could be done by making a postproc() member of Dipole that gets called in simulate when post_proc=True. This would allow HNN GUI to call simulate_dipole(..., post_proc=False), write rawdpl.txt, and then dpl.postproc(params) before writing the dpl.txt file.

def postproc(self, params):
    self.baseline_renormalize(params)
    self.convert_fAm_to_nAm()
    self.scale(params['dipole_scalefctr'])
    self.smooth(params['dipole_smooth_win'] / params['dt'])

Note, the relevant params could be passed instead of the dictionary, perhaps preferable? I didn't look up what params baseline_renormalize needs.
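As a rough illustration of the pattern being proposed, here is a self-contained sketch with a stub `Dipole` class. Only the method names mirror the suggestion above; the method bodies, parameter values, and unit conversion are simplified stand-ins, not the actual hnn-core implementation, and `baseline_renormalize` is omitted since its parameters weren't looked up:

```python
import numpy as np

# Toy sketch of the proposed Dipole.postproc() member. The class is a
# stub for illustration only; the real hnn-core Dipole differs.
class Dipole:
    def __init__(self, data):
        self.data = np.asarray(data, dtype=float)

    def convert_fAm_to_nAm(self):
        self.data *= 1e-6  # 1 fAm = 1e-6 nAm

    def scale(self, fctr):
        self.data *= fctr

    def smooth(self, winsz):
        winsz = int(round(winsz))
        if winsz > 1:  # normalized Hamming-window convolution
            win = np.hamming(winsz)
            self.data = np.convolve(self.data, win / win.sum(), mode='same')

    def postproc(self, params):
        # baseline_renormalize(params) would come first in the real version
        self.convert_fAm_to_nAm()
        self.scale(params['dipole_scalefctr'])
        self.smooth(params['dipole_smooth_win'] / params['dt'])

# GUI-style workflow: keep the raw trace, then post-process in place.
params = {'dipole_scalefctr': 3000.0, 'dipole_smooth_win': 30.0, 'dt': 0.025}
rng = np.random.default_rng(0)
dpl = Dipole(rng.standard_normal(2000))
raw = dpl.data.copy()   # what would be written to rawdpl.txt
dpl.postproc(params)    # then dpl.data would be written to dpl.txt
```

The point of the design is that the raw and post-processed series come from the same object, so the GUI can write both files without simulating twice.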

Comment on lines 117 to 122
      if neuron_net.net.params['save_dpl']:
          dpl.write('rawdpl.txt')

-     dpl.baseline_renormalize(neuron_net.net.params)
-     dpl.convert_fAm_to_nAm()
-     dpl.scale(neuron_net.net.params['dipole_scalefctr'])
-     dpl.smooth(neuron_net.net.params['dipole_smooth_win'] / h.dt)
+     if postproc:
+         dpl.scale(neuron_net.net.params['dipole_scalefctr'])
+         dpl.smooth(neuron_net.net.params['dipole_smooth_win'] / h.dt)
Member review comment:

        if postproc:
            self.postproc(params)

@jasmainak (Collaborator)

I see, I would make it _postproc so it's a private method used only in the GUI. Don't want to encourage the pattern of users calling foo_function(params) from their Python scripts. Alternatively, if it makes sense, as you suggested, we can have explicitly the parameters for all the postprocessing methods ...

@kohl-carmen I saw you pushed some new commits. Let us know when we should take a look again. I get notifications only for comments, not for commits.

@blakecaldwell (Member) commented Oct 1, 2020

I agree with using specific parameters. It somewhat negates the point below, but I think it's worth pointing out.

I blanket disagree with making functions that HNN uses private. HNN should be considered a first-class use case of hnn-core.

@jasmainak (Collaborator)

Sure, of course HNN is a first class citizen of hnn-core. However, the reason I suggested using a private method is that it gives us flexibility to change the internals. If users start using hnn-core by passing params, they will complain if we try to change it in the future.

@jasmainak jasmainak added this to the 0.1 milestone Oct 9, 2020
@codecov-io commented Oct 22, 2020

Codecov Report

Merging #188 (b304836) into master (bc8e4f2) will decrease coverage by 0.65%.
The diff coverage is 73.40%.


@@            Coverage Diff             @@
##           master     #188      +/-   ##
==========================================
- Coverage   68.16%   67.50%   -0.66%     
==========================================
  Files          21       21              
  Lines        2202     2219      +17     
==========================================
- Hits         1501     1498       -3     
- Misses        701      721      +20     
Impacted Files Coverage Δ
hnn_core/network_builder.py 93.93% <ø> (+0.17%) ⬆️
hnn_core/parallel_backends.py 15.31% <16.66%> (-0.81%) ⬇️
hnn_core/dipole.py 23.65% <28.57%> (-29.68%) ⬇️
hnn_core/tests/conftest.py 80.48% <100.00%> (+24.93%) ⬆️
hnn_core/tests/test_dipole.py 100.00% <100.00%> (ø)
hnn_core/tests/test_parallel_backends.py 100.00% <100.00%> (+26.31%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update bc8e4f2...b304836.

params : dict
The parameters
N_pyr_x : int
Nr of cells (x)
Collaborator review comment:

Suggested change:
- Nr of cells (x)
+ Number of Pyramidal cells in x direction

Member review comment:

add this change in next commit?

@jasmainak (Collaborator)

@kohl-carmen can you update a test?

@kohl-carmen (Contributor, Author)

@klankinen and I tried to fix the postproc test. We set the std of the inputs to 0, rather than changing the tolerance on assert_allclose() because we didn't know what a reasonable tolerance would be?

The postproc test is currently still in test_dipole.py but we're happy to move it (might need some help)

@rythorpe (Contributor) commented Nov 19, 2020

@klankinen and I tried to fix the postproc test. We set the std of the inputs to 0, rather than changing the tolerance on assert_allclose() because we didn't know what a reasonable tolerance would be?

Hmm, setting the std of the input times to 0 won't control for error induced by smoothing vs. not smoothing a given trial. My recommendation would be to leave jitter in between trials (i.e., reset start time std values to default) and solve for a reasonable tolerance as follows:

  1. Set the smoothing kernel to be super narrow such that it only averages a few time samples surrounding the central time sample.

  2. Given that the smoothing kernel we use is a hamming window, the maximal difference between the original central time sample and the smoothed version cannot exceed the difference between the original central time sample and the mean of all samples contained in the smoothing window.

  3. Use (2) to compute the maximal relative difference (i.e., tolerance) between the post-processed time series and the original time series.

This method might be overkill but it should only take a few lines of code and can serve as a sanity check for any subsequent changes made to the post-processing steps in later PRs.
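The steps above can be sketched as follows. This assumes the smoothing step is a normalized Hamming-window convolution (as the Dipole smoothing was at the time of this PR), and uses a slightly more conservative bound than step (2): the distance from each central sample to the farthest sample in its window, which always dominates the distance to the window mean.

```python
import numpy as np

# Stand-in for a dipole trace; in a real test this would come from
# simulate_dipole with post_proc=False.
rng = np.random.default_rng(0)
signal = rng.standard_normal(1000)

winsz = 5  # step (1): a super narrow smoothing kernel
window = np.hamming(winsz)
window /= window.sum()  # weights are positive and sum to 1
smoothed = np.convolve(signal, window, mode='same')

# Step (2): a weighted average with convex weights can deviate from the
# central sample by at most the farthest sample in the window.
half = winsz // 2
max_dev = max(
    np.abs(signal[i - half:i + half + 1] - signal[i]).max()
    for i in range(half, len(signal) - half)
)

# Step (3): away from the edges, the smoothed series stays within that
# absolute tolerance of the original.
interior = slice(half, len(signal) - half)
assert np.abs(smoothed[interior] - signal[interior]).max() <= max_dev + 1e-12
```

An absolute tolerance computed this way could then be passed to assert_allclose via its atol argument.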

The postproc test is currently still in test_dipole.py but we're happy to move it (might need some help)

Idk, @jasmainak @ntolley what do you think?

@jasmainak (Collaborator)

If you guys are working together you can both get credit for commits by co-authoring them. Just wanted to put it out there for future commits. You can also edit older commits and add authors but probably not worth the effort unless @klankinen insists ;-)
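For reference, co-authoring works via a commit-message trailer; a minimal demonstration in a throwaway repo (the committer name and email here are placeholders, the trailer line is the point):

```shell
# GitHub credits everyone listed in a "Co-authored-by:" trailer at the
# end of the commit message.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name="Author Name" -c user.email="author@example.com" \
    commit -q --allow-empty \
    -m "Fix postproc test" \
    -m "Co-authored-by: Kaisu Lankinen <klankinen@mgh.harvard.edu>"
git log -1 --format=%B
```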

@jasmainak (Collaborator)

because we didn't know what a reasonable tolerance would be?

Something reasonably small ... Is there a reason they are not exactly equal? Usually the defaults should do but if not you can look around in the codebase for tolerance used.

@@ -186,21 +194,21 @@ def plot(self, ax=None, layer='agg', show=True):
"""
return plot_dipole(dpl=self, ax=ax, layer=layer, show=show)

def baseline_renormalize(self, params):
Collaborator review comment:

lovely!

Comment on lines 128 to 168
if postproc:
    N_pyr_x = net.params['N_pyr_x']
    N_pyr_y = net.params['N_pyr_y']
    winsz = net.params['dipole_smooth_win'] / net.params['dt']
    fctr = net.params['dipole_scalefctr']
    for idx in range(n_trials):
        dpls[idx].post_proc(N_pyr_x, N_pyr_y, winsz, fctr)
@jasmainak (Collaborator) commented Nov 19, 2020:

do you think we could avoid the duplication of code between the backends by putting this inside _gather_trial_data with postproc as an argument?
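A toy sketch of that de-duplication idea: pull the per-trial loop out of each backend into a shared helper with a postproc flag. All names here are stand-ins for illustration; only the post_proc call signature mirrors the snippet above.

```python
# Stub dipole that just records whether post-processing was applied;
# the real version renormalizes, converts units, scales, and smooths.
class FakeDipole:
    def __init__(self):
        self.processed = False

    def post_proc(self, N_pyr_x, N_pyr_y, winsz, fctr):
        self.processed = True

def _gather_trial_data(dpls, params, postproc=True):
    """Apply post-processing to all trials in one place (sketch)."""
    if postproc:
        winsz = params['dipole_smooth_win'] / params['dt']
        fctr = params['dipole_scalefctr']
        for dpl in dpls:
            dpl.post_proc(params['N_pyr_x'], params['N_pyr_y'], winsz, fctr)
    return dpls

params = {'N_pyr_x': 10, 'N_pyr_y': 10, 'dipole_smooth_win': 30.0,
          'dt': 0.025, 'dipole_scalefctr': 3000.0}
dpls = _gather_trial_data([FakeDipole(), FakeDipole()], params)
raw_dpls = _gather_trial_data([FakeDipole()], params, postproc=False)
```

Each backend would then call the one helper instead of repeating the loop.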

@jasmainak (Collaborator) commented Nov 19, 2020

The postproc test is currently still in test_dipole.py but we're happy to move it (might need some help)

ideally, we should avoid duplication but let's get the other stuff working first

also you need to rebase. Do you need help? I think @ntolley is an expert ;-)

last thing is you need to update whats_new.rst. Do it last to avoid having to rebase again (the file gets updated in almost every PR).

@ntolley (Contributor) commented Nov 20, 2020

On first glance it looks like the rebase should be pretty easy, I think most of the conflicts can be resolved by accepting both changes (one exception is adding the new argument to simulate_dipole()). Feel free to ping me if you have any questions.

@rythorpe (Contributor)

Something reasonably small ... Is there a reason they are not exactly equal? Usually the defaults should do but if not you can look around in the codebase for tolerance used.

This probably won't work because the error here is induced by rounding error rather than by convolution with a smoothing kernel.

@jasmainak (Collaborator)

rather than by convolution with a smoothing kernel.

not sure I follow. There are two tests: 1) checking if dpl_raw and dpl are different. and 2) checking if dpl_raw.post_proc(...) gives the same result as simulate_dipole(..., post_proc=True). It's doing the same arithmetic unless I'm missing something?

@rythorpe (Contributor)

not sure I follow. There are two tests: 1) checking if dpl_raw and dpl are different. and 2) checking if dpl_raw.post_proc(...) gives the same result as simulate_dipole(..., post_proc=True). It's doing the same arithmetic unless I'm missing something?

Oops, somehow I missed test_dipole.py L67. For some reason I thought the goal was to simultaneously test the scope of the post-processing.

kohl-carmen and others added 7 commits November 29, 2020 13:33 (each co-authored by Kaisu Lankinen <klankinen@mgh.harvard.edu>)
jasmainak and others added 5 commits November 29, 2020 13:40 (three of them co-authored by Kaisu Lankinen <klankinen@mgh.harvard.edu>)
@pytest.fixture(scope='module')
def run_hnn_core():
    def _run_hnn_core(backend=None, n_procs=None, n_jobs=1, reduced=False,
                      record_vsoma=False, record_isoma=False):
Collaborator review comment:

@kohl-carmen if you add a postproc parameter here, then you can avoid duplication of code

@jasmainak (Collaborator) left a review:

PR is good for me now! I snuck in a couple of fixes related to requires_mpi4py and moving run_hnn_core into a pytest fixture.

@ntolley @blakecaldwell @rythorpe if you guys are happy please feel free to merge and/or leave comments if something is to be fixed

assert len(joblib_net.cell_response.isoma[
    trial][gid]['soma_gabaa']) == n_times

@requires_mpi4py
Member review comment:

Nice, I didn't even realize we had a MPIBackend test in another file besides parallel_backends.py. Thanks @jasmainak for making the testing changes.

Contributor review comment:

Ah those would be my changes, thanks for fixing my ad-hoc MPI tests @jasmainak.

@blakecaldwell (Member) left a review:

Just have a few suggestions, mainly in the new testing code added. So close!

hnn_core/tests/test_dipole.py (review thread, outdated and resolved)


@pytest.fixture(scope='module')
def run_hnn_core():
Member review comment:

what about putting "fixture" in this function name? Otherwise it looks like a mysterious parameter to other testing functions.

Collaborator review comment:

+1 @kohl-carmen can you make the changes yourself?


@blakecaldwell (Member) left a review:

This looks great! Thanks @kohl-carmen

@blakecaldwell (Member)

@jasmainak would you do a "squash and merge" or an automatic rebase to merge this PR? Not sure what your preference is.

@jasmainak (Collaborator)

I would do a "rebase and merge". That would give @kohl-carmen full credit for the effort over the past month. Once we have an established set of contributors, we can move to a "squash and merge" model.

@blakecaldwell blakecaldwell merged commit fa9c666 into jonescompneurolab:master Nov 30, 2020
Successfully merging this pull request may close these issues.

access raw dipole for each trial
mpi4py installation
6 participants