Add correlate_template() function with 'full' normalization; deprecate 'domain' keyword in correlate() in favor of 'method' keyword #2042

Merged
38 commits merged into master from normalized_correlation on Jan 23, 2018

Conversation

@calum-chamberlain
Contributor

calum-chamberlain commented Jan 12, 2018

What does this PR do?

This PR adds full normalisation to cross-correlations as discussed in #2035.

I have based this on Master given that it was flagged for 1.2.0 in #2035, and it isn't strictly a bug-fix.

I still need to work out a good way to test for rounding errors. The case I usually use is to download some data from the 2016 M7.8 Kaikoura earthquake, which has enough dynamic range to produce floating-point errors in many correlation implementations when correlating a full day of data. How do we feel about adding an extra network test that is quite memory intensive...?
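For illustration, here is a minimal sketch (with synthetic data standing in for the Kaikoura record) of the kind of rounding-error check I mean: the same zero-normalized correlation computed in float32 and float64 drifts apart once the data carries a large offset:

import numpy as np

# Synthetic stand-in for a day-long record with a large dynamic range:
# a big DC offset on top of unit-scale noise is enough to expose float32
# rounding in the normalization terms (means and sums of squares).
rng = np.random.default_rng(0)
data64 = 1e6 + rng.standard_normal(50_000)
win_a, win_b = data64[10_000:10_400], data64[10_200:10_600]

def zero_normalized_cc(x, y):
    """Reference zero-normalized correlation of two equal-length windows."""
    x = x - x.mean()
    y = y - y.mean()
    return float((x * y).sum() / np.sqrt((x * x).sum() * (y * y).sum()))

cc64 = zero_normalized_cc(win_a, win_b)
cc32 = zero_normalized_cc(win_a.astype(np.float32), win_b.astype(np.float32))
print(cc64, cc32, abs(cc64 - cc32))  # the float32 result drifts away from the float64 reference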

TODO list:

  • Add tests for all versions of normalize/demean arguments;
  • Find source of differences between EQcorrscan result and obspy
  • Allow zero-normalized and normalized in _normxcorr using the demean argument;
  • (Possibly) Add network test with Kaikoura dataset (passes locally currently, but unsure how to do it non-locally - currently comparing to an EQcorrscan result computed on-the-fly, which would require an EQcorrscan install too...)

PR Checklist

  • Correct base branch selected? master for new features, maintenance_... for bug fixes
  • This PR is not directly related to an existing issue (which has no PR yet).
  • If the PR is making changes to documentation, docs pages can be built automatically.
    Just remove the space in the following string after the + sign: "+DOCS"
  • If any network modules should be tested for the PR, add them as a comma separated list
    (e.g. clients.fdsn,clients.arclink) after the colon in the following magic string: "+TESTS:"
    (you can also add "ALL" to just simply run all tests across all modules)
  • All tests still pass.
  • Any new features or fixed regressions are covered via new tests.
  • Any new or changed features are fully documented.
  • Significant changes have been added to CHANGELOG.txt.
  • First time contributors have added their name to CONTRIBUTORS.txt.

@calum-chamberlain calum-chamberlain added this to the 1.2.0 milestone Jan 12, 2018

@calum-chamberlain calum-chamberlain self-assigned this Jan 12, 2018

@krischer

Looks pretty good to me. I think someone else should still review the cross correlation normalization as I'm not at all an expert in this.

Can you add a small test case that tests all possibilities of the normalize argument? There are a lot now and it will be easy to introduce a regression in future changes if this is not explicitly tested. A mock test that checks which functions are being called might work well.
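Something along these lines might work - a self-contained sketch of the mocking pattern (correlate_stub and the _*_norm helpers are invented stand-ins for the real dispatcher and its helpers, not obspy's actual internals):

import unittest
from unittest import mock

import numpy as np


# Toy stand-ins for the real module: correlate_stub dispatches to a helper
# depending on ``normalize``; the names are invented purely to show the pattern.
def _naive_norm(cc, a, b):
    return cc / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))


def _full_norm(cc, a, b):
    return cc  # placeholder for the 'full' normalization


def correlate_stub(a, b, normalize='naive'):
    cc = np.correlate(a, b, mode='full')
    if normalize == 'naive':
        return _naive_norm(cc, a, b)
    elif normalize == 'full':
        return _full_norm(cc, a, b)
    return cc


class DispatchTest(unittest.TestCase):
    def test_full_normalization_calls_full_helper(self):
        a, b = np.random.randn(50), np.random.randn(20)
        # Patch the helper and check it was dispatched to for normalize='full'.
        with mock.patch(__name__ + '._full_norm', wraps=_full_norm) as patched:
            correlate_stub(a, b, normalize='full')
        patched.assert_called_once()


if __name__ == '__main__':
    unittest.main()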

@trichter

Member

trichter commented Jan 15, 2018

This also looks good to me. Here are my comments after a first, superficial review:

  • In the test you added, the values of cc are all lower than the results from EQcorrscan. Also, the maximum correlation is a bit lower than 1, which is not expected. You compensate for this by setting a very high atol. Maybe you missed something in the algorithm? Could you also change the test a bit so that the shift does not equal zero?
  • If len(a) == len(b), the output of correlate should be the same for demean=True, normalize='naive' and normalize='full'. Am I correct? If yes, you could add a simple test checking it.
  • The algorithm only works for len(a) >= len(b). Could you please generalize by switching variables and mirroring the result for len(a) < len(b)? (+test; a small sketch of this idea follows after the list)
  • I think it would be nice if the demean parameter could be passed to _normxcorr. Depending on its value, _normxcorr could perform zero-normalized or normalized cross-correlation.
  • The test for rounding errors should definitely be added. If you need the real data, the test could be skipped by default and only run when triggered. (How exactly to implement this I don't know. We have to ask @krischer or @megies.)
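A minimal sketch of the swap-and-mirror idea from the third point (np.correlate stands in here for the actual routine, which additionally has to mirror the normalization terms):

import numpy as np


def xcorr_any_length(a, b):
    """Swap-and-mirror: correlate with the longer array first, then flip
    the result if the inputs were swapped. Illustration only."""
    if len(a) < len(b):
        return xcorr_any_length(b, a)[::-1]
    return np.correlate(a, b, mode='full')


a = np.random.randn(30)
b = np.random.randn(50)
# For real-valued input the mirrored result matches the direct computation.
assert np.allclose(xcorr_any_length(a, b), np.correlate(a, b, mode='full'))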
@calum-chamberlain

Contributor

calum-chamberlain commented Jan 16, 2018

Thanks for those reviews @trichter and @krischer. I'm making some changes here and have fixed the things I replied to. I agree that the results are not quite as expected and I need to dig into this a little more.
For now I will push up my changes with the tests deliberately failing on the comparison of normalized results; I can ping you again, @trichter, once I have found the cause of the differences, if you want?

@trichter

Member

trichter commented Jan 16, 2018

I meant something else with that comment. In line 155

np.divide(move_mean, len(long_data), out=move_mean)

you have to substitute len(long_data) with len(short_data) and some problems will go away.

Also, I think I was not correct with the point that the correlations with different normalizations are the same for len(a) == len(b). But you can check that full_xcorr == naive_xcorr for the mid sample and that full_xcorr > naive_xcorr for all other samples.

And last but not least, another thought: correlate was designed to return a cross-correlation "around lag time 0". I suspect that when doing template matching it is a bit surprising to have to specify the shift parameter, and you also want a correlation with the same length as your data vector. I have two suggestions to deal with this:

  1. make shift optional and/or accepting strings like 'same' and 'valid'.
  2. introduce a completely new function correlate_template(data, template, mode=, demean=, normalize=)

What are your thoughts/opinions?
@krischer

Member

krischer commented Jan 16, 2018

  1. make shift optional and/or accepting strings like 'same' and 'valid'.
  2. introduce a completely new function correlate_template(data, template, mode=, demean=, normalize=)

What are your thoughts/opinions?

I don't have strong opinions about this but I slightly prefer the first version. But if you think this use case warrants a second function I'd personally also be fine with it.

@trichter


Member

trichter commented Jan 16, 2018

I think I slightly prefer the second option.

OK, I've been doing some coding on the _normxcorr function and I finally came up with:

import numpy as np
import scipy.signal


def _pad_zeros(a, num, num2=None):
    """Pad num zeros at both sides of array a"""
    if num2 is None:
        num2 = num
    hstack = [np.zeros(num), a, np.zeros(num2)]
    return np.hstack(hstack)

def _window_sum(data, window_len):
    """Rolling sum of data over windows of length window_len."""
    window_sum = np.cumsum(data)
    window_sum = window_sum[window_len:] - window_sum[:-window_len]
    return window_sum

def correlate_template(data, template, mode='valid', demean=True, normalize='full', domain='freq'):
    """Normalized cross-correlation of data with a (shorter) template."""
    N = len(template)
    assert len(data) >= N
    if demean:
        template = template - np.mean(template)
        data = data - np.mean(data)
    if normalize is not None:
        template = template / (np.sum(template ** 2)) ** 0.5
    if domain == 'time':
        c = scipy.signal.correlate(data, template, mode=mode)
    else:
        c = scipy.signal.fftconvolve(data, template[::-1], mode=mode)
    if normalize is None:
        denominator = 1
    elif normalize == 'naive':
        denominator = (np.sum(data ** 2)) ** 0.5
        if denominator == 0.:
            denominator = 1
    elif normalize == 'full':
        pad = len(c) - len(data) + N
        if mode == 'same':
            pad1, pad2 = (pad + 2) // 2, (pad-1) // 2
        else:
            pad1, pad2 = (pad + 1) // 2, pad // 2
        data = _pad_zeros(data, pad1, pad2)
        if demean:
            denominator = (_window_sum(data ** 2, N) - _window_sum(data, N) ** 2 / N) ** 0.5
        else:
            denominator = (_window_sum(data ** 2, N)) ** 0.5
        denominator[denominator == 0.] = 1
    return c / denominator

Did not push to this branch, because I didn't want to overwrite @calum-chamberlain's code.

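For completeness, a quick usage sketch of the function above (synthetic data, with the template cut out of the data so the correlation peak should land at the known offset):

import numpy as np

# assumes _pad_zeros, _window_sum and correlate_template from above are defined
np.random.seed(42)
data = np.random.randn(10000)
template = data[3000:3500]               # cut the template out of the data itself
cc = correlate_template(data, template)  # defaults: mode='valid', normalize='full'
print(np.argmax(cc), cc.max())           # -> 3000 and a maximum of ~1.0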

@krischer


Member

krischer commented Jan 16, 2018

I think I slightly prefer the second option.

Then go for it.

I'll let you and @calum-chamberlain sort out the details of this branch :)

@calum-chamberlain


Contributor

calum-chamberlain commented Jan 16, 2018

I meant something else with the one comment. In line 155

np.divide(move_mean, len(long_data), out=move_mean)

you have to substitute len(long_data) with len(short_data) and some problems will go away.

So that would work, but it would be less memory efficient for the domain='time' option, because the short data would be padded to the length of the long data - for correlations in the frequency domain this is done anyway, I think, so it would make no difference there. I would prefer to keep it the way I have it now (with the shorter of (a, b) set as short_data), but I should probably document this behaviour and say something like:

"""
    :note:
         For ``normalization='full'`` the shorter of ``(a, b)`` will be slid through the long 
         data, such that the zeroth element of the returned array corresponds to the
         correlation of the short input array with the first n elements of the long input 
         array where n is the length of the short input array.
"""

Opinions welcome on that, sounds a little opaque to me.


Also, I think I am not correct with the point, that the correlations with different normalizations are the same for len(a)==len(b). But you can check that the full_xcorr == naive_xcorr for the mid sample and that full_xcorr > naive_xcorr for all other samples.

Yup, have changed that test to shift=0 locally, happy days.


  1. make shift optional and/or accepting strings like 'same' and 'valid'.
  2. introduce a completely new function correlate_template(data, template, mode=, demean=, normalize=)

What are your thoughts/opinions?

I mildly prefer option 2 as well, but with some doc-string in signal.cross_correlation.correlate pointing users to the right place. I'm not keen on giving the user the option of naive normalization with template matching though; mostly I think it opens up ways for people to get very wrong answers.


@trichter how do you want to proceed? Happy to reformat my stuff here with the correlate_template function you suggested, or do you want me to push the minor changes I made here so you can add that change?

@trichter


Member

trichter commented Jan 17, 2018

I meant something else with the one comment. In line 155
np.divide(move_mean, len(long_data), out=move_mean)
you have to substitute len(long_data) with len(short_data) and some problems will go away.

So that would work, but should be less memory efficient for the domain='time' option, because the short data would be padded to the length of the long data

Still, we are talking about different issues here. It just has to be replaced in this one line to get the algorithm correct.

@trichter how do you want to proceed? Happy to reformat my stuff here with the correlate_template function you suggested, or do you want me to push the minor changes I made here so you can add that change?

OK, I am going to refactor everything a bit and push my changes to your branch.

trichter added some commits Jan 17, 2018

trichter added some commits Jan 19, 2018

@trichter


Member

trichter commented Jan 19, 2018

I am finished for today. It's ready from my point of view. Let's see what CI and the docs buildbot say. Feel free to tune docs and code.

@trichter


Member

trichter commented Jan 22, 2018

Ready for review.

@krischer

This is looking pretty good by now. I'll trust the two of you with the mathematical details and only have some very minor comments left.

Could you also add a small regression test for normalize=True, which I think is not covered by the current test suite?
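Something as small as this might do - a sketch only, assuming the legacy boolean normalize=True keeps mapping to the 'naive' behaviour:

import numpy as np

from obspy.signal.cross_correlation import correlate


def test_normalize_true_matches_naive():
    # Regression guard: the legacy boolean flag should keep producing the
    # same result as the equivalent string value.
    np.random.seed(123)
    a = np.random.randn(100)
    b = np.random.randn(100)
    cc_bool = correlate(a, b, shift=10, normalize=True)
    cc_str = correlate(a, b, shift=10, normalize='naive')
    np.testing.assert_allclose(cc_bool, cc_str)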

One other small request I have: Could you add something about this PR to this page here (https://github.com/obspy/obspy/wiki/What's-New-in-ObsPy-1.2) - then we don't have to scramble to create this page at release time.

>>> b = a[:-2]
>>> cc = correlate(a, b, 2)
>>> cc
array([ 0.62390515, 0.99630851, 0.62187106, -0.05864797, -0.41496995])


@krischer

krischer Jan 22, 2018

Member

I feel like this might be a bit fragile depending on the OS/scipy version, ... but if CI passes I'm fine with it for now. I'll try to get the docker bots online again this week.

@trichter


Member

trichter commented Jan 22, 2018

Thanks for the review! I implemented the hints. Because the test case is quite extensive, I dared to switch to in-place operations as much as possible.
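(For anyone unfamiliar with the pattern: "in-place" here means reusing existing buffers via numpy's out= arguments instead of allocating temporaries, e.g.:)

import numpy as np

data = np.random.randn(1_000_000)
mean = data.mean()
# allocates a temporary array the size of data:
demeaned = data - mean
# reuses the existing buffer instead:
np.subtract(data, mean, out=data)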

@megies


Member

megies commented Jan 22, 2018

One other small request I have: Could you add something about this PR to this page here (https://github.com/obspy/obspy/wiki/What's-New-in-ObsPy-1.2) - then we don't have to scramble to create this page at release time.

We might want to add this to the GitHub PR template checklist, so that major changes get added to the "What's New" page.

@trichter trichter changed the title from Add 'full' normalized cross-correlations to Add correlate_template() function with 'full' normalization; deprecate 'domain' keyword in correlate() in favor of 'method' keyword Jan 22, 2018

@calum-chamberlain


Contributor

calum-chamberlain commented Jan 22, 2018

Looks good to me, thanks @trichter for your efforts here.

@trichter

Member

trichter commented Jan 23, 2018

Thank you @calum-chamberlain

@krischer


Member

krischer commented Jan 23, 2018

Thanks a lot @calum-chamberlain and @trichter for all your work on this :)

@krischer krischer merged commit 716103a into master Jan 23, 2018

2 of 5 checks passed:

  • docs-buildbot: build succeeded, but there are warnings/errors
  • continuous-integration/travis-ci/pr: the Travis CI build is in progress
  • docker-testbot: docker testbot results not available yet
  • ci/circleci: tests passed on CircleCI
  • continuous-integration/appveyor/pr: AppVeyor build succeeded

@krischer krischer deleted the normalized_correlation branch Jan 23, 2018

@trichter

Member

trichter commented Jan 24, 2018

@calum-chamberlain Using your notebook I compared the performance of the new implementation with bottleneck. It is only marginally slower. The original numpy method failed with a memory error using data corresponding to 1 day sampled at 100Hz.

import numpy as np
from scipy import signal
from obspy.signal.cross_correlation import correlate_template
# (normxcorr below is the bottleneck-based implementation from the benchmarking notebook)

indices = np.arange(0, 24 * 3600, 0.01)  # a day of data sampled at 100 Hz
data = signal.gausspulse(indices - 2000, fc=0.5)
data += 0.3 * np.random.randn(len(data))
template_start, template_stop = (198000, 202000)  # let's use a slightly longer template too, which means a longer rolling window
template = data[template_start: template_stop]
# no normalization
%timeit correlate_template(data, template, normalize=None)
%memit correlate_template(data, template, normalize=None)
1.07 s ± 4.18 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
peak memory: 967.45 MiB, increment: 532.99 MiB
# normalization with bottleneck
%timeit normxcorr(data, template, (len(data) // 2) - (len(template) // 2), demean=True, domain='freq', norm_method='bn')
%memit normxcorr(data, template, (len(data) // 2) - (len(template) // 2), demean=True, domain='freq', norm_method='bn')
1.16 s ± 7.52 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
peak memory: 1000.42 MiB, increment: 565.96 MiB
# normalization with np.cumsum
%timeit correlate_template(data, template)
%memit correlate_template(data, template)
1.36 s ± 7.16 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
peak memory: 901.53 MiB, increment: 467.07 MiB