
use rfft when appropriate in autocorrelate, fixes #1526 (#1565)

Merged: 1 commit into main from autocorrelate-1526, Sep 9, 2022

Conversation


@bmcfee bmcfee commented Sep 6, 2022

Reference Issue

Fixes #1526

What does this implement/fix? Explain your changes.

This PR rewrites autocorrelate to use (i)rfft when provided with real-valued inputs. This should cut the memory usage approximately in half.

Any other comments?

This PR is definitely an improvement, but I'm not 100% satisfied yet, as noted in the discussion thread of #1526. We are still computing a full inverse DFT and then cropping the result, rather than computing only the lags that are actually needed. Perhaps this is the best we can do, given how the fft API is defined.
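For context, the approach the PR takes can be sketched roughly as follows. This is a minimal illustration of FFT-based autocorrelation for real input (the Wiener–Khinchin approach), not librosa's actual implementation; the function name and padding choice here are my own:

```python
import numpy as np

def autocorrelate_real(y, max_size=None):
    # Sketch only: FFT-based autocorrelation of a real-valued signal.
    n = len(y)
    # Zero-pad to 2n so the circular correlation matches the linear one.
    n_fft = 2 * n
    spec = np.fft.rfft(y, n=n_fft)     # real FFT keeps only ~half the bins
    power = spec * np.conj(spec)       # power spectrum
    ac = np.fft.irfft(power, n=n_fft)  # full inverse transform, then crop
    if max_size is None:
        max_size = n
    return ac[:max_size]

y = np.random.randn(1024)
ac = autocorrelate_real(y, max_size=64)
# Lag 0 equals the signal energy, up to floating-point error:
assert np.allclose(ac[0], np.sum(y**2))
```

Note the last two lines: the `irfft` produces all `n_fft` output samples even when only `max_size` lags are kept, which is the remaining inefficiency discussed above.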

@bmcfee bmcfee added the enhancement Does this improve existing functionality? label Sep 6, 2022
@bmcfee bmcfee added this to the 0.10.0 milestone Sep 6, 2022

bmcfee commented Sep 6, 2022

Quick memray benchmark results with the following snippet:

```python
import numpy as np
import librosa

# 60 seconds of noise at a 22050 Hz sampling rate
x = np.random.randn(60 * 22050)
librosa.autocorrelate(x, max_size=22050)
```

Current main:

Start time: 2022-09-06 15:09:05.431000
End time: 2022-09-06 15:09:13.142000
Total number of allocations: 552301
Total number of frames seen: 12694
Peak memory usage: 547.2 MB
Python allocator: pymalloc

This PR:

Start time: 2022-09-06 15:07:34.459000
End time: 2022-09-06 15:07:41.300000
Total number of allocations: 551834
Total number of frames seen: 12703
Peak memory usage: 261.4 MB
Python allocator: pymalloc
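The roughly 2x reduction in peak memory is consistent with a back-of-envelope estimate (my own reasoning, not memray output): for a length-n real signal, a full complex FFT stores n complex128 bins, while rfft stores only n//2 + 1, so the dominant spectral buffers shrink by about half:

```python
# Rough size comparison of the dominant spectral buffers; the exact FFT
# length used internally by autocorrelate may differ (an assumption here).
n = 60 * 22050                     # same signal length as the snippet above
full_fft_bytes = n * 16            # n complex128 bins
rfft_bytes = (n // 2 + 1) * 16     # n//2 + 1 complex128 bins
print(full_fft_bytes / rfft_bytes)  # ~2.0
```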


codecov bot commented Sep 6, 2022

Codecov Report

Base: 98.75% // Head: 98.75% // Project coverage unchanged (+0.00%) 🎉

Coverage data is based on head (64e11bd) compared to base (dc0ed20).
Patch coverage: 100.00% of modified lines in pull request are covered.

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #1565   +/-   ##
=======================================
  Coverage   98.75%   98.75%           
=======================================
  Files          32       32           
  Lines        4028     4029    +1     
=======================================
+ Hits         3978     3979    +1     
  Misses         50       50           
| Flag | Coverage Δ |
|------|------------|
| unittests | 98.75% <100.00%> (+<0.01%) ⬆️ |


| Impacted Files | Coverage Δ |
|----------------|------------|
| librosa/core/audio.py | 98.92% <100.00%> (+<0.01%) ⬆️ |




bmcfee commented Sep 6, 2022

After a bit of homework and reading https://www.fftw.org/pruned.html - I'm content to leave this one as it currently is. Hacking the inverse to be more memory-efficient seems like more hassle than it's worth at this point.
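To illustrate the trade-off discussed on that page (my own example, not librosa code): when only K of N inverse-DFT outputs are needed, the alternative to "transform everything, then crop" is to evaluate just those K outputs directly at O(N·K) cost, which only pays off when K is very small relative to N:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 256, 8
spectrum = np.fft.rfft(rng.standard_normal(N))

# Strategy 1: full inverse transform, then crop (what the PR does).
full = np.fft.irfft(spectrum, n=N)[:K]

# Strategy 2: evaluate only K outputs of the inverse DFT directly, O(N*K).
freqs = np.arange(len(spectrum))
n_out = np.arange(K)[:, None]
basis = np.exp(2j * np.pi * n_out * freqs / N)
# Hermitian symmetry: interior rfft bins count twice; DC and Nyquist once.
weights = np.full(len(spectrum), 2.0)
weights[0] = 1.0
if N % 2 == 0:
    weights[-1] = 1.0
direct = (basis @ (weights * spectrum)).real / N
assert np.allclose(full, direct)
```

For autocorrelate, `max_size` (K) is often a sizable fraction of the signal length, so the O(N log N) full transform plus crop is the pragmatic choice.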


bmcfee commented Sep 9, 2022

Unless there are objections, I'll merge this today.

@lostanlen lostanlen left a comment


Very nice. Thanks for this!

@lostanlen lostanlen merged commit ed422f8 into main Sep 9, 2022
@bmcfee bmcfee deleted the autocorrelate-1526 branch September 28, 2022 18:26
Linked issue: Improve autocorrelation implementation (#1526)