
ENH: Allow user configuration of pocketfft's plan caches #12512

Closed
wants to merge 4 commits

Conversation

peterbell10 (Member) commented on Jul 8, 2020

Reference issue

Closes gh-12297

What does this implement/fix?

This allows the user to manage scipy.fft's plan caches through three utility functions (a usage sketch follows the list):

  • set_plan_cache_size(new_size)
  • get_plan_cache_size()
  • clear_plan_cache()
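
A minimal usage sketch, assuming the functions are exposed at the top level of scipy.fft as proposed here (this PR was closed unmerged, so none of this is in released SciPy):

```python
import numpy as np
import scipy.fft

print(scipy.fft.get_plan_cache_size())  # current per-cache limit (plans per plan type)
scipy.fft.set_plan_cache_size(32)       # allow up to 32 cached plans per plan type

x = np.random.rand(1024)
scipy.fft.fft(x)                        # the length-1024 plan is now cached for reuse

scipy.fft.clear_plan_cache()            # drop every cached plan to reclaim memory
```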

Limitations:

  • Only controls the number of cached plans, not the amount of memory they use.
  • For simplicity, all plan types share the same cache size.

Additional information

The C++ implementation introduces two new classes, PlanCache and CacheManager. A PlanCache handles LRU caching for a single plan type, similar to what get_plan did before. PlanCache instances are constructed within the CacheManager, which keeps an internal reference to every cache. This way I can query and manipulate the caches uniformly by iterating over the CacheManager's internal list of caches.

Note that inside of any given transform, get_plan() interacts directly with the PlanCache and doesn't need to go through the CacheManager, so the CacheManager exists only for configuration purposes.
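
A rough sketch of that relationship, rendered in Python for readability (the actual implementation is C++; only the names PlanCache, CacheManager, and get_plan come from this PR, and every other identifier is a placeholder):

```python
from collections import OrderedDict

class CacheManager:
    """Keeps a reference to every PlanCache so all caches can be
    queried and manipulated uniformly."""
    def __init__(self):
        self.caches = []

    def register(self, cache):
        self.caches.append(cache)

    def set_size(self, new_size):
        for cache in self.caches:
            cache.resize(new_size)

    def clear(self):
        for cache in self.caches:
            cache.clear()

manager = CacheManager()

class PlanCache:
    """LRU cache for a single plan type; registers itself with the
    global CacheManager on construction."""
    def __init__(self, max_size=16):
        self.max_size = max_size
        self.plans = OrderedDict()
        manager.register(self)

    def get_plan(self, key, make_plan):
        # Transforms call this directly, bypassing the CacheManager.
        if key in self.plans:
            self.plans.move_to_end(key)      # mark as most recently used
            return self.plans[key]
        plan = make_plan(key)
        self.plans[key] = plan
        if len(self.plans) > self.max_size:
            self.plans.popitem(last=False)   # evict the least recently used
        return plan

    def resize(self, new_size):
        self.max_size = new_size
        while len(self.plans) > new_size:
            self.plans.popitem(last=False)

    def clear(self):
        self.plans.clear()

# One cache per plan type, e.g. per (transform, dtype) combination:
c2c_cache = PlanCache()
r2c_cache = PlanCache()
manager.set_size(32)  # resizes both caches uniformly
```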

TODO:

  • Add tests
  • Expand documentation

cc @mreineck, @larsoner

peterbell10 added the enhancement and scipy.fft labels on Jul 8, 2020
peterbell10 (Member, Author) commented on Jul 12, 2020

I've added a new customisation point, set_plan_cache_max_memsize, which limits the maximum memory used for the twiddle factors of a single plan; anything above that size won't be cached. I've set the default to 2 MB, which, together with the 16-plan limit per cache and the number of internal caches, gives an absolute upper bound of 576 MB taken up by the caches. To reach that bound you would need to use every combination of transform type (fft, rfft, dct and dst) and data type (np.float, np.double, np.longfloat).
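
Purely for illustration, a hedged sketch of this knob under the proposed API; set_plan_cache_max_memsize and the 2 MB default are taken from the comment above, while the figure of 18 internal caches is only inferred from the quoted arithmetic (576 MB / (2 MB × 16 plans) = 18):

```python
import scipy.fft  # assumes the API proposed in this PR; not in released SciPy

# Cap the twiddle-factor memory cached for any single plan at 2 MB
# (the proposed default); larger plans simply aren't cached.
scipy.fft.set_plan_cache_max_memsize(2 * 1024**2)

# Worst-case memory held by the caches, per the comment above:
#   per-plan cap (MB) * plans per cache * internal caches (inferred)
print(2 * 16 * 18)  # 576 (MB)
```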

mreineck (Contributor) commented

I like the approach, especially that the cache size is limited by default, so crashes like the one in #12297 are avoided without users having to learn about the extended interface.

larsoner (Member) commented

Agreed, this sounds like a good approach.

leofang added a commit to leofang/cupy that referenced this pull request on Aug 5, 2020
leofang mentioned this pull request on Aug 6, 2020