
Option to limit concurrent invocations? #43

Closed
mmkal opened this issue Mar 4, 2022 · 4 comments

Comments

@mmkal

mmkal commented Mar 4, 2022

Hi! Is it possible to use p-memoize to optimistically memoize promises when several are kicked off before any results have come back? Here's the scenario we're hitting:

const getData = async params => {
  const data = await getDataFromSomeAPI(params) // quite slow
  return data.foo
}

export default pMemoize(getData, { cacheKey: JSON.stringify })

This function runs on our server side and ends up being hit by Next.js as part of static-site generation (and during ISR). That means it gets called hundreds of times simultaneously, once per page being generated. The API we're using has a pretty aggressive rate limit, so we're hoping to use pMemoize to get around that, since all of the pages hit the backend with the same parameters.

The workaround we're trying now:

import pMemoize from 'p-memoize'
import pLimit from 'p-limit'

const memoize = (fn, options) => {
  const getLimiter = pMemoize(async (...args) => pLimit(1), { cacheKey: JSON.stringify })
  const memoized = pMemoize(fn, { cacheKey: JSON.stringify, ...options })
  return async (...args) => {
    const limiter = await getLimiter(...args)
    return limiter(() => memoized(...args))
  }
}

Effectively, this means that memoizable calls will wait until the "first" attempt has settled and use the memoized result if it succeeded. If the first attempt failed, they'll run the full function again.

This seems to work, but it's pretty awkward to write and involves adding the p-limit dependency manually. Is there a better/recommended way of doing this? If not, maybe there could be an extra option for it?
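For reference, the behaviour being asked for can be sketched without either dependency: cache the in-flight promise itself so concurrent callers share one invocation, and evict the entry on rejection so a failed first attempt gets retried. This is a standalone illustration, not p-memoize's actual implementation; the name memoizePromise is made up.

```javascript
// Sketch only (not p-memoize's code): memoize the promise itself so calls
// made while the first invocation is still in flight reuse that promise.
const memoizePromise = fn => {
  const cache = new Map()
  return (...args) => {
    const key = JSON.stringify(args)
    if (!cache.has(key)) {
      const promise = Promise.resolve(fn(...args)).catch(error => {
        // Evict on failure so the next call retries instead of caching the rejection.
        cache.delete(key)
        throw error
      })
      cache.set(key, promise)
    }
    return cache.get(key)
  }
}
```

With this, hundreds of concurrent calls with the same arguments trigger a single upstream request, which is exactly the rate-limit-friendly behaviour described above.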

@sindresorhus
Owner

sindresorhus commented Mar 20, 2022

I believe this is how it worked before #32.

Also see: #34

@Richienb
Contributor

Implementation:

Use a queue to keep track of deferred promises returned from each invocation. Every time an item is added, check whether the number of currently running tasks is low enough to start it right away; otherwise, it will be picked up as soon as a running task finishes. Since the original intention of the refactor was to allow communication with a database, should the global number of running tasks be stored there?

Do executions that hit the cache count towards the operation count? Does checking the cache in the first place count?
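The queue described above could be sketched roughly like this (a standalone illustration under assumed semantics, not the proposed patch; createLimiter is a made-up name):

```javascript
// Rough sketch of the queue idea: a task starts immediately if fewer than
// `concurrency` tasks are running; otherwise it waits until one finishes.
const createLimiter = concurrency => {
  const queue = []
  let running = 0
  const next = () => {
    if (running < concurrency && queue.length > 0) {
      running++
      const { task, resolve, reject } = queue.shift()
      Promise.resolve()
        .then(task)
        // A finished task frees a slot, so check the queue again.
        .finally(() => {
          running--
          next()
        })
        .then(resolve, reject)
    }
  }
  return task => new Promise((resolve, reject) => {
    queue.push({ task, resolve, reject })
    next()
  })
}
```

This is essentially what p-limit provides; the open questions above (whether cache hits count against the limit, and where the running count lives if state is in a database) are design decisions on top of it.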

@Richienb
Contributor

Richienb commented Jul 5, 2022

Your use case appears to be the default behaviour as of v7.

@mmkal
Author

mmkal commented Jul 5, 2022

Ok, great. Not quite ready to upgrade, but I can reopen or make a new issue when I try it out.

@mmkal mmkal closed this as completed Jul 5, 2022