
TTL must be slightly longer than the time it takes for the promise to resolve. #16

Closed
garand opened this issue Dec 2, 2022 · 16 comments
Labels: awaiting feedback (On hold, waiting for the author to provide more information)

Comments

garand commented Dec 2, 2022

In my use case I was trying to set ttl: 0 and staleWhileRevalidate: Infinity so that every request after the initial one would trigger a refetch in the background. However, in my tests, if the ttl value is less than the time the promise takes to resolve, it ALWAYS refetches.

This is my example that always refetches.

  return await cachified({
    key: "test",
    cache: cache,
    getFreshValue: () =>
      new Promise((resolve) => {
        setTimeout(() => {
          console.log("resolving");
          resolve(new Date().toLocaleString());
        }, 5000);
      }),
    ttl: 3000,
    staleWhileRevalidate: Infinity,
  });

And if I change the promise timeout to 2000, it will always return the cached value and refetch in the background (after the ttl).

  return await cachified({
    key: "test",
    cache: cache,
    getFreshValue: () =>
      new Promise((resolve) => {
        setTimeout(() => {
          console.log("resolving");
          resolve(new Date().toLocaleString());
        }, 2000);
      }),
    ttl: 3000,
    staleWhileRevalidate: Infinity,
  });

And because of this I can't set the ttl to 0 to get my desired outcome.

Let me know if you need any additional info, thanks for this library it's exactly what I've been looking for!

Xiphe (Collaborator) commented Dec 14, 2022

Hey @garand thanks for the feedback! Glad the library is helpful to you 👍

I'm afraid I don't fully understand what you're trying to achieve. Could you provide a list of calls with timestamps and the expected behavior vs the observed behavior?

Heads up: I'm on vacation until January, I'll try to look for your answer within the next days but please be patient if it takes a little longer

Xiphe (Collaborator) commented Jan 9, 2023

Hi @garand. I'm back from vacation. Is this still relevant?

Xiphe added the "awaiting feedback" label on Jan 9, 2023
garand (Author) commented Jan 9, 2023

Yes! I can put together a clearer example later today if that helps.

Xiphe (Collaborator) commented Jan 9, 2023

That would be great, thank you!

garand (Author) commented Jan 9, 2023

Here is a reproduction with some comments to hopefully better illustrate what I'm seeing. https://stackblitz.com/edit/express-simple-8bmtng?file=index.js

Please let me know if you need any more details!

Xiphe (Collaborator) commented Jan 10, 2023

Hey @garand, thanks for the detailed answer, now I understand!
The behavior you're observing is actually intentional: the time it takes to get a fresh value is part of the TTL.

You should be able to observe that while the TTL is valid only one fresh value is requested.

Given your example with 5s DELAY and 3s TTL:

Call Nr | Time | Cache Resolved | Fresh Value Requested | Will resolve in | With value from call
--------|------|----------------|-----------------------|-----------------|---------------------
0       | 0s   | ⏳ pending     | ✅                    | 5s              | 0
1       | 0.5s | ⏳ pending     |                       | 4.5s            | 0
2       | 1.5s | ⏳ pending     |                       | 3.5s            | 0
3       | 4.5s | ⏳ pending     | ✅                    | 5s              | 3
4       | 6s   | ⏳ pending     |                       | 3.5s            | 3

Note how "Cache Resolved" is always pending: once getFreshValue resolves, the TTL is already over and the resolved value is never actually written to the cache implementation. I think this is what you expected to behave differently, right?

Maybe think of it like the promised fresh value is cached, not the resolved one.
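The timing can be sketched with plain timestamp arithmetic (an illustration of the behavior described above, not cachified's actual internals): with a 5s delay and a 3s ttl, the entry is already stale the moment the promise resolves.

```typescript
// Illustration only, not cachified internals: the ttl window starts
// when getFreshValue is invoked, not when its promise resolves.
const createdTime = 0;    // getFreshValue starts
const delay = 5000;       // the promise resolves after 5s
const ttl = 3000;         // the entry expires 3s after createdTime

const resolvedAt = createdTime + delay; // 5000
const expiresAt = createdTime + ttl;    // 3000

// The value arrives after its own expiry, so it never becomes a cache hit:
console.log(resolvedAt > expiresAt); // true
```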


That said, is there a good reason why you would want the TTL and the DELAY to be additive?

Xiphe (Collaborator) commented Jan 24, 2023

Closing this due to inactivity, and since I consider this more a discussion than a bug in the library. Let me know how I can be of further assistance.

TkDodo commented Feb 27, 2024

hey @Xiphe, coming to this issue from a similar requirement. We compute all our TTLs from the Cache-Control header sent back from the API call we make inside getFreshValue. The expectation is that if we get back 'Cache-Control': 'max-age=5', the value will be cached for 5 seconds after we've received the response.

In react-query, we also only start the staleTime / refetchInterval counters after data has been resolved.

Maybe think of it like the promised fresh value is cached, not the resolved one.
That said, is there a good reason why you would want the TTL and the DELAY to be additive?

The reason is mostly that you cannot control how long a request / function call takes. Maybe there are retries with exponential backoff involved, so a request can take up to 30s. If we then want the result to be cached for 30s, that time will already be over once we get the result back. I'm curious why you think this is the better default behaviour?

Xiphe (Collaborator) commented Feb 28, 2024

Hi @TkDodo - I think most of this discussion has been covered in #94. In summary, I think the default behavior of cachified has slightly different design goals compared to what you're after.

That said, the behavior you're looking for (from what I understand) can be achieved with the current APIs:

function getSomething(id: string) {
  return cachified({
    key: `something-${id}`,
    cache,
    async getFreshValue({ metadata }) {
      const response = await fetch(
        `https://jsonplaceholder.typicode.com/users/${id}`,
      );
      const data = await response.json();

      metadata.createdTime = Date.now(); // 👈 2️⃣ Start ttl from now, not from when the initial request was made      
      metadata.ttl = getTTLFromCacheControlHeader(response); // 👈 3️⃣ set to -1 when no cache-control header is present

      return data;
    },
    ttl: Infinity, // 👈 1️⃣  allow `getFreshValue` to take forever
  });
}
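For completeness, getTTLFromCacheControlHeader in the snippet above is left as an exercise; here is one possible sketch of it (hypothetical, not part of cachified) that extracts max-age in milliseconds and falls back to -1:

```typescript
// Hypothetical helper, not part of cachified: derive a ttl (in ms)
// from a response's Cache-Control header, or -1 when no max-age
// directive is present so the entry is not kept in the cache.
function getTTLFromCacheControlHeader(response: {
  headers: { get(name: string): string | null };
}): number {
  const header = response.headers.get("cache-control") ?? "";
  const match = header.match(/(?:^|,)\s*max-age=(\d+)/i);
  return match ? Number(match[1]) * 1000 : -1;
}
```

So `Cache-Control: public, max-age=5` would yield a ttl of 5000ms, while a missing header yields -1.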

TkDodo commented Feb 28, 2024

thanks @Xiphe, that's an interesting approach. I didn't know we could alter the createdTime like that 👏.

One problem I'm seeing is that requests we don't want to cache will also wind up in the cache for a short period of time. For example, if my lru-cache has a maxSize of 5 and already holds 4 entries that are in there because they have a valid ttl, and 2 more requests are fired that will get a ttl of -1 afterwards, they would evict one of the 4 "good" cache entries.

But I guess this is unavoidable, right ?

One more question to the above approach: If I set ttl to -1 when assigning it to metadata, will this immediately evict the entry from the cache?

Thank you 🙏

Xiphe (Collaborator) commented Feb 28, 2024

I didn't know we could alter the createTime like that 👏 .

It's not well documented, I guess 🤔

One problem I'm seeing is that those requests that we don't want to cache will also wind up in the cache for a short period of time. For example, if my lru-cache has a maxSize of 5, and it already has 4 entries in it that are in there because it has a valid ttl, if 2 more requests are fired that will get a ttl of -1 afterwards, they would delete one of the 4 "good cache entries".

That should™ not be the case.
Is that confirmed behavior? The in-flight pending value cache uses a Map and not the lru-cache.
When ttl is modified to -1, the original cache should never be hit.

If I set ttl to -1 when assigning it to metadata, will this immediately evict the entry from the cache?

I'm not 100% sure here... I think yes. The ttl of a previously cached value should be over at this point. An interesting edge case that we might not currently consider: what if we're in a stale-while-revalidate refresh and then set metadata.ttl = -1? Technically this should keep the old entry in the cache.
Let me know when I should double-check this behavior.

TkDodo commented Feb 29, 2024

That should™ not be the case.
Is that confirmed behavior? The in-flight pending value cache uses a Map and not the lru-cache.
When ttl is modified to -1, the original cache should never be hit.

Oh that's great. I'll try it out and confirm today.

TkDodo commented Feb 29, 2024

@Xiphe you are absolutely right - the cache is not hit if we set ttl to -1. This pattern works really well for me, thank you 🙏

garand (Author) commented Mar 1, 2024

Thanks for the additional conversation here, that's helpful for me!

For my use cases, I'm wrapping the cachified function to add some defaults. Based on the above conversation I updated it to this, which seems to do what I was initially talking about.

Are there negative impacts of this that I'm not aware of?

return cachified({
  ttl: ms("1m"),
  ...options,
  cache: lru,
  key: keyPrefix + options.key,
  async getFreshValue(context) {
    const freshValue = await options.getFreshValue(context);

    /**
     * Set the created time for the cache entry to now,
   * this allows the ttl to start from the time
   * the value was returned from the `getFreshValue` function.
     */
    context.metadata.createdTime = Date.now();

    return freshValue;
  },
});
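As a sanity check on what the createdTime reset buys (plain arithmetic, not a cachified API): resetting createdTime after the await shifts the entry's expiry by exactly the time getFreshValue took.

```typescript
// Illustration only: how resetting createdTime after getFreshValue
// resolves shifts the expiry of the cache entry.
const requestStart = 0;     // cachified is called
const fetchDuration = 5000; // getFreshValue takes 5s to resolve
const ttl = 60_000;         // 1 minute, as in the wrapper above

// Default: ttl counted from when the request started.
const defaultExpiry = requestStart + ttl; // 60_000
// With the reset: ttl counted from when the value resolved.
const resetExpiry = requestStart + fetchDuration + ttl; // 65_000

console.log(resetExpiry - defaultExpiry); // 5000, the fetch duration
```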

kentcdodds (Member) commented
If anyone wants to contribute documentation on this point that is welcome! :)

Xiphe (Collaborator) commented Mar 6, 2024

Are there negative impacts of this that I'm not aware of?

@garand nothing coming to mind. But it's strongly dependent on your use-case.
