
Don't refresh ttl on cached keys #603

Closed
chrisskilton opened this issue Sep 5, 2018 · 8 comments

Comments

@chrisskilton

I've got a basic got request set up with a Redis-backed keyv instance as the cache:

got.stream(target, {cache});
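
For reference, the Redis-backed keyv cache is set up roughly like this (a simplified sketch; the connection string and TTL are placeholders, and it assumes the @keyv/redis adapter is installed):

const got = require('got');
const Keyv = require('keyv');

// keyv picks the Redis storage adapter from the connection URI;
// the ttl option is in milliseconds.
const cache = new Keyv('redis://localhost:6379', { ttl: 60000 });
const target = 'http://localhost:3000/somepath';

got.stream(target, {cache});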

If I make a request to /somepath, the data gets cached in Redis with the TTL I set on the keyv instance, which is great.

If I make a second request to /somepath within the key's TTL, the data gets returned from the cache, which is also great.

The problem is that the second request resets the TTL in the cache, so if requests keep coming in, the data will never be cleared from the cache, which is bad (for my use case).

Is there an existing bit of config that I'm missing for this behaviour? I've tried setting max-age and various other Cache-Control directives but feel I'm missing something!

Any help would be greatly appreciated

@szmarczak
Collaborator

Neither cacheable-request nor keyv resets the TTL when retrieving an element, and Redis doesn't either.
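
You can check the keyv side in isolation with something like this (a rough sketch using the default in-memory store; the key, value and timings are arbitrary):

const Keyv = require('keyv');

// Set a key with a 3s TTL, then read it once a second.
// If get() refreshed the TTL the key would never expire, but it does:
// the value comes back undefined from roughly the 3s mark onwards.
const check = async () => {
    const keyv = new Keyv({ ttl: 3000 });
    await keyv.set('foo', 'bar');

    for (let i = 1; i <= 4; i++) {
        await new Promise(resolve => setTimeout(resolve, 1000));
        console.log(`${i}s:`, await keyv.get('foo'));
    }
};

check();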

Could you please include a full example?

@chrisskilton
Author

@szmarczak Sure thing, thanks for the response. Here's some example code:

Test server - this increments an integer and returns it as a string on every call:

const http = require('http');

let response = 0;

const server = http.createServer((req, res) => {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end(response.toString());
    response++;
});

server.listen(3000);

Test - this uses a regular keyv instance as a cache with got, with a TTL of 3000 ms:

const got = require('got');
const Keyv = require('keyv');
const cache = new Keyv({
    ttl: 3000
});

const fetch = async () => {
    const response = await got('http://localhost:3000', {cache});

    console.log('Served from cache:', response.fromCache);
    console.log('Response:', response.body);
};

const wait = () => {
    console.log('waiting 1s...');
    return new Promise(res => setTimeout(res, 1000));
};

const run = async () => {
    //first call should be a cache miss
    await fetch();
    await wait();

    //1s after first call should be a cache hit
    await fetch();
    await wait();

    //2s after first call should be a cache hit
    await fetch();
    await wait();

    //3s after first call should be a cache miss again
    await fetch();
    await wait();

    //4s should be a cache hit
    await fetch();
    await wait();

    //5s should be a cache hit
    await fetch();
    await wait();
};

run();

Each call after the initial caching resets the cached value's TTL to 3s again.

Console output:

Served from cache: false
Response: 0
waiting 1s...
Served from cache: true
Response: 0
waiting 1s...
Served from cache: true
Response: 0
waiting 1s...
Served from cache: true
Response: 0
waiting 1s...
Served from cache: true
Response: 0
waiting 1s...
Served from cache: true
Response: 0
waiting 1s...

As you can see, after the first call the cached data is never cleared unless there's a full 3s gap between requests.

@chrisskilton
Author

I've got a feeling I'm making the same assumption you did here: #284 (comment)

@szmarczak
Collaborator

Could you try Got 9.2.1?

@chrisskilton
Author

@szmarczak That version doesn't appear to ever hit the cache:

Using got 9.2.1:

Served from cache: false
Response: 0
waiting 1s...
Served from cache: false
Response: 1
waiting 1s...
Served from cache: false
Response: 2
waiting 1s...
Served from cache: false
Response: 3
waiting 1s...
Served from cache: false
Response: 4
waiting 1s...
Served from cache: false
Response: 5
waiting 1s...

@szmarczak
Collaborator

Could you try setting Cache-Control (max-age) to 3 seconds and try again with 9.2.1?

@chrisskilton
Author

Adding Cache-Control to the test server:

res.writeHead(200, {
    'Content-Type': 'text/plain',
    'Cache-Control': 'max-age=3'
});

yields:

Served from cache: false
Response: 0
waiting 1s
Served from cache: true
Response: 0
waiting 1s
Served from cache: true
Response: 0
waiting 1s
Served from cache: false
Response: 1
waiting 1s
Served from cache: true
Response: 1
waiting 1s
Served from cache: true
Response: 1
waiting 1s

Bingo. OK, so the TTL is purely there for the key/value store to be able to clean up after itself, not a way to control how long any particular response is cached. All the caching decisions are made based on the headers that come back on the resource (the response).
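
In case it helps anyone else reading this, the headers driving those decisions are easy to inspect on the got response itself (a quick sketch against the test server above):

const got = require('got');
const Keyv = require('keyv');

const cache = new Keyv({ ttl: 3000 });

(async () => {
    const response = await got('http://localhost:3000', {cache});
    console.log('Cache-Control:', response.headers['cache-control']);
    console.log('Served from cache:', response.fromCache);
})();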

Thanks for helping.

@szmarczak
Collaborator

the TTL is purely there for the key/value store to be able to clean up after itself, not a way to control how long any particular response is cached

No, it just sets the default TTL if the server didn't specify one.
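
That's keyv's own behaviour: the ttl passed to the constructor is only a fallback for set() calls that don't supply their own TTL (a small sketch to illustrate; the keys and values are arbitrary):

const Keyv = require('keyv');

(async () => {
    const keyv = new Keyv({ ttl: 3000 });

    await keyv.set('a', 1);        // no per-call TTL, so the default 3s applies
    await keyv.set('b', 2, 10000); // an explicit TTL overrides the default
})();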

There was a bug in Got and it's fixed in 9.2.1.
