
The cache should give out "leases" if same URL is requested concurrently #33

Closed
Kuzmin opened this issue Jul 14, 2016 · 15 comments

@Kuzmin commented Jul 14, 2016

Right now, if the same URL is requested concurrently, both requests bypass the cache and drop into the express route. I think it would be preferable if only the request seen first by express dropped into the route, while the second request awaited the response from the route and returned that.

I'm thinking it should be possible to create a shared promise per URL that, once resolved, returns the JSON to all waiting clients. This way, 1000 concurrent requests would look like a single request to the underlying app. That seems preferable to the current behaviour.

This becomes a significant problem if you have a relatively short cache duration with a lot of concurrent requests, since every cache-duration seconds your app will slow to a crawl before the new cache keys get inserted.
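
Something along these lines is what I have in mind — a rough sketch only, where `fetchFresh` is a made-up stand-in for whatever actually produces a route's JSON:

```js
// Rough sketch: coalesce concurrent requests for the same URL behind
// one shared promise. `fetchFresh` is a hypothetical stand-in for
// whatever produces the route's JSON.
const inflight = new Map(); // url -> promise resolving to the JSON body

function coalesce(fetchFresh) {
  return async (req, res) => {
    const key = req.originalUrl;
    if (!inflight.has(key)) {
      // First request in does the real work; clear the slot afterwards
      // so the next cache miss starts a fresh computation.
      const promise = fetchFresh(req).finally(() => inflight.delete(key));
      inflight.set(key, promise);
    }
    // Every concurrent request just awaits the same promise.
    try {
      res.json(await inflight.get(key));
    } catch (err) {
      res.status(500).end();
    }
  };
}

// e.g. app.get('/slow', coalesce(async (req) => queryTheDatabase(req)));
```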

@kwhitley (Owner)

Very solid idea @Kuzmin... if you want to implement a promise-based solution and submit a PR, I'd be happy to review and merge. Otherwise, I'll add this to the list!

The core of this lib is by now quite dated and needs a complete rewrite (with or without backwards compatibility hooks)... this is definitely something to add to the requirements for that at the very least!

@Kuzmin (Author) commented Sep 23, 2016

Did a test implementation of this in a package of my own, since I felt I didn't need all the bells and whistles that this package has. It seems to work rather well, but I really need to add more test cases.

The code can be found here: https://github.com/Kuzmin/node-cache-middleware

Feel free to rip the relevant parts. If you want, I can do these changes when I get some free time.

@kwhitley (Owner)

Awesome, thanks @Kuzmin - I agree re: the package... the upcoming major release is going to be much leaner and will allow more easily for user extension... I'll let everyone come up with their own flavor of modding it that way.

@svozza (Contributor) commented Jan 18, 2017

This feature would be very useful for me, and I'd be more than willing to have a crack at it and submit a PR, although I'd rather wait until #62 has been merged before I start.

@Kuzmin (Author) commented Jan 18, 2017

Hey @svozza, I don't know what features you need, but you could take a look at my implementation a bit higher up. It has fewer features than this project, but it might be enough for what you need until apicache has this added! I've tested my implementation in production for a few months now and it handles load nicely :)

@Kuzmin (Author) commented Jan 18, 2017

Actually, looking again, it seems like apicache has added a bunch of features since the last time I was here, so swapping it out might not be as feasible as I thought. But yeah, just wanted to show the option.

@svozza (Contributor) commented Jan 18, 2017

Yes, I did see it, and it looks easy enough to create an adapter for Redis, but I'm quite happy with apicache at the moment. What sort of traffic are you handling in production?

@kwhitley (Owner)

@Kuzmin @svozza my main concern is making sure the API/interface remains unchanged to prevent legacy breakage. Given the number of downloads a month, that's kind of an issue at this point. I do plan a complete rewrite for version 1.0.0 but would still like to continue support of most of the v0.X.X interface if possible...

That said, I love the lease concept to prevent duplicate parallel calls, but I'm not sure (off hand, without much thought, mind you) how to accomplish this without callbacks or promises at this point. Thoughts?

@svozza (Contributor) commented Jan 19, 2017

Yeah, my only consideration is preventing dogpiling; I didn't envisage any API changes, so it should be completely transparent to the user.

@svozza (Contributor) commented Jan 19, 2017

Ah, I see what you mean about avoiding callbacks: the in-memory version is synchronous. We could support this feature only for Redis, given that the node-redis client has an asynchronous API.
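
For the Redis case, a lease could presumably be handed out with SET NX — a rough sketch, assuming the callback-style node-redis client (the key prefix is made up):

```js
// Rough sketch of a Redis-backed lease using SET NX PX: only the caller
// that manages to set the key holds the lease and should recompute the
// value; everyone else waits for the cached result to appear.
const redis = require('redis').createClient();

function acquireLease(key, ttlMs, cb) {
  // NX = only set if absent; PX = auto-expire, so a crashed lease
  // holder can't block the key forever. 'lease:' prefix is made up.
  redis.set('lease:' + key, '1', 'NX', 'PX', ttlMs, (err, reply) => {
    if (err) return cb(err);
    cb(null, reply === 'OK'); // true => we hold the lease
  });
}
```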

@Kuzmin (Author) commented Jan 19, 2017

Hey @kwhitley! You could check my implementation: it just passes the first request to a path through, and stores the subsequent calls in an array. When the first call completes, it iterates through this array and sends the response out to those waiting. It doesn't use promises or callbacks any more heavily than your current implementation.

Does this make any sense? I could try expanding the explanation if not. (Just in a bit of a time crunch personally right now.)

@Kuzmin (Author) commented Jan 19, 2017

To clarify regarding callbacks: the only callbacks I have used are the ones that hook into the res.end/res.write events; the rest of the code just moves res objects between arrays in a synchronous fashion. So it is "synchronous" in that sense.
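
In rough outline it looks something like this (heavily simplified — status codes, headers, and res.write buffering are all omitted):

```js
// Heavily simplified sketch of the array-based approach: park the res
// objects of concurrent requests, and replay the first response to
// them when it finishes.
const waiting = {}; // url -> array of parked res objects

function lease(req, res, next) {
  const key = req.originalUrl;
  if (waiting[key]) {
    waiting[key].push(res); // this URL is already being computed: park it
    return;
  }
  waiting[key] = [];
  const originalEnd = res.end;
  res.end = function (body) {
    // First response finished: replay the body to every parked client,
    // then release the key so later cache misses go through again.
    waiting[key].forEach((parked) => parked.end(body));
    delete waiting[key];
    return originalEnd.call(this, body);
  };
  next(); // only the first request reaches the real route
}
```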

@kwhitley (Owner)

This will be rolled into v1.x.

@matteinn

Hey @kwhitley, is this still planned for future releases?

@steveocrypto

Any chance this update is coming soon, @kwhitley? I'm running into the same issue where concurrent requests bypass the cache, resulting in a massive number of duplicate keys in the index.
