The cache should give out "leases" if same URL is requested concurrently #33
Very solid idea @Kuzmin... if you want to implement a promise-based solution and submit a PR I'd be happy to review and merge. Otherwise, I'll add this to the list! The core of this lib is by now quite dated and needs a complete rewrite (with or without backwards compatibility hooks)... this is definitely something to add to the requirements for that at the very least!
Did a test implementation of this in a separate package, since I felt I didn't need all the bells and whistles that this package has. It seems to work rather well, but I really need to add more test cases. The code can be found here: https://github.com/Kuzmin/node-cache-middleware Feel free to rip the relevant parts. If you want, I can do these changes when I get some free time.
Awesome, thanks @Kuzmin - I agree re: the package... upcoming major release is going to be so much leaner, and just more easily allow for user extension... I'll let everyone come up with their own flavor of modding it that way.
This feature would be very useful for me, and I'd be more than willing to have a crack at it and submit a PR. Although, I'd rather wait until #62 has been merged before I start.
Hey @svozza, I don't know what features you need, but you could take a look at my implementation a bit higher up. It has fewer features than this project, but might be enough for what you need until …
Actually, looking again, it seems like …
Yes, I did see it, and it looks easy enough to create an adapter for Redis, but I'm quite happy with …
@Kuzmin @svozza my main concern is making sure the API/interface remains unchanged to prevent legacy breakage. Given the number of downloads a month, that's kind of an issue at this point. I do plan a complete rewrite for version 1.0.0 but would still like to continue support of most of the v0.X.X interface if possible... That said, I love the lease concept to prevent duplicate parallel calls, but not sure (off hand without much thought mind you) how to accomplish this without callbacks or promises at this point. Thoughts?
Yeah, my only consideration is preventing dogpiling; I didn't envisage any API changes, and it should be completely transparent to the user.
Ah, I see what you mean about doing it without callbacks; the in-memory version is synchronous. We could only support this feature for Redis, given that the …
Hey @kwhitley! You could check my implementation: it just passes the first request to a path through, and then stores the subsequent calls in an array. As the first call completes, it iterates through this array and sends out the response to each of them. It doesn't use promises or callbacks any more heavily than your current implementation. Does this make any sense? I could try expanding the explanation if not. (Just in a bit of a time crunch personally right now.)
To clarify regarding callbacks: The only callbacks that I have used are the ones that hook into the …
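The waiter-array approach described above can be sketched in callback style. This is a minimal illustration of the idea, not code from node-cache-middleware or apicache; the names `pending`, `getCached`, and `produce` are hypothetical.

```javascript
// Illustrative sketch (hypothetical names): the first request for a key
// proceeds to produce the response; later requests for the same key are
// queued in an array and flushed when the first one completes.
const pending = {};

function getCached(key, produce, done) {
  if (pending[key]) {
    pending[key].push(done); // someone is already producing; just wait
    return;
  }
  pending[key] = [done]; // first caller: start producing
  produce((err, value) => {
    const waiters = pending[key];
    delete pending[key]; // release before fanning out
    waiters.forEach((cb) => cb(err, value)); // respond to every waiter
  });
}
```

Because the queue is consulted synchronously before any async work starts, no two concurrent callers for the same key can both end up producing.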
This will be rolled into v1.x |
Hey @kwhitley, is this still planned for future releases? |
Any chance this update is coming soon? @kwhitley Running into the same issue, where concurrent requests bypass the cache, resulting in massive numbers of duplicate keys in the index.
Right now, if you request the same URL concurrently, both requests will bypass the cache and drop into the express route. I think it would be preferable if only the request seen first by express drops into the route, while the second request awaits the response from the route and returns that.
I'm thinking that it should be possible to create a shared promise per URL that, once resolved, returns JSON to the clients. This way, 1000 concurrent requests would look like 1 request to the underlying app. I'm thinking that this is preferable to the current behaviour.
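A shared promise per URL could look roughly like the following. This is a sketch of the general technique only, not apicache's API; `inflight` and `fetchOnce` are hypothetical names, and `loader` stands in for whatever actually invokes the underlying route.

```javascript
// Illustrative sketch (hypothetical names): concurrent requests for the
// same key share one in-flight promise instead of each hitting the route.
const inflight = new Map();

function fetchOnce(key, loader) {
  // A request for this key is already in flight: share its promise.
  if (inflight.has(key)) return inflight.get(key);
  const p = Promise.resolve()
    .then(loader) // invoke the underlying route/work exactly once
    .finally(() => inflight.delete(key)); // release the "lease" when done
  inflight.set(key, p);
  return p;
}
```

Every caller that arrives while the promise is pending gets the same resolved value, so N concurrent requests collapse into one call to `loader`.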
This becomes a significant problem if you have a relatively short cache duration with a lot of concurrent requests, since every cache-duration seconds your app will slow to a crawl before new cache keys get inserted.