Using multiple connection strings #74
Comments
I'm not sure; it's really a question for the node_redis package, as Bottleneck is simply passing your options through to that package. |
@SGrondin lots of people are asking for this, but it seems like this lib is no longer maintained. |
I've started looking into abstracting away the Redis library used by Bottleneck, to make it work with both node_redis and ioredis. |
@SGrondin not sure if it has to do with workload or your Lua scripts perhaps (it surpasses my Redis competence at this point), but I get a lot of errors. I initially thought this was related to a conflict of limiter ids not being unique across my staging/prod environments (which share the same Redis db), but even after fixing them I'm still getting these errors. I have another app with an even bigger load which used to share the same db, but it would not get these errors. According to my compose.io metrics (my Redis provider) it looks like the app is simply keeping too many connections open, but even as I scaled up my TCP portals (gateways to Redis), Bottleneck simply seems to keep opening connections and never closing them. |
Also, @SGrondin have you got any timeline for this? |
I've just started looking into it, I'll need to reproduce it before I can give a timeline. Could you answer the following questions? It will greatly help me reproduce the issue you're seeing.
Thanks! |
Hi @SGrondin, I'm a colleague of @TheGame2500. I thought I'd answer while he is on holiday, as our current Bottleneck setup is unstable. Our Redis server (with Compose.io) has a hard limit of 4000 maxclients (concurrent connections), and somehow we are creating 4 new connections a minute, meaning we have to restart the server every 18 hours to disconnect everything and start again. The server is configured to disconnect clients after 10 seconds of idle EXCEPT for pub/sub connections, so the issue could be that we're creating more and more pub/sub connections. We are creating a Group like this:
There are also a couple of single queues (not using Groups) like
We then enqueue requests like this
It's not clear from your documentation where we should be closing connections. We make ~400 requests/minute to Redis. |
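The code samples for this setup were lost in extraction; below is a hypothetical reconstruction, loosely based on the v2.7.0 snippet later in the thread. All option values, the key name, and `fetchSomething` are assumptions, not the reporter's actual code:

```javascript
const Bottleneck = require('bottleneck');

// Hypothetical Group: one limiter per key, all backed by the same Redis.
const group = new Bottleneck.Group({
  maxConcurrent: 10,
  minTime: 1100,
  datastore: 'redis',
  clientOptions: process.env.REDIS_URL
});

// Hypothetical standalone limiter (not using a Group).
const singleQueue = new Bottleneck({
  maxConcurrent: 5,
  datastore: 'redis',
  clientOptions: process.env.REDIS_URL
});

// Enqueueing a request for a given key; fetchSomething is a placeholder.
group.key('some-account-id').schedule(() => fetchSomething());
```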
Thank you for the very detailed report, it's very helpful. This is my top priority at the moment. It's hard to give a timeframe since I also have a full time job and many other responsibilities, but I'll commit to having a fix within 14 days. Thank you for trusting me and Bottleneck, reliability has always been the most important "feature" of this project. I'll keep you updated. |
@SGrondin Totally understand you have a day job too! As an update, I set the timeout for the Bottleneck group at 30 seconds, and added an expiration time for each job at 60 seconds. That seems to have fixed the problem of exceeding maximum connections for now. It seems that, since the Bottleneck settings are stored in Redis, we needed to clear the stored settings for the new ones to take effect. And the original request to use multiple connection strings still stands. |
Yes, that will help a lot, since it will free up connections as soon as a group key times out. I forgot to document this important point in the Clustering docs. 😞 Note: job expirations don't affect connections.

The problem is that the Redis network protocol is not full duplex, which means only one request can be executed at a time, per connection, even if they don't try to access the same keys. In other words, the requests can't overlap in time: it is strictly Request-Response.

But Redis is single-threaded, so it shouldn't matter, right? Technically yes, but that ignores network latency and Redis Cluster/Sentinel. By creating multiple connections we can increase performance several times over by negating the effects of network latency.

When I first added Clustering to Bottleneck, I decided to favor performance instead of reusing connections, since I expected users to set their own Group timeouts. I thought it would be more common for users to have a few limiters accessed often than a lot of limiters accessed rarely. I was wrong! At the moment, limiters manage their own connections: one for requests and one for pubsub. Obviously, it causes issues when Groups become very large, which is why choosing a good Group timeout value is so important.

I've just finished implementing connection reuse within Groups. Every limiter within a Group will now share the same 2 connections. Standalone limiters will continue to have their own. This setting will be enabled by default and I'll document the tradeoffs. It will be released in the next few days with v2.7.0.

This work also paves the way to add support for ioredis. That's the next feature on the roadmap.
v2.7.0 will let you do:

const queues = new Bottleneck.Group({
  maxConcurrent: 10,
  minTime: 1100,
  // clustering
  datastore: 'redis',
  clearDatastore: false,
  clientOptions: process.env.REDIS_URL,
  timeout: 60 * 1000
}) |
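The latency argument above (one connection serializes requests, several connections overlap them) can be illustrated with a small stand-alone Node sketch. No Redis is involved; the round-trip time is an arbitrary assumed value:

```javascript
// Toy model: on one connection Redis is strictly request-response, so
// N requests cost about N round-trips; k connections overlap the latency.
const RTT_MS = 20; // assumed network round-trip time

function fakeRequest() {
  // One "Redis call" costing a single round-trip of pure latency.
  return new Promise((resolve) => setTimeout(resolve, RTT_MS));
}

async function runOver(nRequests, nConnections) {
  const start = Date.now();
  const perConnection = Math.ceil(nRequests / nConnections);
  // Each connection handles its share strictly one request at a time.
  await Promise.all(
    Array.from({ length: nConnections }, async () => {
      for (let i = 0; i < perConnection; i++) await fakeRequest();
    })
  );
  return Date.now() - start;
}

async function main() {
  const single = await runOver(10, 1); // ~10 round-trips of wall time
  const pooled = await runOver(10, 5); // ~2 round-trips of wall time
  console.log(single > pooled); // prints: true
}

main();
```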
Thanks for the explanation, and we look forward to IoRedis support.
|
v2.7.0 has been released; you'll be able to pass the settings shown above. It also changes how Redis connections are managed: all the limiters created by a Group share the same connection, while standalone limiters have their own connections. Make sure to listen to the error events. Thanks to the connection changes, you're not forced to set a low Group timeout anymore. I'll update this issue once ioredis is supported. |
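Since the release notes above mention connection management, a minimal sketch of attaching an error listener may help. This is a hedged example assuming Bottleneck's standard `.on('error', …)` event API; without a handler, connection failures can surface as uncaught exceptions:

```javascript
const Bottleneck = require('bottleneck');

const limiter = new Bottleneck({
  datastore: 'redis',
  clientOptions: process.env.REDIS_URL
});

// Log Redis connection problems instead of crashing the process.
limiter.on('error', (err) => {
  console.error('Bottleneck/Redis error:', err);
});
```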
@SGrondin thanks! Will update today and let you know how this goes for us! |
@SGrondin unfortunately it went poorly, lots of NOSCRIPT errors. Later edit: it seems like the NOSCRIPT error is preceded by an insightful warning. |
I'm investigating this immediately. Can you paste that warning? I'll be able to figure out which call is trying to pass blank arguments. I've figured out the issue, will release a hotfix today. |
v2.7.1 should fix the NOSCRIPT error. Thank you for letting me know quickly and I'm sorry for not catching this problem. I'm adding tests to ensure it doesn't happen again. |
@SGrondin thanks for the quick fix. No worries, happens to everyone. I've updated to 2.7.1, all seems fine and I'll check how it's going in 30 mins. |
@SGrondin works fine for now, will let you know if we encounter further issues. Thank you so much for your work! |
Awesome! 🎉 I'm glad to hear that. I've created #78 to track progress of ioredis support. |
v2.8.0 has been released; it adds support for ioredis, Redis Cluster and Redis Sentinel.
|
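For the failover question in this issue, Sentinel support (added in v2.8.0) is the usual answer: Sentinel tracks the master and redirects clients on failover, so the application doesn't juggle multiple connection strings itself. A hedged sketch assuming ioredis-style Sentinel options; the host names and master group name are placeholders:

```javascript
const Bottleneck = require('bottleneck');

const limiter = new Bottleneck({
  datastore: 'ioredis',
  clientOptions: {
    name: 'mymaster', // Sentinel master group name (placeholder)
    sentinels: [
      { host: 'sentinel-1.example.com', port: 26379 },
      { host: 'sentinel-2.example.com', port: 26379 }
    ]
  }
});
```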
I have two TCP gateways to my Redis DBs. How could I use multiple connection strings so that if one fails, it fails over to the second one?