Allow setting maximum age for http Agent's keep-alive #6713
Comments
Yes, you're responsible for closing it. But if you have a reference to the socket anyway, just destroy it. The agent removes it from the pool when it sees the 'close' event.
@bnoordhuis thanks! I can see that Node.js uses the 'free' event on the socket to determine when to return a socket to the pool, and I guess that might be where I need to hook in to be able to destroy the socket after the http request/response has completed? Or is there a better event to listen for? In terms of the actual Node.js feature, would you be willing to accept a PR once I figure out a robust way to achieve this? I think it would be a useful addition.
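(For later readers: a minimal sketch of the kind of hook being discussed. It assumes the agent re-emits the socket's 'free' event, which is internal behaviour rather than a documented API, and `MAX_SOCKET_AGE_MS` is a made-up tuning value, not a real Agent option.)

```js
const http = require('http');

// Illustrative value; not a real Agent option.
const MAX_SOCKET_AGE_MS = 30 * 1000;
const agent = new http.Agent({ keepAlive: true });
const firstFreed = new WeakMap(); // socket -> time it was first released to the pool

// The agent re-emits 'free' with the socket when a request/response has
// finished and the socket is being handed back to the pool.
agent.on('free', (socket) => {
  if (!firstFreed.has(socket)) {
    firstFreed.set(socket, Date.now());
  } else if (Date.now() - firstFreed.get(socket) > MAX_SOCKET_AGE_MS) {
    // Destroying the socket emits 'close', and the agent then drops it
    // from its free pool, so the next request opens a fresh connection.
    socket.destroy();
  }
});

http.get({ host: 'example.com', agent }, (res) => res.resume());
```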
I won't say 'no' outright but, seeing how it's something that can be accomplished without undue effort in user code, I wouldn't rate the chances of it getting accepted very high.
@bnoordhuis I agree that it is something that can be done on the client side, but I would argue that managing the connection pool is the responsibility of the agent. The agent currently exposes very limited options for tuning the connection pool (e.g. no idleTimeout either, which is common in connection pool options). I think it would be beneficial to the community if more tunable options like the one Sam suggested were added. Maybe it should also allow the user to control the number of requests a connection can handle before it gets destroyed (which is another common option in connection pool managers).
@Jackyjjc I'm not the one you need to convince (that's the HTTP WG) but let me say that the agent was expressly designed to be easy to override so it doesn't have to be a kitchen sink.
Yeah, generally have to agree with @bnoordhuis on this one. It's not likely to happen. @nodejs/http
Do I understand correctly that the recommended way to solve this issue is to implement the functionality in a separate Agent and use that? I'm a little surprised by that, because nothing in the documentation indicates that the Agent is meant to be replaced, as there is no documentation on the concrete interface the agent is supposed to provide. Should I have to rely on the Node.js source code to implement an alternative agent, or are there plans to document the Agent interface?
@srijs have you implemented the max-age yourself since this issue was discussed? I'm looking for reference code because we want to use keep-alive with servers that actively close the connection after certain time limits have been hit (not always gracefully, thus leaving us hanging in the dark).
We also ran into a similar issue. Our data centers do failover switching etc. through a global DNS balancer, but with Node's agent defaulting to keying keep-alive connections by hostname, the existing connections don't switch over. When there's high traffic, the timeout on these connections never triggers. This is what we did to solve the issue: https://github.com/electrode-io/electrode-keepalive
Haven't seen this issue before. FWIW I'm trying to address this by making Agent more customizable: #13005. PTAL
I believe this has now been addressed by #13005 and it's also being backported to 6.x. That said, please feel free to reopen if you feel like I misunderstood or there's a part of this that still needs addressing.
As far as I understand, #13005 offers an API that allows you to implement a timeout, but there still is no simple option to set the timeout. To emphasize my point (and maybe help someone along), here is my implementation. I found it pretty hard to implement it without race conditions.
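For anyone landing here later, here is a rough sketch of what such an agent could look like on top of the `keepSocketAlive()` hook added in #13005 (Node 8.1.0+). The `MaxAgeAgent` class and its `maxSocketAge` option are invented for illustration; this is not the implementation referenced in the previous comment.

```js
const http = require('http');

// maxSocketAge is a made-up option name; createConnection() and
// keepSocketAlive() are real Agent hooks (the latter since Node 8.1.0).
class MaxAgeAgent extends http.Agent {
  constructor(options = {}) {
    super(options);
    this.maxSocketAge = options.maxSocketAge || 60 * 1000;
    this.birth = new WeakMap(); // socket -> creation time
  }

  createConnection(options, callback) {
    // The default createConnection returns the new socket.
    const socket = super.createConnection(options, callback);
    this.birth.set(socket, Date.now());
    return socket;
  }

  keepSocketAlive(socket) {
    // Refuse to keep sockets that have exceeded their maximum age;
    // the agent then closes them instead of returning them to the pool.
    if (Date.now() - (this.birth.get(socket) || 0) > this.maxSocketAge) {
      return false;
    }
    return super.keepSocketAlive(socket);
  }
}

const agent = new MaxAgeAgent({ keepAlive: true, maxSocketAge: 30 * 1000 });
```

Returning false from `keepSocketAlive()` makes the agent drop the socket rather than pool it, so the next request opens a fresh connection (and does a fresh DNS lookup).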
Hi!
We're using Node.js in a setting where we do blue-green deployments by changing DNS records. We are running into an issue with the http Agent's keepAlive setting:
When we switch over to a new version of a downstream service by changing the DNS record, the http connections of the upstream service keep pointing to the old version, and continue to do so almost indefinitely if there is enough continuous traffic on the service (which most of the time there is).
For that reason, we cannot use the keepAlive feature at the moment.
It would be helpful if we were able to configure a maximum age for sockets, checked when a socket is drawn from the free pool; if the age exceeds the maximum, the socket would be closed and a new one requested.
Is there a workaround for this issue? I found that we could emit an 'agentRemove' event on the socket after a period of time, which is supposed to remove the socket from the agent, but I'm not sure whether the socket then requires manual closing after that?
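For reference, a sketch of that 'agentRemove' workaround could look roughly like the following; the 30-second limit is just an illustrative value, and, as confirmed in the replies above, once 'agentRemove' has been emitted the agent no longer closes the socket for you.

```js
const http = require('http');

const MAX_AGE_MS = 30 * 1000; // illustrative value
const agent = new http.Agent({ keepAlive: true });

const req = http.get({ host: 'example.com', agent }, (res) => res.resume());

req.on('socket', (socket) => {
  const timer = setTimeout(() => {
    // Take the socket out of the agent's pool; after this the agent no
    // longer manages it, so we have to destroy it ourselves.
    socket.emit('agentRemove');
    socket.destroy();
  }, MAX_AGE_MS);
  timer.unref();
  socket.once('close', () => clearTimeout(timer));
});
```

Note that this naive version can kill a socket mid-request; a real implementation would wait for the current response to finish before destroying the connection.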