Configure max open sockets and max idle sockets across all nodes #1740
Comments
Some technical observations: A single HTTP Agent is capable of handling multiple socket pools, one per upstream target. On the other hand, undici's Pool (used by UndiciConnection) is bound to a single upstream target (i.e. a single node), and holds a pool of Client connections to that origin. Even though undici's Pool allows limiting the maximum number of connections, AFAICT this limit cannot be set globally across all nodes at the undici level. Thus, if we want a global limit on open sockets, it will have to be enforced above undici's per-origin pools.
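To illustrate the asymmetry, here is a minimal sketch of the two models (the node URLs and limits are made up for illustration):

```ts
import http from 'node:http'
import { Pool } from 'undici'

// One http.Agent manages a separate socket pool per upstream host.
// maxSockets and maxFreeSockets apply per host, while maxTotalSockets
// caps open sockets across every host the agent talks to.
const agent = new http.Agent({
  keepAlive: true,
  maxSockets: 64,      // open sockets per node
  maxFreeSockets: 16,  // idle sockets kept alive per node
  maxTotalSockets: 256 // global cap across all nodes
})

// An undici Pool is bound to a single origin: `connections` limits the
// Client instances for that origin only, so N nodes means N pools with
// N independent caps and no global limit at the undici level.
const node1 = new Pool('http://node1:9200', { connections: 64 })
const node2 = new Pool('http://node2:9200', { connections: 64 })
```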
This issue is stale because it has been open 90 days with no activity.
This is blocking Kibana from adopting the undici transport: elastic/kibana#116087
Good to know. 👍 I'll do my best to prioritize it for 8.16.
Some notes so far: I looked at Undici's … I also tried to see if it could be enforced at the … So, I have some things to think about here. The work will continue!
I spent the day building a … I need to move on to some other priorities for a bit, but this is something I want to revisit soon and see if there are other possible implementations I'm overlooking.
Undici has an open issue to support configuring max sockets. I'm going to see if I can make a contribution there to get that functionality included. |
🚀 Feature Proposal
It should be possible to configure the client to control the max open sockets and max idle sockets across all nodes. In addition, the client should expose diagnostic information about all the open/idle sockets for observability.
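A hypothetical sketch of what such options could look like (none of these fields exists in the client today; all names are invented for illustration):

```ts
// Hypothetical option shape for this proposal: global caps that apply
// across all nodes combined, not per node.
interface GlobalSocketOptions {
  maxOpenSockets: number // total open sockets across all nodes
  maxIdleSockets: number // total idle (keep-alive) sockets across all nodes
}

// Per-node socket counts the client could expose for observability.
interface NodeSocketDiagnostics {
  node: string
  openSockets: number
  idleSockets: number
}
```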
Motivation
We want more control over the number of open sockets Kibana creates when using elasticsearch-js.
It seems like by default elasticsearch-js will create an agent for each node:
https://github.com/elastic/elasticsearch-js/blob/8.2/src/client.ts#L231
https://github.com/elastic/elastic-transport-js/blob/8.2/src/pool/BaseConnectionPool.ts#L154
https://github.com/elastic/elastic-transport-js/blob/8.2/src/connection/HttpConnection.ts#L83
https://github.com/elastic/elastic-transport-js/blob/8.2/src/connection/UndiciConnection.ts#L113
While it's possible to specify the option `agent: () => http.Agent`, which can return a single agent instance for use with `HttpConnection`, it doesn't seem possible to use a single Undici pool for all nodes. As a result, it's not possible to configure the maximum open and idle sockets across all connections/nodes in a way that's compatible with both `HttpConnection` and `UndiciConnection` (a sketch of the shared-agent setup appears at the end of this section).

We seem to be doing round-robin load balancing across all nodes:
https://github.com/elastic/elastic-transport-js/blob/81316b1e0d01fadf7ada678bb440af56c6f74f4d/src/Transport.ts#L236
But because nodes don't share a connection pool, it seems to diminish the value of the WeightedPool: if a node goes down, the client will still choose that node in round-robin fashion, sending 1/N of requests to the dead node. WeightedPool ignores the `selector` parameter to `getConnection`.
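For reference, a minimal sketch of the shared-agent setup mentioned above, assuming the 8.x client and illustrative node URLs. This only helps with `HttpConnection`; there is no equivalent shared pool for `UndiciConnection`:

```ts
import http from 'node:http'
import { Client } from '@elastic/elasticsearch'
import { HttpConnection } from '@elastic/transport'

// A single keep-alive agent returned for every node, so its socket
// caps apply across all connections the client opens.
const sharedAgent = new http.Agent({ keepAlive: true, maxTotalSockets: 256 })

const client = new Client({
  nodes: ['http://node1:9200', 'http://node2:9200'], // illustrative URLs
  Connection: HttpConnection,
  agent: () => sharedAgent
})
```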