Connection closing in NIO mode interferes with IO threads #380
acogoluegnes added a commit that referenced this issue on Aug 14, 2018
When sharing the same executor for NIO and connection closing, all the threads of the pool can be busy recovering connections, leaving no thread left for IO. This commit adds a new executor service to the NIO mode that all connection closing is submitted to. This is useful when an application maintains dozens or hundreds of connections and suffers massive connection loss. Hundreds of connection closing tasks can be submitted very quickly, so controlling the number of threads and leaving some threads available for IO is critical. If an application maintains just a few connections and can deal with the creation of a few threads, using the new executor isn't necessary. Fixes #380
acogoluegnes added a commit that referenced this issue on Aug 16, 2018
(cherry picked from commit 0a7e7e5)
acogoluegnes added a commit that referenced this issue on Aug 16, 2018
(cherry picked from commit 0a7e7e5)
acogoluegnes added a commit that referenced this issue on Aug 16, 2018
The NIO mode uses an optional executor service for both the IO loop and asynchronous connection closing. Connection recovery can then run in the same thread the connection closing was dispatched to, and the IO of the new connection is assigned to one of the IO threads. This works as long as the executor service has enough threads.
In case of massive disconnection, all the NIO tasks are terminated (because they have been idle for about 1 second), the connection closing tasks use threads from the NIO executor service, and connection recovery triggers the re-creation of NIO threads to dispatch the IO of the new connections. If there are enough connections (e.g. 20) and the executor service is small enough (e.g. 10 threads), all the threads of the executor service end up busy closing connections, and none is left for NIO.
The executor service size can be increased, but then we lose the benefit of NIO: a re-connection storm would require as many threads as connections.
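The starvation scenario above can be reproduced with plain JDK classes (no RabbitMQ client involved): a fixed pool whose threads are all occupied by long-running "closing" tasks never gets around to a queued "IO" task until the closing tasks finish. The class and method names here are illustrative only.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class ExecutorStarvationDemo {

    // Returns {ioRanWhileSaturated, ioRanAfterRelease}.
    static boolean[] run() throws InterruptedException {
        int poolSize = 2;
        ExecutorService shared = Executors.newFixedThreadPool(poolSize);
        CountDownLatch release = new CountDownLatch(1);

        // Occupy every pool thread with a long-running "connection closing" task.
        for (int i = 0; i < poolSize; i++) {
            shared.submit(() -> {
                try {
                    release.await();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        // The "IO" task queues behind the closing tasks and cannot start.
        AtomicBoolean ioRan = new AtomicBoolean(false);
        shared.submit(() -> ioRan.set(true));

        Thread.sleep(200);
        boolean whileSaturated = ioRan.get(); // still false: pool is starved

        release.countDown(); // closing tasks finish, freeing threads for "IO"
        shared.shutdown();
        shared.awaitTermination(5, TimeUnit.SECONDS);
        return new boolean[] { whileSaturated, ioRan.get() };
    }

    public static void main(String[] args) throws InterruptedException {
        boolean[] r = run();
        System.out.println("IO ran while saturated: " + r[0]);
        System.out.println("IO ran after release:   " + r[1]);
    }
}
```

With 20 closing tasks and a 10-thread pool, as in the issue, the same effect lasts for as long as the closings take.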
A solution could be to use a dedicated executor service to dispatch connection closing. It would be another option in
NioParams
. A typical setting would be a small thread pool that can queue requests. The NIO mode could also get a new setting for the idle time of NIO tasks: the current value is short (1 second) and could be made longer to avoid thread churn.
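The proposed "low-sized thread pool that can enqueue requests" can be sketched with plain JDK classes. The pool sizes and timeouts below are illustrative assumptions, and wiring it in as a `NioParams` option is the issue's proposal, not an API shown here:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ClosingExecutorSketch {

    // Small fixed-size pool with an unbounded queue: a burst of hundreds of
    // connection closing tasks queues up here instead of consuming one
    // thread each, leaving the NIO threads free for IO.
    static ExecutorService connectionClosingExecutor() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2,                        // closing is not latency-critical
                30, TimeUnit.SECONDS,        // idle threads die after a burst
                new LinkedBlockingQueue<>());
        pool.allowCoreThreadTimeOut(true);   // keep no threads when idle
        return pool;
    }
}
```

Closing tasks tolerate queueing delay, so a bounded thread count with an unbounded queue trades a little closing latency for predictable thread usage during a re-connection storm.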