Socket leakage on tls handshake timeout in https protocol #3251
Comments
You will need to set the …
Yes, I agree it is incorrect and I've changed it to the default of 5000, but it did not help. I add a `Connection: close` header to every response, and there is a general timeout of 10000.
I don't think Fastify or Node.js are doing anything with the … Could you upload a set of scripts to completely reproduce the problem?
Found the source of the leakage. For some reason the https server does not close client connections when the handshake timeout is reached; it just emits a `clientError` event. Here is an example:

```js
const fastify = require('fastify');

const good = fastify({ connectionTimeout: 10000 })
good.listen(12121)

const bad = fastify({ connectionTimeout: 10000, https: { handshakeTimeout: 10000 } })
bad.listen(12122)
```

then:
This code:

```js
const fastify = require('fastify');

const good = fastify({ connectionTimeout: 10000 })
good.listen(12121)

const bad = fastify({ connectionTimeout: 10000, https: { handshakeTimeout: 10000 } })
bad.server.on('clientError', (err, socket) => {
  console.error(err);
  socket.end();
})
bad.listen(12122)
```

returns output, but `lsof` still shows an established connection:
Calling `socket.destroy()` instead of `socket.end()` in `onClientError` solves the problem. I have no idea why https behaves this way, but I think it would be great either to update the documentation or to add a `clientError` handler to the server.
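The destroy-based workaround described above can be sketched as a minimal `clientError` listener. This is a sketch of the approach from this thread, not Fastify's actual default handler; the function name is arbitrary:

```javascript
// Minimal clientError listener that force-closes the socket.
// After a TLS handshake timeout the socket was never upgraded to a
// working HTTP connection, so socket.end() may leave the TCP connection
// (and its file descriptor) open; socket.destroy() always releases it.
function onClientError (err, socket) {
  if (socket.destroyed) return // already closed, nothing to release
  socket.destroy(err)          // force-close regardless of writability
}

// Attached to the server from the example above:
// bad.server.on('clientError', onClientError)
```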
I think we already have a `clientError` handler attached to the server. The default one only calls …
@climba03003 yes, you are right. In this use case …
Monkey-patching Fastify and removing the `onClientError` handler also solves the problem.
Wasn't it possible to set a custom `clientErrorHandler`?
This seems good 👍🏽 Great spot!
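For reference, a custom handler can be supplied through Fastify's `clientErrorHandler` factory option (available in Fastify v3) instead of monkey-patching. A minimal sketch, assuming the option behaves as documented; the key/cert values in the commented wiring are placeholders:

```javascript
// Replacement for Fastify's default clientError handling, passed via the
// clientErrorHandler factory option. It replaces the default handler
// entirely, so it must close the socket itself.
function clientErrorHandler (err, socket) {
  if (socket.destroyed) return // nothing to do
  if (socket.writable) {
    // Connection is usable: answer with a minimal response before closing.
    socket.end('HTTP/1.1 400 Bad Request\r\n\r\n')
  } else {
    // Handshake timed out: the socket never became writable, destroy it.
    socket.destroy()
  }
}

// Wiring it up (key/cert are placeholders):
// const app = require('fastify')({
//   https: { handshakeTimeout: 10000, key, cert },
//   clientErrorHandler
// })
```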
Here is my new `clientErrorHandler`:

```ts
export const clientErrorHandler = (err: ServerError | null, socket: Socket): void => {
  logger.error({ message: err?.message, destroyed: socket.destroyed, writable: socket.writable });
  if (socket.destroyed) {
    return;
  } else if (socket.writable) {
    socket.end();
  } else {
    socket.destroy();
  }
};
```

Instead of comparing `error.type` to `ECONNRESET`, I check `socket.destroyed`.
1. Added a check for the writable state of the socket before trying to end the request gracefully. 2. Node.js updated their official documentation, so I replaced the link to the issue with a link to their official documentation. Fixes fastify#3251
More investigation.
Prerequisites
Fastify version
3.19.2
Plugin version
3.0.0
Node.js version
16.6.1
Operating system
Linux
Operating system version (e.g. 20.04, 11.3, 10)
alpine-3.13.5
Description
I've switched my code from Koa to Fastify in a single region. After the switch, I started to get EMFILE errors when uptime reaches 24 hours.
To debug the issue I've added a job that runs `lsof` every 2 minutes. This is the response: When I tried to track the history of a single socket across the different `lsof` outputs, I got this:
As you can see, this zombie socket (with id 999238) was initially an https socket handled by Fastify. The history does not include all states (ESTABLISHED and so on) because a snapshot was saved only every 2 minutes.
At first I blamed the proxywrap module (which had worked with Koa as well), but removing proxywrap did not change anything.
Steps to Reproduce
I don't have steps to reproduce yet. All my attempts to reproduce did not succeed (in production it is still an issue). I think it is related to very slow EDGE mobile connections, but that is a guess.
Here is my app config:
Expected Behavior
All sockets need to be closed.