From 03922069d8ac48debfa2ad05e3c6fc376d463d6a Mon Sep 17 00:00:00 2001
From: Damien Arrachequesne
Date: Mon, 25 Mar 2024 08:32:04 +0100
Subject: [PATCH] docs: add note about nginx proxy_read_timeout

Related: https://github.com/socketio/socket.io/issues/3025
---
 .../01-Documentation/troubleshooting.md    | 14 ++++++++++++-
 .../02-Server/behind-a-reverse-proxy.md    | 12 ++++++++---
 .../02-Server/using-multiple-nodes.md      | 20 ++++++++++++-------
 3 files changed, 35 insertions(+), 11 deletions(-)

diff --git a/docs/categories/01-Documentation/troubleshooting.md b/docs/categories/01-Documentation/troubleshooting.md
index 9db4faf5..a3db0618 100644
--- a/docs/categories/01-Documentation/troubleshooting.md
+++ b/docs/categories/01-Documentation/troubleshooting.md
@@ -340,6 +340,18 @@ The possible reasons are listed [here](../03-Client/client-socket-instance.md#di

 ### Possible explanations

+#### Something between the server and the client closes the connection
+
+If the disconnection happens at a regular interval, this might indicate that something between the server and the client is not properly configured and closes the connection:
+
+- nginx
+
+The value of nginx's [`proxy_read_timeout`](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_read_timeout) (60 seconds by default) must be bigger than Socket.IO's [`pingInterval + pingTimeout`](../../server-options.md#pinginterval) (45 seconds by default), else it will forcefully close the connection if no data is sent after the given delay and the client will get a "transport close" error.
+
+- Apache HTTP Server
+
+The value of httpd's [`ProxyTimeout`](https://httpd.apache.org/docs/2.4/mod/mod_proxy.html#proxytimeout) (60 seconds by default) must be bigger than Socket.IO's [`pingInterval + pingTimeout`](../../server-options.md#pinginterval) (45 seconds by default), else it will forcefully close the connection if no data is sent after the given delay and the client will get a "transport close" error.
+
 #### The browser tab was minimized and heartbeat has failed

 When a browser tab is not in focus, some browsers (like [Chrome](https://developer.chrome.com/blog/timer-throttling-in-chrome-88/#intensive-throttling)) throttle JavaScript timers, which could lead to a disconnection by ping timeout **in Socket.IO v2**, as the heartbeat mechanism relied on `setTimeout` function on the client side.
@@ -426,7 +438,7 @@ io.on("connection", (socket) => {

 #### A proxy in front of your servers does not accept the WebSocket connection

-If a proxy like NginX or Apache HTTPD is not properly configured to accept WebSocket connections, then you might get a `TRANSPORT_MISMATCH` error:
+If a proxy like nginx or Apache HTTPD is not properly configured to accept WebSocket connections, then you might get a `TRANSPORT_MISMATCH` error:

 ```js
 io.engine.on("connection_error", (err) => {
diff --git a/docs/categories/02-Server/behind-a-reverse-proxy.md b/docs/categories/02-Server/behind-a-reverse-proxy.md
index def209e8..688734b3 100644
--- a/docs/categories/02-Server/behind-a-reverse-proxy.md
+++ b/docs/categories/02-Server/behind-a-reverse-proxy.md
@@ -6,14 +6,14 @@ slug: /reverse-proxy/

 You will find below the configuration needed for deploying a Socket.IO server behind a reverse-proxy solution, such as:

-- [NginX](#nginx)
+- [nginx](#nginx)
 - [Apache HTTPD](#apache-httpd)
 - [Node.js `http-proxy`](#nodejs-http-proxy)
 - [Caddy 2](#caddy-2)

 In a multi-server setup, please check the documentation [here](using-multiple-nodes.md).

-## NginX
+## nginx

 Content of `/etc/nginx/nginx.conf`:

@@ -41,7 +41,13 @@ Related:
 - [proxy_pass documentation](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass)
 - [configuration in a multi-server setup](using-multiple-nodes.md#nginx-configuration)

-If you only want to forward the Socket.IO requests (for example when NginX handles the static content):
+:::caution
+
+The value of nginx's [`proxy_read_timeout`](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_read_timeout) (60 seconds by default) must be bigger than Socket.IO's [`pingInterval + pingTimeout`](../../server-options.md#pinginterval) (45 seconds by default), else nginx will forcefully close the connection if no data is sent after the given delay and the client will get a "transport close" error.
+
+:::
+
+If you only want to forward the Socket.IO requests (for example when nginx handles the static content):

 ```
 http {
diff --git a/docs/categories/02-Server/using-multiple-nodes.md b/docs/categories/02-Server/using-multiple-nodes.md
index 6075abd0..c4d8efbb 100644
--- a/docs/categories/02-Server/using-multiple-nodes.md
+++ b/docs/categories/02-Server/using-multiple-nodes.md
@@ -55,8 +55,8 @@ To achieve sticky-session, there are two main solutions:

 You will find below some examples with common load-balancing solutions:

-- [NginX](#nginx-configuration) (IP-based)
-- [NginX Ingress (Kubernetes)](#nginx-ingress-kubernetes) (IP-based)
+- [nginx](#nginx-configuration) (IP-based)
+- [nginx Ingress (Kubernetes)](#nginx-ingress-kubernetes) (IP-based)
 - [Apache HTTPD](#apache-httpd-configuration) (cookie-based)
 - [HAProxy](#haproxy-configuration) (cookie-based)
 - [Traefik](#traefik) (cookie-based)
@@ -94,7 +94,7 @@ const socket = io("https://server-domain.com", {

 Without it, the cookie will not be sent by the browser and you will experience HTTP 400 "Session ID unknown" responses. More information [here](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/withCredentials).

-### NginX configuration
+### nginx configuration

 Within the `http { }` section of your `nginx.conf` file, you can declare a `upstream` section with a list of Socket.IO process you want to balance load between:

@@ -134,14 +134,20 @@ http {

 Notice the `hash` instruction that indicates the connections will be sticky.

-Make sure you also configure `worker_processes` in the topmost level to indicate how many workers NginX should use. You might also want to look into tweaking the `worker_connections` setting within the `events { }` block.
+Make sure you also configure `worker_processes` in the topmost level to indicate how many workers nginx should use. You might also want to look into tweaking the `worker_connections` setting within the `events { }` block.

 Links:

 - [Example](https://github.com/socketio/socket.io/tree/main/examples/cluster-nginx)
-- [NginX Documentation](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#hash)
+- [nginx Documentation](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#hash)

-### NginX Ingress (Kubernetes)
+:::caution
+
+The value of nginx's [`proxy_read_timeout`](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_read_timeout) (60 seconds by default) must be bigger than Socket.IO's [`pingInterval + pingTimeout`](../../server-options.md#pinginterval) (45 seconds by default), else nginx will forcefully close the connection if no data is sent after the given delay and the client will get a "transport close" error.
+
+:::
+
+### nginx Ingress (Kubernetes)

 Within the `annotations` section of your Ingress configuration, you can declare an upstream hashing based on the client's IP address, so that the Ingress controller always assigns the requests from a given IP address to the same pod:

@@ -299,7 +305,7 @@ Links:

 ### Using Node.js Cluster

-Just like NginX, Node.js comes with built-in clustering support through the `cluster` module.
+Just like nginx, Node.js comes with built-in clustering support through the `cluster` module.

 There are several solutions, depending on your use case:
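
To make the nginx caution added by this patch concrete, here is a minimal sketch of a `location` block whose `proxy_read_timeout` stays above Socket.IO's default `pingInterval + pingTimeout` (25 + 20 = 45 seconds). The listen port, backend address, path and the 120-second values are illustrative choices, not part of the patch:

```
http {
  server {
    listen 80;

    location /socket.io/ {
      # keep this above pingInterval + pingTimeout (45 seconds by default),
      # otherwise nginx closes the idle connection between two heartbeats
      # and the client sees a "transport close" disconnect
      proxy_read_timeout 120s;
      proxy_send_timeout 120s;

      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
      proxy_pass http://localhost:3000;
    }
  }
}
```

Raising `proxy_send_timeout` alongside `proxy_read_timeout` is a common precaution for long-lived upgraded connections, although the patch only mentions the latter.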
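The same reasoning applies to the Apache HTTP Server note: a sketch of a virtual host where `ProxyTimeout` is raised above the heartbeat budget. The port, addresses and the 120-second value are placeholders, and the directives needed for the WebSocket upgrade itself are omitted here (they are covered elsewhere in the reverse-proxy documentation):

```
<VirtualHost *:80>
  # keep this above Socket.IO's pingInterval + pingTimeout (45 seconds by default),
  # otherwise httpd drops the idle proxied connection between two heartbeats
  ProxyTimeout 120

  ProxyPass "/socket.io/" "http://localhost:3000/socket.io/"
  ProxyPassReverse "/socket.io/" "http://localhost:3000/socket.io/"
</VirtualHost>
```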
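The 45-second figure quoted throughout the patch comes from the server's heartbeat defaults, `pingInterval: 25000` ms plus `pingTimeout: 20000` ms. A minimal sketch that spells those defaults out and logs the disconnect reason, so a "transport close" recurring at a fixed period is easy to spot; the port is a placeholder:

```js
import { Server } from "socket.io";

const io = new Server(3000, {
  // defaults shown explicitly: a heartbeat round trip may take up to
  // pingInterval + pingTimeout = 45 seconds before a "ping timeout"
  pingInterval: 25000,
  pingTimeout: 20000,
});

io.on("connection", (socket) => {
  socket.on("disconnect", (reason) => {
    // "transport close" at a regular interval often points to a proxy
    // timing out the connection (e.g. nginx's proxy_read_timeout)
    console.log(`disconnected: ${reason}`);
  });
});
```

Lowering `pingInterval` and `pingTimeout` so that their sum stays under the proxy's timeout is an alternative when the proxy configuration cannot be changed.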