http/2 send_timeout triggered but not in http/1 #3189
Comments
@Cactusbone your report comes as a surprise because we have test cases for HTTP/1.
Using a default Varnish installation on Ubuntu 18.04 (apt-get install varnish nodejs) with a Node.js backend, the connection is not closed in http/1. The Node.js server simply sends a dot every second for 700s, then ends the request properly. server.js:
const http = require('http');
const delay = time => new Promise(res => setTimeout(res, time));
const requestListener = async function (req, res) {
res.writeHead(200);
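// send one byte per second for 700 seconds, then end the response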
for (let time = 0; time < 700; time++) {
res.write(".");
await delay(1000);
}
res.end();
}
const server = http.createServer(requestListener);
server.listen(8080);
VCL used:
vcl 4.0;
backend default {
.host = "127.0.0.1";
.port = "8080";
}
sub vcl_recv {
}
sub vcl_backend_response {
}
sub vcl_deliver {
}
Varnish logs:
http2:(send_timeout=30) - VARNISH 5.2
I've edited the previous post with Varnish logs and configuration files to reproduce. I'll update Varnish to the latest version on my test environment :)
After upgrading to 6.3, and long after closing:
http1.1: (connection lasted the full 700s, well over the 30s timeout) - VARNISH 6.3
@Cactusbone thank you very much for the detailed information. We should be able to work with that.
@nigoroll the idle timeout is set to 30 sec by default, but we're sending data every second, so I believe it's not triggered at all. For now we've increased send_timeout to 4h, which seems enough for our use case.
I got a simple VTC reproducing the issue now. Expect a fix soon.
Previously, we checked `v1l->deadline` (which gets initialized from the `send_timeout` session attribute or parameter) only for short writes, so for successful "dripping" http1 writes (streaming from a backend busy object with small chunks), we did not respect the timeout.

This patch restructures `V1L_Flush()` to always check the deadline before a write by turning a `while () { ... }` into a `do { ... } while ()` with the same continuation criteria: `while (i != v1l->liov)` is turned into `if (i == v1l->liov) break;` and `while (i > 0 || errno == EWOULDBLOCK)` is kept to retry short writes.

This also reduces the `writev()` call sites to one.

Fixes varnishcache#3189
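For illustration only, here is a minimal, self-contained C sketch of the loop shape the commit describes: the deadline is checked before every writev(), the loop is a do/while, and a short write retries with the unsent tail. The struct layout, field names and helper functions below are simplified stand-ins, not the actual bin/varnishd/http1/cache_http1_line.c code.

/*
 * Sketch only: simplified stand-in for V1L_Flush(), not the Varnish source.
 */
#include <sys/uio.h>
#include <errno.h>
#include <math.h>
#include <string.h>
#include <time.h>

struct v1l_sketch {
	int		fd;		/* client socket */
	int		niov;		/* number of pending iovecs */
	size_t		liov;		/* pending bytes across all iovecs */
	double		deadline;	/* absolute send deadline; NAN = none */
	struct iovec	iov[64];
};

/* stand-in for VTIM_real(): wall clock time in seconds */
static double
now_real(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_REALTIME, &ts);
	return (ts.tv_sec + 1e-9 * ts.tv_nsec);
}

/* drop the n bytes that writev() already sent from the front of the list */
static void
consume_iovs(struct v1l_sketch *v1l, size_t n)
{
	int skip = 0;

	while (n > 0 && skip < v1l->niov) {
		if (n >= v1l->iov[skip].iov_len) {
			n -= v1l->iov[skip].iov_len;
			skip++;
		} else {
			v1l->iov[skip].iov_base =
			    (char *)v1l->iov[skip].iov_base + n;
			v1l->iov[skip].iov_len -= n;
			n = 0;
		}
	}
	if (skip > 0) {
		memmove(v1l->iov, v1l->iov + skip,
		    (v1l->niov - skip) * sizeof v1l->iov[0]);
		v1l->niov -= skip;
	}
}

/* returns 0 on success, -1 on timeout or write error */
static int
flush_sketch(struct v1l_sketch *v1l)
{
	ssize_t i;

	do {
		/* check the deadline before every write; the comparison is
		 * false when deadline is NAN, i.e. the timeout is disabled */
		if (now_real() > v1l->deadline)
			return (-1);			/* send_timeout hit */
		i = writev(v1l->fd, v1l->iov, v1l->niov);
		if (i > 0) {
			v1l->liov -= (size_t)i;
			if (v1l->liov == 0)
				break;			/* all bytes written */
			consume_iovs(v1l, (size_t)i);	/* retry the tail */
		}
	} while (i > 0 || errno == EWOULDBLOCK);
	return (v1l->liov == 0 ? 0 : -1);
}

In Varnish itself, a stalled write on the client side is unblocked by SO_SNDTIMEO (idle_send_timeout), which is what gives the deadline check a chance to run, as the next commit message explains.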
As @Dridi and I concluded, the send_timeout had no effect on backend connections anyway, because we never set SO_SNDTIMEO (aka idle_send_timeout on the client side) on backend connections.

With the previous commit, we fix the send_timeout on the client side and would thus also enable it for "dripping" writes on the backend side. To preserve existing behavior for the time being, we explicitly disable the timeout (actually a deadline) on the backend side. There is ongoing work in progress to rework all of our timeouts for 7.x.

Implementation note: if (VTIM_real() > v1l->deadline) evaluates to false for v1l->deadline == NaN.

Ref varnishcache#3189
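The implementation note relies on IEEE 754 semantics: any ordered comparison involving NaN evaluates to false. A standalone C illustration (the constant merely stands in for VTIM_real(); this is not Varnish code):

#include <math.h>
#include <stdio.h>

int
main(void)
{
	double deadline = NAN;		/* "timeout disabled" sentinel */
	double now = 1586476800.0;	/* stand-in for VTIM_real() */

	/* prints "timeout check: 0" -- a NaN deadline never triggers */
	printf("timeout check: %d\n", now > deadline);
	return (0);
}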
Using http/1 and beresp.do_stream = true, we can take as long as we want to send data, but after switching to http/2, the send_timeout is triggered and the connection closes.
Expected Behavior
Either the connection should close using http/1 to honor the send_timeout configuration, or it should not close in http/2.
Current Behavior
The connection is closed after send_timeout using http/2, but not http/1, both with beresp.do_stream = true, beresp.do_gzip = false and beresp.ttl = 0s.
Possible Solution
Improve docs to explain http/2 behaviour
The ability to configure send_timeout in VCL would solve the problem (#2983).
Steps to Reproduce
See the server.js and VCL above.
Context
We're sending a file to the client, generating it on the fly. Right now we've increased the timeout to 4 hours so it works, but we might be keeping idle connections open for naught.
I must stress that the connection is not closed using http/1, so we're sending enough data to keep the connection alive.
Your Environment
Running behind a haproxy 7.5 in proxy mode