http/2 send_timeout triggered but not in http/1 #3189

Closed
Cactusbone opened this issue Jan 13, 2020 · 6 comments · Fixed by #3201
Comments

@Cactusbone

Cactusbone commented Jan 13, 2020

Using http/1 and beresp.do_stream = true, we can take as long as we want to send data,
but after switching to http/2, the send_timeout is triggered and the connection closes.

Expected Behavior

Either the connection should close with http/1 as well, to honor the send_timeout configuration, or it should not close with http/2.

Current Behavior

The connection is closed after send_timeout with http/2, but not with http/1, in both cases with beresp.do_stream = true, beresp.do_gzip = false and beresp.ttl = 0s.

Possible Solution

Improve the docs to explain the http/2 behaviour.
The ability to configure send_timeout in VCL would solve the problem (#2983).

Steps to Reproduce

  1. Have a backend which sends data regularly over a long period; for example, we sent a dot every second for 700 seconds (to exceed the default 600 s timeout).
  2. Call Varnish over http/2 (we used haproxy in tcp mode for SSL termination).
  3. Wait until the timeout: the connection gets closed.
  4. With http/1, the connection is not closed.

Context

We're sending a file to the client, generating it on the fly. Right now we have increased the timeout to 4 hours so it works, but we might be keeping idle connections around for naught.
I must stress that the connection is not closed with http/1, so we are sending enough data to keep the connection alive.

Your Environment

  • Version used: 6.3.1
  • Operating System and version: docker image
  • Source of binary packages used (if any): "official" docker image 6.3.1,
    running behind haproxy 7.5 in proxy mode
@nigoroll
Member

@Cactusbone your report comes as a surprise because we have test cases for the HTTP/1 send_timeout.
Can you please double-check that this issue has been diagnosed correctly and provide varnishlog evidence? A reproducible test case would be ideal.

@Cactusbone
Author

Cactusbone commented Jan 13, 2020

Using a default varnish installation on ubuntu 18.04, with a nodejs server, the connection is not closed with http/1.

apt-get install varnish nodejs

The Node.js server simply sends a dot every second for 700 s, then ends the HTTP request properly.

server.js:

const http = require('http');
const delay = time => new Promise(res => setTimeout(res, time));

const requestListener = async function (req, res) {
    res.writeHead(200);
    for (let time = 0; time < 700; time++) {
        res.write(".");
        await delay(1000);
    }
    res.end();
}

const server = http.createServer(requestListener);
server.listen(8080);

node server.js &

curl localhost:6081 => no timeout after 600s

I've also tried -p send_timeout=30 and the connection was not closed after 30 s either.

Using -p send_timeout=30 and -p feature=+http2, and calling curl localhost:8080 --http2-prior-knowledge, the connection gets closed after 1 min 45 s (not after 30 s, but it does get closed).

VCL used:

vcl 4.0;

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
}

sub vcl_backend_response {
}

sub vcl_deliver {
}

Varnish logs:
http1: (send_timeout=30) - VARNISH 5.2

*   << Session  >> 65537
-   Begin          sess 0 HTTP/1
-   SessOpen       10.0.2.2 51137 a0 10.0.2.15 6081 1578997951.300162 24
-   Link           req 65538 rxreq
-   VSL            timeout
-   End            synth

*   << BeReq    >> 65539
-   Begin          bereq 65538 fetch
-   Timestamp      Start: 1578997951.310899 0.000000 0.000000
-   BereqMethod    GET
-   BereqURL       /
-   BereqProtocol  HTTP/1.1
-   BereqHeader    Host: localhost:8080
-   BereqHeader    User-Agent: curl/7.67.0
-   BereqHeader    Accept: */*
-   BereqHeader    X-Forwarded-For: 10.0.2.2
-   BereqHeader    Accept-Encoding: gzip
-   BereqHeader    X-Varnish: 65539
-   VCL_call       BACKEND_FETCH
-   VCL_return     fetch
-   BackendOpen    27 boot.default 127.0.0.1 8080 127.0.0.1 53494
-   BackendStart   127.0.0.1 8080
-   Timestamp      Bereq: 1578997951.311080 0.000180 0.000180
-   Timestamp      Beresp: 1578997951.311684 0.000785 0.000604
-   BerespProtocol HTTP/1.1
-   BerespStatus   200
-   BerespReason   OK
-   BerespHeader   Date: Tue, 14 Jan 2020 10:32:31 GMT
-   BerespHeader   Connection: keep-alive
-   BerespHeader   Transfer-Encoding: chunked
-   TTL            RFC 120 10 0 1578997951 1578997951 1578997951 0 0
-   VCL_call       BACKEND_RESPONSE
-   VCL_return     deliver
-   Storage        malloc s0
-   ObjProtocol    HTTP/1.1
-   ObjStatus      200
-   ObjReason      OK
-   ObjHeader      Date: Tue, 14 Jan 2020 10:32:31 GMT
-   Fetch_Body     2 chunked stream
-   BackendReuse   27 boot.default
-   Timestamp      BerespBody: 1578998652.198985 700.888085 700.887301
-   Length         700
-   BereqAcct      146 0 146 108 700 808
-   End

*   << Request  >> 65538
-   Begin          req 65537 rxreq
-   Timestamp      Start: 1578997951.310759 0.000000 0.000000
-   Timestamp      Req: 1578997951.310759 0.000000 0.000000
-   ReqStart       10.0.2.2 51137
-   ReqMethod      GET
-   ReqURL         /
-   ReqProtocol    HTTP/1.1
-   ReqHeader      Host: localhost:8080
-   ReqHeader      User-Agent: curl/7.67.0
-   ReqHeader      Accept: */*
-   ReqHeader      X-Forwarded-For: 10.0.2.2
-   VCL_call       RECV
-   VCL_return     hash
-   VCL_call       HASH
-   VCL_return     lookup
-   VCL_call       MISS
-   VCL_return     fetch
-   Link           bereq 65539 fetch
-   Timestamp      Fetch: 1578997951.311768 0.001009 0.001009
-   RespProtocol   HTTP/1.1
-   RespStatus     200
-   RespReason     OK
-   RespHeader     Date: Tue, 14 Jan 2020 10:32:31 GMT
-   RespHeader     X-Varnish: 65538
-   RespHeader     Age: 0
-   RespHeader     Via: 1.1 varnish (Varnish/5.2)
-   VCL_call       DELIVER
-   VCL_return     deliver
-   Timestamp      Process: 1578997951.311778 0.001019 0.000010
-   RespHeader     Accept-Ranges: bytes
-   RespHeader     Transfer-Encoding: chunked
-   RespHeader     Connection: keep-alive
-   Timestamp      Resp: 1578998652.199115 700.888355 700.887336
-   ReqAcct        78 0 78 188 700 888
-   End

http2: (send_timeout=30) - VARNISH 5.2

*   << Request  >> 9
-   Begin          req 8 rxreq
-   Timestamp      Start: 1578999293.112272 0.000000 0.000000
-   ReqProtocol    HTTP/2.0
-   Timestamp      Req: 1578999293.112272 0.000000 0.000000
-   ReqStart       10.0.2.2 61798
-   ReqMethod      GET
-   ReqURL         /
-   ReqProtocol    HTTP/2.0
-   ReqHeader      scheme: http
-   ReqHeader      host: localhost:8080
-   ReqHeader      user-agent: curl/7.67.0
-   ReqHeader      accept: */*
-   ReqHeader      X-Forwarded-For: 10.0.2.2
-   VCL_call       RECV
-   VCL_return     hash
-   VCL_call       HASH
-   VCL_return     lookup
-   VCL_call       MISS
-   VCL_return     fetch
-   Link           bereq 98305 fetch
-   Timestamp      Fetch: 1578999293.113192 0.000921 0.000921
-   RespProtocol   HTTP/1.1
-   RespStatus     200
-   RespReason     OK
-   RespHeader     Date: Tue, 14 Jan 2020 10:54:53 GMT
-   RespHeader     X-Varnish: 9
-   RespHeader     Age: 0
-   RespHeader     Via: 1.1 varnish (Varnish/5.2)
-   VCL_call       DELIVER
-   VCL_return     deliver
-   Timestamp      Process: 1578999293.113202 0.000930 0.000010
-   RespHeader     Accept-Ranges: bytes
-   Debug          "HP {33, "date:", ""} <Date: Tue, 14 Jan 2020 10:54:53 GMT>"
-   Debug          "HP {21, "age:", ""} <Age: 0>"
-   Debug          "HP {60, "via:", ""} <Via: 1.1 varnish (Varnish/5.2)>"
-   Debug          "HP {18, "accept-ranges:", ""} <Accept-Ranges: bytes>"
-   Timestamp      Resp: 1578999398.250811 105.138540 105.137609
-   ReqAcct        30 0 30 86 105 191
-   End

*   << Session  >> 8
-   Begin          sess 0 HTTP/1
-   SessOpen       10.0.2.2 61798 a0 10.0.2.15 6081 1578999293.101338 25
-   Link           req 9 rxreq
-   H2TxBody       2e
-   H2TxHdr        000001000000000001
-   H2TxBody       2e
-   H2TxHdr        000001000000000001
-   H2TxBody       2e
-   H2TxHdr        000001000000000001
-   H2TxBody       2e
-   H2TxHdr        000001000000000001
-   H2TxBody       2e
-   H2TxHdr        000001000000000001
-   H2TxBody       2e
-   H2TxHdr        000001000000000001
-   H2TxBody       2e
-   H2TxHdr        000001000000000001
-   H2TxBody       2e
-   H2TxHdr        000001000000000001
-   H2TxBody       2e
-   H2TxHdr        000001000000000001
-   H2TxBody       2e
-   H2TxHdr        000001000000000001
-   H2TxBody       2e
-   H2TxHdr        000001000000000001
-   H2TxBody       2e
-   H2TxHdr        000001000000000001
-   H2TxBody       2e
-   H2TxHdr        000001000000000001
-   H2TxBody       2e
-   H2TxHdr        000001000000000001
-   H2TxBody       2e
-   H2TxHdr        000001000000000001
-   H2TxBody       2e
-   H2TxHdr        000001000000000001
-   H2TxBody       2e
-   H2TxHdr        000001000000000001
-   H2TxBody       2e
-   H2TxHdr        000001000000000001
-   H2TxBody       2e
-   H2TxHdr        000001000000000001
-   H2TxBody       2e
-   H2TxHdr        000001000000000001
-   H2TxBody       2e
-   H2TxHdr        000001000000000001
-   H2TxBody       2e
-   H2TxHdr        000001000000000001
-   H2TxBody       2e
-   H2TxHdr        000001000000000001
-   H2TxBody       2e
-   H2TxHdr        000001000000000001
-   H2TxBody       2e
-   H2TxHdr        000001000000000001
-   H2TxBody       2e
-   H2TxHdr        000001000000000001
-   H2TxBody       2e
-   H2TxHdr        000001000000000001
-   H2TxBody       2e
-   H2TxHdr        000001000000000001
-   H2TxBody       2e
-   H2TxHdr        000001000000000001
-   H2TxBody       2e
-   Debug          "H2: No frame (hs=-3)"
-   Debug          "H2 CLEANUP NO_ERROR"
-   Debug          "KILL st=1 state=5"
-   Debug          "ST 0 0"
-   Debug          "ST 1 5"
-   Debug          "KILL st=1 state=6"
-   Debug          "ST 0 0"
-   ReqAcct        36 22 58 972 12 984
-   End
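
As an aside for reading the log above: each H2TxHdr 000001000000000001 entry is the 9-octet HTTP/2 frame header of a 1-byte DATA frame on stream 1 (the layout is from RFC 7540, section 4.1), and the matching H2TxBody 2e is the "." byte sent each second. A minimal decoder sketch, for illustration only (the struct and helper names are mine, not Varnish's):

#include <stdint.h>
#include <stdio.h>

/* Decoded form of a 9-octet HTTP/2 frame header (RFC 7540, section 4.1). */
struct h2_frame_hdr {
	uint32_t	length;		/* 24-bit payload length */
	uint8_t		type;		/* 0x00 = DATA, 0x01 = HEADERS, ... */
	uint8_t		flags;
	uint32_t	stream_id;	/* 31 bits, high bit reserved */
};

static struct h2_frame_hdr
h2_decode_hdr(const uint8_t b[9])
{
	struct h2_frame_hdr h;

	h.length    = (uint32_t)b[0] << 16 | (uint32_t)b[1] << 8 | b[2];
	h.type      = b[3];
	h.flags     = b[4];
	h.stream_id = ((uint32_t)b[5] << 24 | (uint32_t)b[6] << 16 |
	    (uint32_t)b[7] << 8 | b[8]) & 0x7fffffff;
	return (h);
}

int
main(void)
{
	/* The bytes logged as "H2TxHdr 000001000000000001" above. */
	const uint8_t hdr[9] =
	    {0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01};
	struct h2_frame_hdr h = h2_decode_hdr(hdr);

	/* Prints: length=1 type=0 flags=0 stream=1, i.e. a one-byte
	 * DATA frame (the '.') on stream 1, once per second. */
	printf("length=%u type=%u flags=%u stream=%u\n",
	    (unsigned)h.length, (unsigned)h.type,
	    (unsigned)h.flags, (unsigned)h.stream_id);
	return (0);
}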

@Cactusbone
Author

Cactusbone commented Jan 14, 2020

I've edited the previous post with the varnish logs and configuration files to reproduce.

I'll update varnish to the latest version on my test environment :)

After upgrading to 6.3:
http2: closes after 35 sec - VARNISH 6.3

*   << Request  >> 2
-   Begin          req 1 rxreq
-   Timestamp      Start: 1579001069.472266 0.000000 0.000000
-   ReqProtocol    HTTP/2.0
-   Timestamp      Req: 1579001069.472266 0.000000 0.000000
-   VCL_use        boot
-   ReqStart       10.0.2.2 52114 a0
-   ReqMethod      GET
-   ReqURL         /
-   ReqProtocol    HTTP/2.0
-   ReqHeader      scheme: http
-   ReqHeader      host: localhost:8080
-   ReqHeader      user-agent: curl/7.67.0
-   ReqHeader      accept: */*
-   ReqHeader      X-Forwarded-For: 10.0.2.2
-   VCL_call       RECV
-   ReqUnset       host: localhost:8080
-   ReqHeader      host: localhost:8080
-   VCL_return     hash
-   VCL_call       HASH
-   VCL_return     lookup
-   VCL_call       MISS
-   VCL_return     fetch
-   Link           bereq 32769 fetch
-   Timestamp      Fetch: 1579001069.473102 0.000836 0.000836
-   RespProtocol   HTTP/1.1
-   RespStatus     200
-   RespReason     OK
-   RespHeader     Date: Tue, 14 Jan 2020 11:24:29 GMT
-   RespHeader     X-Varnish: 2
-   RespHeader     Age: 0
-   RespHeader     Via: 1.1 varnish (Varnish/6.3)
-   VCL_call       DELIVER
-   VCL_return     deliver
-   Timestamp      Process: 1579001069.473109 0.000843 0.000006
-   Filters
-   RespHeader     Accept-Ranges: bytes
-   RespProtocol   HTTP/2.0
-   Timestamp      Resp: 1579001104.517371 35.045105 35.044262
-   ReqAcct        30 0 30 86 35 121
-   End

*   << Session  >> 1
-   Begin          sess 0 HTTP/1
-   SessOpen       10.0.2.2 52114 a0 10.0.2.15 6081 1579001069.461935 25
-   Link           req 2 rxreq
-   ReqAcct        36 22 58 342 6 348
-   SessClose      RX_JUNK 35.232
-   End

and long after closing:

*   << BeReq    >> 32769
-   Begin          bereq 2 fetch
-   VCL_use        boot
-   Timestamp      Start: 1579001069.472363 0.000000 0.000000
-   BereqMethod    GET
-   BereqURL       /
-   BereqProtocol  HTTP/2.0
-   BereqHeader    scheme: http
-   BereqHeader    user-agent: curl/7.67.0
-   BereqHeader    accept: */*
-   BereqHeader    X-Forwarded-For: 10.0.2.2
-   BereqHeader    host: localhost:8080
-   BereqHeader    Accept-Encoding: gzip
-   BereqProtocol  HTTP/1.1
-   BereqHeader    X-Varnish: 32769
-   VCL_call       BACKEND_FETCH
-   VCL_return     fetch
-   BackendOpen    27 default 127.0.0.1 8080 127.0.0.1 53566
-   Timestamp      Bereq: 1579001069.472484 0.000121 0.000121
-   Timestamp      Beresp: 1579001069.473028 0.000665 0.000543
-   BerespProtocol HTTP/1.1
-   BerespStatus   200
-   BerespReason   OK
-   BerespHeader   Date: Tue, 14 Jan 2020 11:24:29 GMT
-   BerespHeader   Connection: keep-alive
-   BerespHeader   Transfer-Encoding: chunked
-   TTL            RFC 120 10 0 1579001069 1579001069 1579001069 0 0 cacheable
-   VCL_call       BACKEND_RESPONSE
-   VCL_return     deliver
-   Filters
-   Storage        malloc s0
-   Fetch_Body     2 chunked stream
-   BackendReuse   27 default
-   Timestamp      BerespBody: 1579001770.400712 700.928348 700.927683
-   Length         700
-   BereqAcct      160 0 160 108 700 808
-   End

http1.1: (connection lasted the full 700s, well over the 30s timeout) - VARNISH 6.3

*   << Session  >> 65537
-   Begin          sess 0 HTTP/1
-   SessOpen       10.0.2.2 49191 a0 10.0.2.15 6081 1579002110.470414 25
-   Link           req 65538 rxreq
-   VSL            timeout
-   End            synth

*   << BeReq    >> 65539
-   Begin          bereq 65538 fetch
-   VCL_use        boot
-   Timestamp      Start: 1579002110.480830 0.000000 0.000000
-   BereqMethod    GET
-   BereqURL       /
-   BereqProtocol  HTTP/1.1
-   BereqHeader    User-Agent: curl/7.67.0
-   BereqHeader    Accept: */*
-   BereqHeader    X-Forwarded-For: 10.0.2.2
-   BereqHeader    host: localhost:8080
-   BereqHeader    Accept-Encoding: gzip
-   BereqHeader    X-Varnish: 65539
-   VCL_call       BACKEND_FETCH
-   VCL_return     fetch
-   BackendOpen    27 default 127.0.0.1 8080 127.0.0.1 53568
-   Timestamp      Bereq: 1579002110.480926 0.000096 0.000096
-   Timestamp      Beresp: 1579002110.481540 0.000709 0.000613
-   BerespProtocol HTTP/1.1
-   BerespStatus   200
-   BerespReason   OK
-   BerespHeader   Date: Tue, 14 Jan 2020 11:41:50 GMT
-   BerespHeader   Connection: keep-alive
-   BerespHeader   Transfer-Encoding: chunked
-   TTL            RFC 120 10 0 1579002110 1579002110 1579002110 0 0 cacheable
-   VCL_call       BACKEND_RESPONSE
-   VCL_return     deliver
-   Filters
-   Storage        malloc s0
-   Fetch_Body     2 chunked stream
-   BackendReuse   27 default
-   Timestamp      BerespBody: 1579002811.347195 700.866364 700.865655
-   Length         700
-   BereqAcct      146 0 146 108 700 808
-   End

*   << Request  >> 65538
-   Begin          req 65537 rxreq
-   Timestamp      Start: 1579002110.480751 0.000000 0.000000
-   Timestamp      Req: 1579002110.480751 0.000000 0.000000
-   VCL_use        boot
-   ReqStart       10.0.2.2 49191 a0
-   ReqMethod      GET
-   ReqURL         /
-   ReqProtocol    HTTP/1.1
-   ReqHeader      Host: localhost:8080
-   ReqHeader      User-Agent: curl/7.67.0
-   ReqHeader      Accept: */*
-   ReqHeader      X-Forwarded-For: 10.0.2.2
-   VCL_call       RECV
-   ReqUnset       Host: localhost:8080
-   ReqHeader      host: localhost:8080
-   VCL_return     hash
-   VCL_call       HASH
-   VCL_return     lookup
-   VCL_call       MISS
-   VCL_return     fetch
-   Link           bereq 65539 fetch
-   Timestamp      Fetch: 1579002110.481617 0.000866 0.000866
-   RespProtocol   HTTP/1.1
-   RespStatus     200
-   RespReason     OK
-   RespHeader     Date: Tue, 14 Jan 2020 11:41:50 GMT
-   RespHeader     X-Varnish: 65538
-   RespHeader     Age: 0
-   RespHeader     Via: 1.1 varnish (Varnish/6.3)
-   VCL_call       DELIVER
-   VCL_return     deliver
-   Timestamp      Process: 1579002110.481624 0.000873 0.000006
-   Filters
-   RespHeader     Accept-Ranges: bytes
-   RespHeader     Connection: keep-alive
-   RespHeader     Transfer-Encoding: chunked
-   Timestamp      Resp: 1579002811.347431 700.866680 700.865807
-   ReqAcct        78 0 78 188 5605 5793
-   End

@nigoroll
Member

@Cactusbone thank you very much for the detailed information. We should be able to work with that.
I suspect that lowering idle_send_timeout could help you, but I have not got around to looking into the details.

@Cactusbone
Author

@nigoroll idle timeout is set to 30sec by default, but we're sending data every second, so I believe it's not triggered at all. For now we increased send_timeout to 4h, which seems enough for our use case.

@nigoroll
Member

I got a simple VTC reproducing the issue now. Expect a fix soon.

nigoroll added a commit to nigoroll/varnish-cache that referenced this issue Jan 27, 2020
Previously, we checked `v1l->deadline` (which gets initialized from the
`send_timeout` session attribute or parameter) only for short writes, so
for successful "dripping" http1 writes (streaming from a backend busy
object with small chunks), we did not respect the timeout.

This patch restructures `V1L_Flush()` to always check the deadline
before a write by turning a `while() { ... }` into a `do { ... }
while` with the same continuation criteria: `while (i != v1l->liov)`
is turned into `if (i == v1l->liov) break;` and `while (i > 0 || errno
== EWOULDBLOCK)` is kept to retry short writes.

This also reduces the `writev()` call sites to one.

Fixes varnishcache#3189
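
For readers not familiar with the Varnish internals, here is a minimal sketch of the loop shape the commit message describes, with the deadline checked before every writev() and a single writev() call site. The v1l struct, vtim_real() and consume() below are simplified stand-ins named after the commit message; this is illustrative only and not the actual code in cache_http1_line.c.

#include <errno.h>
#include <stddef.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/uio.h>

/* Simplified stand-in for the real v1l state; the field names follow the
 * commit message, everything else here is illustrative only. */
struct v1l_sketch {
	int		fd;		/* client socket */
	struct iovec	*iov;		/* pending iovecs */
	int		niov;		/* number of pending iovecs */
	size_t		liov;		/* total bytes still to be written */
	double		deadline;	/* absolute deadline (s), NAN = none */
	int		error;		/* sticky error indicator */
};

/* Stand-in for Varnish's VTIM_real(): wall-clock time in seconds. */
static double
vtim_real(void)
{
	struct timeval tv;

	gettimeofday(&tv, NULL);
	return (tv.tv_sec + 1e-6 * tv.tv_usec);
}

/* Advance the pending iovec array past n successfully written bytes. */
static void
consume(struct v1l_sketch *v1l, size_t n)
{
	v1l->liov -= n;
	while (n > 0 && n >= v1l->iov->iov_len) {
		n -= v1l->iov->iov_len;
		v1l->iov++;
		v1l->niov--;
	}
	if (n > 0) {
		v1l->iov->iov_base = (char *)v1l->iov->iov_base + n;
		v1l->iov->iov_len -= n;
	}
}

/* Sketch of the restructured flush: a do { ... } while () that checks the
 * deadline before every write and has exactly one writev() call site.
 * "if (everything written) break;" replaces the old while-condition, and
 * the short-write retry criteria stays in the while () at the bottom. */
static void
flush_sketch(struct v1l_sketch *v1l)
{
	ssize_t i;

	do {
		/* A NAN deadline compares false here, i.e. no timeout. */
		if (vtim_real() > v1l->deadline) {
			v1l->error = ETIMEDOUT;	/* send_timeout reached */
			return;
		}
		i = writev(v1l->fd, v1l->iov, v1l->niov);
		if (i > 0)
			consume(v1l, (size_t)i);
		if (v1l->liov == 0)
			break;			/* everything written */
	} while (i > 0 || errno == EWOULDBLOCK);

	if (v1l->liov != 0)
		v1l->error = errno;		/* unrecovered write error */
}

The point of the restructure is that a "dripping" delivery, where every writev() succeeds, still passes the deadline check once per write, so send_timeout now applies even when no write is ever short.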
nigoroll removed the a=has_PR label Feb 6, 2020
nigoroll added a commit to nigoroll/varnish-cache that referenced this issue Mar 3, 2020
As @Dridi and I concluded, the send_timeout had no effect on
backend connections anyway because we never set SO_SNDTIMEO (aka
idle_send_timeout on the client side) on backend connections.

With the previous commit, we fix the send_timeout on the client side and
thus would also enable it for "dripping" writes on the backend side.

To preserve existing behavior for the time being, we explicitly disable
the timeout (actually deadline) on the backend side. There is ongoing
work in progress to rework all of our timeouts for 7.x.

Implementation note: `if (VTIM_real() > v1l->deadline)` evaluates to false
for `v1l->deadline == NaN`.

Ref varnishcache#3189
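
To illustrate the implementation note: ordered comparisons against NaN are always false in IEEE 754 / C, so a NaN deadline means the timeout branch is simply never taken. A tiny standalone example (not Varnish code):

#include <math.h>
#include <stdio.h>

int
main(void)
{
	double now = 1583222400.0;	/* any wall-clock timestamp */
	double deadline = NAN;		/* "no deadline" sentinel */

	/* Every ordered comparison involving NaN is false ... */
	printf("now > deadline: %d\n", now > deadline);	/* prints 0 */
	printf("now < deadline: %d\n", now < deadline);	/* prints 0 */

	/* ... so a NaN deadline never triggers the timeout branch. */
	if (now > deadline)
		puts("timeout");	/* never reached */
	else
		puts("no deadline set, keep writing");
	return (0);
}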