
Content-Encoding header missing with reverse-proxied chunked response #4314

Closed
ueffel opened this issue Aug 30, 2021 · 7 comments · Fixed by #4318
Labels: bug 🐞 Something isn't working
Milestone: v2.4.5

ueffel (Contributor) commented Aug 30, 2021

Affected version: 2.4.4

I noticed that the Content-Encoding header is missing on reverse-proxied responses when they are large enough to be transferred in chunks. The response body is encoded, but the header isn't sent, which results in a garbled mess.

Example Caddyfile:

http://:80 {
	encode zstd gzip
	reverse_proxy 127.0.0.1:8080
}

http://:8080 {
	respond / "Long enough response, that would get compressed by some of the encoder modules and transferred chunked.
Long enough response, that would get compressed by some of the encoder modules and transferred chunked.
Long enough response, that would get compressed by some of the encoder modules and transferred chunked.
Long enough response, that would get compressed by some of the encoder modules and transferred chunked.
Long enough response, that would get compressed by some of the encoder modules and transferred chunked.
Long enough response, that would get compressed by some of the encoder modules and transferred chunked.
Long enough response, that would get compressed by some of the encoder modules and transferred chunked.
Long enough response, that would get compressed by some of the encoder modules and transferred chunked.
Long enough response, that would get compressed by some of the encoder modules and transferred chunked.
Long enough response, that would get compressed by some of the encoder modules and transferred chunked.
Long enough response, that would get compressed by some of the encoder modules and transferred chunked.
Long enough response, that would get compressed by some of the encoder modules and transferred chunked.
Long enough response, that would get compressed by some of the encoder modules and transferred chunked.
Long enough response, that would get compressed by some of the encoder modules and transferred chunked.
Long enough response, that would get compressed by some of the encoder modules and transferred chunked.
Long enough response, that would get compressed by some of the encoder modules and transferred chunked.
Long enough response, that would get compressed by some of the encoder modules and transferred chunked.
Long enough response, that would get compressed by some of the encoder modules and transferred chunked.
Long enough response, that would get compressed by some of the encoder modules and transferred chunked.
Long enough response, that would get compressed by some of the encoder modules and transferred chunked.
"
	header Content-Type text/plain;charset=utf8
}

Request / Response:

$ curl -v --compressed -o /dev/null -s localhost
*   Trying 127.0.0.1:80...
* Connected to localhost (127.0.0.1) port 80 (#0)
> GET / HTTP/1.1
> Host: localhost
> User-Agent: curl/7.78.0
> Accept: */*
> Accept-Encoding: deflate, gzip, br, zstd
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: text/plain;charset=utf8
< Date: Mon, 30 Aug 2021 22:38:35 GMT
< Server: Caddy
< Server: Caddy
< Transfer-Encoding: chunked
<
{ [113 bytes data]
* Connection #0 to host localhost left intact

If the respond directive's body is one line shorter, the request/response is fine:

$ curl -v --compressed -o /dev/null -s localhost

*   Trying 127.0.0.1:80...
* Connected to localhost (127.0.0.1) port 80 (#0)
> GET / HTTP/1.1
> Host: localhost
> User-Agent: curl/7.78.0
> Accept: */*
> Accept-Encoding: deflate, gzip, br, zstd
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Encoding: zstd
< Content-Type: text/plain;charset=utf8
< Date: Mon, 30 Aug 2021 22:38:08 GMT
< Server: Caddy
< Server: Caddy
< Vary: Accept-Encoding
< Content-Length: 102
<
{ [102 bytes data]
* Connection #0 to host localhost left intact

I think the following lines are responsible:
e6c29ce#diff-287933cf508d4f4a33e4e0ac918639f44b9967505681131a29c82d608c2dfe57R152-R156
If I comment them out, everything is fine.

Other headers don't seem to be affected, but I'm not sure.

Maybe related to #3997

For the record with version 2.4.3 everything is fine too.

mholt (Member) commented Aug 30, 2021

I think it's more related to #4266. /cc @francislavoie

BBaoVanC commented:

I experienced the same on Debian 11 using the official Caddy repo. I had to downgrade to 2.4.3.

francislavoie (Member) commented:

Thanks for the replication case @ueffel, much appreciated.

Running with debug mode on, we do see this warning:

2021/08/31 01:40:14.992	DEBUG	http.stdlib	http: superfluous response.WriteHeader call from github.com/caddyserver/caddy/v2/modules/caddyhttp.(*responseRecorder).WriteHeader (responsewriter.go:168)

So I think the bug is actually on that line, but it was only revealed by the recent changes that cause immediate flushing in certain cases:

// if not buffered, immediately write header
if rr.stream {
	rr.ResponseWriterWrapper.WriteHeader(rr.statusCode)
}

francislavoie added the bug 🐞 Something isn't working label Aug 31, 2021
francislavoie added this to the v2.4.5 milestone Aug 31, 2021
francislavoie (Member) commented:

The gist of it is that multiple calls to WriteHeader are happening: because we wrap the ResponseWriter multiple times, the wrapped response writers don't tell each other whether the status header has already been written.
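
To make the failure mode concrete, here's a minimal standalone sketch (not Caddy's actual code; the wrapper type is hypothetical) of two nested ResponseWriter wrappers that each track header state independently, which is enough to trigger the stdlib warning above:

package main

import "net/http"

// wrapper is a hypothetical http.ResponseWriter wrapper; each instance
// tracks wroteHeader for itself only, with no shared state.
type wrapper struct {
	http.ResponseWriter
	wroteHeader bool
}

func (w *wrapper) WriteHeader(code int) {
	// This wrapper cannot know whether another layer already wrote the
	// header, so a second call reaches net/http, which logs
	// "http: superfluous response.WriteHeader call".
	w.wroteHeader = true
	w.ResponseWriter.WriteHeader(code)
}

func main() {
	http.HandleFunc("/", func(rw http.ResponseWriter, r *http.Request) {
		outer := &wrapper{ResponseWriter: rw}
		inner := &wrapper{ResponseWriter: outer}
		outer.WriteHeader(http.StatusOK) // e.g. an eager flush in one layer
		inner.WriteHeader(http.StatusOK) // superfluous: the warning is logged
		inner.Write([]byte("hello\n"))
	})
	http.ListenAndServe(":8080", nil)
}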

The change in e6c29ce#diff-287933cf508d4f4a33e4e0ac918639f44b9967505681131a29c82d608c2dfe57R152-R156 was necessary to fix a different bug (it was taken from the Go stdlib), but fixing that caused a different bug elsewhere.

I did a bit of tracing with fmt.Printf() and debug.Stack() to try to track it down, but I don't have a fix yet, and I'm not sure I'll be able to figure out the right way to fix it.

Key places to look at and add some debug statements to track it down:

  • encode.go's WriteHeader and Write

  • reverseproxy.go's rw.WriteHeader(res.StatusCode)

  • responsewriter.go's WriteHeader and Write, and the call to rr.ResponseWriterWrapper.Write

  • Add some prints before/after the write calls throughout to see the order of operations (see the sketch after this list).

  • Turn on debug mode to see the error from the stdlib that is otherwise suppressed.

  • Split the Caddyfile from OP into two running Caddy instances so that the writes from the reverse_proxy instance can be isolated from the ones in the respond site.
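
Something like this hypothetical tracingWriter (not a Caddy type; purely illustrative instrumentation) could be interposed at each of those layers to see which one writes the header first:

package trace

import (
	"fmt"
	"net/http"
	"runtime/debug"
)

// tracingWriter logs every WriteHeader and Write, including a stack trace
// for WriteHeader so the call site is visible in the output.
type tracingWriter struct {
	http.ResponseWriter
	name string // label for the layer being traced
}

func (tw *tracingWriter) WriteHeader(status int) {
	fmt.Printf("[%s] WriteHeader(%d) from:\n%s\n", tw.name, status, debug.Stack())
	tw.ResponseWriter.WriteHeader(status)
}

func (tw *tracingWriter) Write(p []byte) (int, error) {
	fmt.Printf("[%s] Write(%d bytes)\n", tw.name, len(p))
	return tw.ResponseWriter.Write(p)
}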

ueffel (Contributor, Author) commented Aug 31, 2021

I think a premature Flush is the problem. If the encode module just passes it through to the underlying ResponseWriter, the headers and status code will be written. But the encode module delays the decision whether it has to encode until the first Write call (probably because of the MinimumLength). So if a Flush has already come through, it's too late to add the Content-Encoding header.

I think ignoring all Flush calls until the encode module's first Write seems reasonable:

diff --git a/modules/caddyhttp/encode/encode.go b/modules/caddyhttp/encode/encode.go
index a7abb3f90d3ec510249415de71cb3a697bc2632b..e7294f35f0fe4f94663c4e06cbb2ff6e895c09c6 100644
--- a/modules/caddyhttp/encode/encode.go
+++ b/modules/caddyhttp/encode/encode.go
@@ -182,6 +182,7 @@ type responseWriter struct {
 	buf          *bytes.Buffer
 	config       *Encode
 	statusCode   int
+	wroteHeader  bool
 }
 
 // WriteHeader stores the status to write when the time comes
@@ -195,6 +196,17 @@ func (enc *Encode) Match(rw *responseWriter) bool {
 	return enc.Matcher.Match(rw.statusCode, rw.Header())
 }
 
+func (rw *responseWriter) Flush() {
+	if !rw.wroteHeader {
+		// flushing the underlying ResponseWriter will write header and status code
+		// but we need to delay that until we can determine if we must encode and
+		// therefore send the Content-Encoding header
+		// this happens in the first call to rw.Write
+		return
+	}
+	rw.ResponseWriterWrapper.Flush()
+}
+
 // Write writes to the response. If the response qualifies,
 // it is encoded using the encoder, which is initialized
 // if not done so already.
@@ -225,6 +237,7 @@ func (rw *responseWriter) Write(p []byte) (int, error) {
 	if rw.statusCode > 0 {
 		rw.ResponseWriter.WriteHeader(rw.statusCode)
 		rw.statusCode = 0
+		rw.wroteHeader = true
 	}
 
 	switch {
@@ -271,6 +284,7 @@ func (rw *responseWriter) Close() error {
 		// that rely on If-None-Match, for example
 		rw.ResponseWriter.WriteHeader(rw.statusCode)
 		rw.statusCode = 0
+		rw.wroteHeader = true
 	}
 	if rw.w != nil {
 		err2 := rw.w.Close()
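
(With this change, a Flush that arrives before the first Write is simply dropped, which should be safe: no body bytes have been written at that point, and once the first Write has written the header, Flush passes through as before.)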

francislavoie (Member) commented:

Yeah, that sounds right to me! Thanks for looking into it @ueffel, very much appreciated! If you'd like to make a PR with that, that'd be great (and if you have time, an integration test to cover this would be great as well).
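
For the integration test, a rough sketch along these lines might work (this assumes the caddytest helpers; the config, port, and body are illustrative, and a single-server setup only approximates the chunked reverse-proxy case from the OP):

package integration

import (
	"net/http"
	"strings"
	"testing"

	"github.com/caddyserver/caddy/v2/caddytest"
)

func TestEncodeSetsContentEncoding(t *testing.T) {
	tester := caddytest.NewTester(t)
	// Body long enough to exceed the encoders' minimum length,
	// so the response actually gets compressed.
	body := strings.Repeat("Long enough response to be compressed. ", 50)
	tester.InitServer(`
	{
		admin localhost:2999
		http_port 9080
	}
	localhost:9080 {
		encode gzip
		respond "`+body+`"
	}
	`, "caddyfile")

	req, err := http.NewRequest(http.MethodGet, "http://localhost:9080/", nil)
	if err != nil {
		t.Fatal(err)
	}
	req.Header.Set("Accept-Encoding", "gzip")

	resp := tester.AssertResponseCode(req, http.StatusOK)
	defer resp.Body.Close()
	if got := resp.Header.Get("Content-Encoding"); got != "gzip" {
		t.Errorf("expected Content-Encoding: gzip, got %q", got)
	}
}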

mholt added a commit that referenced this issue Aug 31, 2021
* encode: ignore flushing until after first write (fix #4314)

The first write determines whether encoding has to be done and adds a Content-Encoding header. Until then, flushing has to be delayed so that the Content-Encoding header can be added before the headers and status code are written. (A passthrough flush would write the header and status code.)

* Update modules/caddyhttp/encode/encode.go

Co-authored-by: Matt Holt <mholt@users.noreply.github.com>

mholt (Member) commented Aug 31, 2021

Thank you for the investigation, both of you, and for the quick fix! Tricky tricky. Web servers are hard.
