content-encoding -- suspected intermittent double-gzip encoding on css/js #1362
Comments
Trying to reproduce this by removing files from the cache, per our offline discussion. First fetch the page, which pulls the file into the on-disk cache. Then go into the on-disk cache and remove the cached (gzipped) minified file and some of the files that make it up: `$ rm A.yellow.css+blue.css+big.css+bold.css,2CMcc.xo4He3_gYf.css.pagespeed.cf.3Ea3akSdRD.css yellow.css+blue.css+big.css+bold.css.pagespeed.cc.xo4He3_gYf.css big.css`. Fetch the page/resource again, and it is still not corrupted. |
How are you doing the fetch? wget? What line? Did you disable any L1 caches?
|
Also be sure to set a breakpoint in RewriteDriver::Clone, which should be […]. During clone, do we have accept-encoding:gzip on this->request_headers, and on the clone's? Setting a breakpoint on the transition from the nested reconstruction […]
|
I tried with firefox, chrome, and curl (not sending the accept-encoding header for curl), as well as combinations of them: just fetching the resource, or visiting the page in one of the browsers. This is testing the master config in debug.conf.template, which doesn't have the L1 cache enabled. One thing that makes me think that it might not be this issue is the comment here |
@jmarantz yes, both the clone and original
|
That shouldn't matter, though, since filters should be using […]
|
Right; my theory is that we should strip the accept-encoding:gzip in […]. However, it'd be good to convince ourselves that this is the problem, and […]
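The proposed fix amounts to normalizing the cloned request's headers before the nested fetch. PageSpeed's actual implementation is C++; the following is only a language-neutral sketch of the idea (the function name and dict representation are illustrative, not PageSpeed APIs):

```python
def strip_accept_encoding(headers: dict) -> dict:
    """Drop Accept-Encoding so a nested fetch receives identity-encoded bytes.

    If the inner fetch never advertises `Accept-Encoding: gzip`, the origin
    cannot hand back a pre-gzipped body that would then be cached and
    compressed a second time on the way out.
    """
    return {k: v for k, v in headers.items() if k.lower() != "accept-encoding"}

cloned = strip_accept_encoding({"Accept-Encoding": "gzip", "Host": "example.com"})
assert cloned == {"Host": "example.com"}
```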
|
OK, as promised, a simple test case to reproduce the problem.

Apache: […]
Pagespeed: […]

PHP script (test.php):

```php
<?php
header("Connection: close\r\n");
header("Cache-Control: max-age=86400");
header("Pragma: ", true);
header("Content-Type: text/javascript");
header("Expires: " . gmdate("D, d M Y H:i:s", time() + 86400) . " GMT");
$data = str_pad("", 2000, uniqid());
$output = "\x1f\x8b\x08\x00\x00\x00\x00\x00" .
    substr(gzcompress($data, 2), 0, -4) .
    pack('V', crc32($data)) .
    pack('V', mb_strlen($data, "latin1"));
header("Content-Encoding: gzip\r\n");
header("Content-Length: " . mb_strlen($output, "latin1"));
echo $output;
```

It does not matter whether you're using FastCGI or mod_php for this script.

Description: the problem happens as soon as you enable the Apache module with […]

First request: […]

Second and all further requests: […]

Let me know if I can do anything further for you. |
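For readers unfamiliar with the trick in the script above: it hand-assembles a gzip member (header, raw DEFLATE body, CRC32 + uncompressed-length trailer) so the origin always serves a pre-gzipped body with `Content-Encoding: gzip`. The same framing can be sketched with the Python standard library (this is an illustration of the format, not part of the original repro):

```python
import gzip
import struct
import zlib

data = b"x" * 2000

# 10-byte gzip header: magic, CM=8 (deflate), no flags, MTIME=0, XFL=0, OS=3 (Unix).
header = b"\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x03"

# zlib.compress() emits a zlib stream; strip its 2-byte header and
# 4-byte Adler-32 trailer to get the raw DEFLATE body gzip expects.
body = zlib.compress(data, 2)[2:-4]

# gzip trailer: CRC32 of the uncompressed data, then its length, both uint32 LE
# (the same fields the PHP script builds with pack('V', ...)).
trailer = struct.pack("<II", zlib.crc32(data) & 0xFFFFFFFF, len(data) & 0xFFFFFFFF)

output = header + body + trailer
# A correctly framed member round-trips through the stdlib decoder.
assert gzip.decompress(output) == data
```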
@crowell sorry, forgot to mention you :) #1362 (comment) |
This is a major issue for a lot of other systems gzipping files, the e-learning system Moodle for example. |
Bernhard: thanks for the repro! I'm not 100% sure this is the same bug, […] I think the issue we were seeing is that we were sending content-encoding, […]

Lennart: I'm not sure I follow; are you just saying that systems other than […]?
|
@jmarantz no, I'm saying this bug does cause malfunctions in Moodle and other PHP software as well (besides Pimcore). The latest Moodle 3.x also does not work with pagespeed because the content is double gzipped. |
I don't think it's double gzipped (see my description above), the problem is only a missing header. :) |
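That matches the "strange chars" reports elsewhere in this thread: the bytes on the wire can be perfectly good gzip, but without a `Content-Encoding: gzip` header the client renders the compressed bytes as if they were the stylesheet itself. A small illustration (not taken from the original reports):

```python
import gzip

css = b"body { color: #333; }"
wire_body = gzip.compress(css)

# With the header present, the client inflates and sees the CSS:
assert gzip.decompress(wire_body) == css

# Without `Content-Encoding: gzip`, the client treats the compressed bytes
# as text -- the garbage characters users reported in this thread.
garbled = wire_body.decode("latin-1")
assert garbled != css.decode("latin-1")
assert garbled.startswith("\x1f\x8b")  # the gzip magic shows up in the "CSS"
```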
I can reproduce it here. thanks! Trying to find why the content-encoding header seems to disappear in this case now, hope to have a fix soon. |
Bernhard -- the point I'm making is that this bug references some users who […]

Lennart: thanks!! I wonder if you could give more details on how to […]
|
@jmarantz I think Bernhard's test case covers the issue for Moodle, Pimcore, and other CMSes (one of our Drupal installations has the same issues after updating pagespeed) just fine. They all output gzipped css/js files and display strange chars if mod_pagespeed is active (and not completely disabled with a2dismod). |
@jmarantz investigating with the debugger, it does seem that it is "double" gzipped. It gets extracted the first time in […] |
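For anyone else debugging this, one externally observable signature of the failure mode (an illustrative helper, not PageSpeed code): a gzip response whose decompressed payload itself begins with the gzip magic bytes has been encoded twice.

```python
import gzip

GZIP_MAGIC = b"\x1f\x8b"

def is_double_gzipped(body: bytes) -> bool:
    """Return True if `body` is a gzip stream whose payload is itself gzip."""
    if not body.startswith(GZIP_MAGIC):
        return False
    return gzip.decompress(body).startswith(GZIP_MAGIC)

once = gzip.compress(b"body { color: red }")
twice = gzip.compress(once)
assert not is_double_gzipped(once)
assert is_double_gzipped(twice)
```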
Thanks everyone for the repro. Hopefully we can get this resolved quickly and put out a fix with our next patch. |
I am testing out mod_pagespeed (1.11.33.2-0) behind an AWS ELB with a fairly default core-filters test server, and occasionally getting "ERR_CONTENT_DECODING_FAILED" on one of my css files near startup.

I see "double" gzipped in the above comments, but for me, when the problem happens, the css file has the header "Content-Encoding: gzip" but the Content-Length header is the full file size (134kb), not the zipped size, which makes me think there is no zipping going on. This file also says "max-age=300,private" when all the other css files are marked public. On the next refresh the file works in the browser: it contains the compressed size and is marked public.

I originally got this using the combine css filter, and when I disable combine css, I consistently get it only on my one big file. I have no other interesting modules that I know of running. I noticed this problem when I was using CloudFront in front of the static file, but KeyCDN also did the same thing. The file is served from my application server (Play Framework), not using ModPagespeedLoadFromFile.

I am new to pagespeed and just guessing that, because this only happens on my one big css file, it may be that if it misses the flush window it is being sent out incorrectly marked as gzip when it really isn't. Is this the same issue? |
Enalmada: I think this is indeed the bug, and the good news is that I think we are getting more confident in a strategy to fix it. Sorry your initial experience with PageSpeed ran into this trouble. Out of curiosity, does setting ModPagespeedHttpCacheCompressionLevel 0 avoid the problem? |
I have not seen the error since I last enabled the setting, thanks! Sorry for not trying that right away. |
I believe we are running into this issue as well on our CSS files, intermittently. We are running 1.11.33.4-beta:

curl: (18) transfer closed with 313 bytes remaining to read

It looks like the server is sending an invalid Content-Length header and the file transfer was shorter or larger than expected. This happens when the server first reports an expected transfer size, and then delivers data that doesn't match the previously given size.

nginx version: nginx/1.10.3

We have no easy way to reproduce this, but I just happened to find this thread. Was there ever a fix for it? For now we have added

pagespeed HttpCacheCompressionLevel 0;

and will see if it happens again. |
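The curl error above can be restated as a simple header/body mismatch check; a small sketch (header parsing only, no network; the helper name is mine, not a PageSpeed or curl API):

```python
def length_mismatch(headers: dict, body: bytes) -> int:
    """Return advertised-minus-actual byte count.

    A nonzero result means the Content-Length header does not match the
    bytes actually delivered, which is exactly what makes curl abort with
    error 18 ("transfer closed with N bytes remaining to read").
    """
    advertised = int(headers.get("Content-Length", len(body)))
    return advertised - len(body)

# Mirrors the report above: 313 bytes were promised but never delivered.
assert length_mismatch({"Content-Length": "1000"}, b"x" * 687) == 313
assert length_mismatch({"Content-Length": "687"}, b"x" * 687) == 0
```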
Please also try updating to PageSpeed 1.12, which is now our stable release.
|
Thanks for the quick response - is what I am describing the bug in this thread? Happy to schedule a time to upgrade our infrastructure, but to be honest I did not see anything significant in the changelog that compelled us to do so. Is this bug addressed in any way? https://modpagespeed.com/doc/release_notes is very light on any bug fixes. Should I look elsewhere for a more comprehensive list of fixes? |
I feel like this particular issue manifests itself in multiple ways in both
Apache & nginx. I don't think even in 1.12 we've addressed all of them.
But I think there are some Nginx-specific failures that were addressed
between 1.11 and 1.12 that you might be running into.
I can't guarantee, however, that 1.12 will fix the symptom you are seeing.
Unfortunately after considerable effort, we were not able to reproduce the
issues. But because you are on nginx, I think it is worth giving it a shot.
|
Thanks - will go ahead and schedule a rollout of 1.12 soon. For now, curious: how much success has pagespeed HttpCacheCompressionLevel 0; had for people? We just applied it to a server to test and see if the issue goes away. Also, are there any adverse effects we should be aware of from this? The documentation is light on this feature, and nginx is already doing gzip for us, so what are we really losing by changing it from 9 to 0? |
Several people reported success with that workaround, but I confess I don't
fully understand the failure mode.
The advantage of PageSpeed doing the gzip is that (a) it can be more
aggressive (-9) because it caches the result and (b) your server's gzip
module does not have to gzip on the fly for every request and (c) PageSpeed
makes better use of its physical cache by storing compressed assets there.
So it seems to me like a pretty good thing, and a shame to lose the benefit.
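The level-9 versus level-0 trade-off described above is easy to see with zlib directly; an illustrative sketch (the exact numbers vary with content, but the direction does not):

```python
import zlib

css = b"body { margin: 0; }\n" * 500  # highly repetitive, like real CSS

stored = len(zlib.compress(css, 0))  # level 0: stored blocks, no compression
best = len(zlib.compress(css, 9))    # level 9: what PageSpeed would cache

# Level 0 output is slightly *larger* than the input (block framing
# overhead), while level 9 shrinks repetitive text dramatically. Paying
# the level-9 cost once per cache entry is the optimization being lost.
assert stored >= len(css)
assert best < len(css) // 10
```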
|
Understood. If you are interested in working together to diagnose the issue I am certainly happy to help look into it. Let me know how I can help. Otherwise, thanks for the quick responses. I will update you in a week or so after running it with the option set to '0' to see if we have any other reports of the issue. |
I was curious if this was fixed so I just changed it from 0 to 9 using the latest version (stable-1.12.34.2-0) and users immediately started reporting the same original problem (messed up looking page due to css files not loading). Also available to do anything I can to help diagnose. |
As a follow up - our issues completely disappeared when we set it to 0 with
no noticeable performance impact.
|
I found a diff file that resulted from some research into this quite some time ago, from which I got distracted. I dumped the state of the (small) change here so I won't forget it again: bbc9337. The diff is potentially in the space related to this issue. It may be worth having a look at it again, to see if we need such a change later on, and if so, whether anyone can confirm it being related to this or not. |
Thanks! I have a question, and two requests:

In case b), where you have "gzip off;", is that explicitly configured in nginx.conf? The reason I am asking is that if you leave it out completely, ngx_pagespeed may actually turn it on.

Also:
- Could you test "gzip off;" combined with HttpCacheCompressionLevel 0?
- Could you test "gzip off;" combined with HttpCacheCompressionLevel 9?

Otto
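The two requested combinations, as nginx config fragments (placement within the server block and the directive spelling follow the workaround quoted earlier in this thread; illustrative, not a complete config):

```nginx
# Combination 1: nginx gzip explicitly disabled, PageSpeed cache compression off.
gzip off;
pagespeed HttpCacheCompressionLevel 0;

# Combination 2: nginx gzip explicitly disabled, PageSpeed cache compression at maximum.
gzip off;
pagespeed HttpCacheCompressionLevel 9;
```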
On Mon, Nov 13, 2017, Felix Oertel wrote:

I am not entirely sure that helps, but I stumbled upon this and wanted to provide some input. Running nginx 1.13.6 with pagespeed 1.12.34.3-0.

Case a) Enable gzip in nginx, do not set HttpCacheCompressionLevel in the pagespeed config:
a.1) svg images not displayed because of encoding issues
a.2) ?PageSpeed=off - svg images displayed correctly

Case b) Disable gzip (= off) in nginx, do not set HttpCacheCompressionLevel in the pagespeed config:
same as case a)

Case c) Enable gzip in nginx, set HttpCacheCompressionLevel to something (=0 or =9, does not matter):
svg images displayed correctly in .1 and .2

If I can help any further by testing, please let me know.
|
Sorry Otto, I deleted my post in the meantime, because I got the values mixed up while testing. My bad.

- "gzip off;" combined with HttpCacheCompressionLevel 0: this works.
- "gzip off;" combined with HttpCacheCompressionLevel 9: this also works.

In both cases the SVG comes without the gzip header. |
Hi Felix, is there a proxy cache or CDN in your setup?
|
Nope, in this environment there is only the nginx in a docker container, exposing its port directly via the host. |
If it helps, this problem was only solved on my site when I removed the gzip handling from my Yoo Themes theme. Even with HttpCacheCompressionLevel set not to compress (0), I was still getting css files with weird characters. |
A few people have reported on -discuss mailing lists that mod_pagespeed breaks the content-encoding for some static files, e.g.: https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!topic/mod-pagespeed-discuss/Mgm6ZbAeJms
The OP confirms that disabling mod_pagespeed's compressed cache avoids the problem:
ModPagespeedHttpCacheCompressionLevel 0
However, that defeats an important optimization, so we'd like to find the root cause of this problem and fix it.
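For nginx builds, the equivalent workaround (as reported by commenters in this thread) is:

```nginx
pagespeed HttpCacheCompressionLevel 0;
```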