
content-encoding -- suspected intermittent double-gzip encoding on css/js #1362

Open
jmarantz opened this issue Jul 25, 2016 · 37 comments

@jmarantz
Contributor

A few people have reported on -discuss mailing lists that mod_pagespeed breaks the content-encoding for some static files, e.g.: https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!topic/mod-pagespeed-discuss/Mgm6ZbAeJms

The OP confirms that disabling mod_pagespeed's compressed cache avoids the problem:
ModPagespeedHttpCacheCompressionLevel 0

However that defeats an important optimization, so we'd like to find the root cause of this problem and fix it.
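A quick way to check for this from the outside, assuming you have the URL of an affected resource in $URL (a placeholder; substitute your own): if a single gunzip pass still yields gzip data, the body was double-encoded.

    # fetch the raw (compressed) body and try a single decode
    curl -s -H 'Accept-Encoding: gzip' "$URL" -o body.gz
    gunzip -c body.gz | file -   # "gzip compressed data" here implies double-encoding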

@jmarantz jmarantz assigned jmarantz and crowell and unassigned jmarantz Jul 25, 2016

@crowell
Contributor

crowell commented Jul 26, 2016

trying to reproduce this by removing files from the cache per our offline discussion.

First fetching http://localhost:8080/mod_pagespeed_example/combine_css.html?PageSpeed=on&PageSpeedFilters=combine_css,rewrite_css

which pulls in the file http://localhost:8080/mod_pagespeed_example/styles/A.yellow.css+blue.css+big.css+bold.css,Mcc.xo4He3_gYf.css.pagespeed.cf.3Ea3akSdRD.css

Go into the on-disk cache and remove the cached (gzipped) minified file and some of the files that make it up

$ rm A.yellow.css+blue.css+big.css+bold.css,2CMcc.xo4He3_gYf.css.pagespeed.cf.3Ea3akSdRD.css, yellow.css+blue.css+big.css+bold.css.pagespeed.cc.xo4He3_gYf.css, big.css,

fetch the page/resource again, and it is still not corrupted.
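For reference, one way to locate the corresponding cache entries, assuming the file cache lives at the path configured via ModPagespeedFileCachePath (the path below is just a placeholder):

    # list cache entries for the combined stylesheet and its inputs
    find /var/cache/mod_pagespeed -name '*3Ea3akSdRD*' -o -name '*xo4He3_gYf*'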

@jmarantz
Contributor Author

How are you doing the fetch? wget? What line?

did you disable any L1 caches?


@jmarantz
Contributor Author

Also be sure to set a breakpoint in RewriteDriver::Clone, which should be waking up on the nested resource reconstruction. Hopefully you'll be able to see the problem in the debugger, or see why it doesn't occur.

During clone, do we have accept-encoding:gzip on this->request_headers and clone->request_headers?

Setting a breakpoint on the transition from the nested reconstruction (combine_css) to the outer reconstruction (rewrite_css), maybe you'll see gzipped content.
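Roughly like this (a sketch; exact symbol and variable names may differ in your build):

    (gdb) b RewriteDriver::Clone
    (gdb) run
    # once stopped in Clone:
    (gdb) p this->request_headers_->ToString().c_str()
    (gdb) p result->request_headers_->ToString().c_str()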


@crowell
Contributor

crowell commented Jul 26, 2016

I tried with firefox, chrome, and curl (not sending the accept-encoding header for curl), as well as combinations of them.

just curl http://localhost:8080/mod_pagespeed_example/styles/A.yellow.css+blue.css+big.css+bold.css,Mcc.xo4He3_gYf.css.pagespeed.cf.3Ea3akSdRD.css

or visiting the page in one of the browsers.

This is testing the master config in the debug.conf.template, which doesn't have L1 cache enabled.

One thing that makes me think it might not be this issue is that the comment here,
pimcore/pimcore#762 (comment), states that just unzipping the file gives a valid css file. If a non-gzipped file were getting combined with a gzipped one treated as text, then only part of it would be garbage, no?
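Something like this, where the readable prefix survives and only the tail turns to garbage:

    # illustration: concatenating a plain file with a gzipped one
    printf 'a{color:red}' > plain.css
    printf 'b{color:blue}' | gzip -c > other.css.gz
    cat plain.css other.css.gz   # readable CSS followed by binary garbage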

@crowell
Contributor

crowell commented Jul 26, 2016

@jmarantz yes, both the clone and original RewriteDrivers have the same Accept-Encoding headers.

(gdb) p this->request_headers_->ToString().c_str()
$8 = 0x7fffe000aa20 "GET  HTTP/1.1\r\nHost: localhost:8080\r\nConnection: keep-alive\r\nUpgrade-Insecure-Requests: 1\r\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.106 Safari/537.36\r\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8\r\nAccept-Encoding: gzip, deflate, sdch\r\nAccept-Language: en-US,en;q=0.8\r\nCookie: _ga_psi_internal=GA1.1.712307566.1459367749\r\n\r\n"
(gdb) p result->request_headers_->ToString().c_str()
$9 = 0x7fffe000ac30 "GET  HTTP/1.1\r\nHost: localhost:8080\r\nConnection: keep-alive\r\nUpgrade-Insecure-Requests: 1\r\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.106 Safari/537.36\r\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8\r\nAccept-Encoding: gzip, deflate, sdch\r\nAccept-Language: en-US,en;q=0.8\r\nCookie: _ga_psi_internal=GA1.1.712307566.1459367749\r\n\r\n"

@morlovich
Contributor

That shouldn't matter, though, since filters should be using Resource::ExtractUncompressedContents(), no? Hmm, maybe something funny happens in one of the fallback paths? We could conceivably forward the bits but not forward the headers correctly.


@jmarantz
Contributor Author

Right; my theory is that we should strip the accept-encoding:gzip in RewriteDriver::Clone just as we are stripping Via:1.1 Google.

However it'd be good to convince ourselves that this is the problem, and robust use of Resource::ExtractUncompressedContents() would tend to suggest maybe not. But it seems like we should dive a bit deeper into that.
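To make the suspected failure mode concrete, here's what double-encoding looks like from the client side (plain shell, just an illustration):

    printf 'body{color:red}' | gzip -c | gzip -c > twice.gz
    gunzip -c twice.gz | file -      # still gzip data after one decode
    gunzip -c twice.gz | gunzip -c   # two decodes recover the CSS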


@brusch

brusch commented Jul 27, 2016

Ok, as promised a simple test case to reproduce the problem.

Apache

Server version: Apache/2.4.10 (Debian)
Server built:   Jul 20 2016 06:48:18

Pagespeed

X-Mod-Pagespeed:1.11.33.2-0

PHP Script (test.php)

<?php

header("Connection: close\r\n");
header("Cache-Control: max-age=86400");
header("Pragma: ", true);
header("Content-Type: text/javascript");
header("Expires: " . gmdate("D, d M Y H:i:s", time() + 86400) . " GMT");

$data = str_pad("", 2000, uniqid());

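// Hand-build a gzip stream: an 8-byte gzip header, the zlib output of
// gzcompress() minus its 4-byte Adler-32 trailer, then the gzip trailer
// (CRC32 and uncompressed length). The 2-byte zlib header lands in the
// gzip header's XFL/OS slots, which decoders ignore.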
$output = "\x1f\x8b\x08\x00\x00\x00\x00\x00" .
    substr(gzcompress($data, 2), 0, -4) .
    pack('V', crc32($data)) .
    pack('V', mb_strlen($data, "latin1"));

header("Content-Encoding: gzip\r\n");
header("Content-Length: " . mb_strlen($output, "latin1"));

echo $output;

It does not matter whether you're using FastCGI or mod_php for this script.

Description

The problem happens as soon as you enable the Apache module with a2enmod pagespeed; even if you then turn off PageSpeed using ModPagespeed Off in your vhost or .htaccess, the problem persists.
When calling the script the first time, everything is fine and the response contains the proper headers. The problem happens as soon as you call the script a second time: the contents then seem to be served from the cache (strange, since ModPagespeed Off), because the request isn't passed to PHP anymore. The 2nd request doesn't contain Content-Encoding: gzip anymore; all other headers persist, but since the encoding header is missing, the client doesn't decode the payload.

First request:
http://localhost/test.php

HTTP/1.1 200 OK
Date: Wed, 27 Jul 2016 05:07:50 GMT
Server: Apache
Connection: close
Cache-Control: max-age=86400
Pragma: 
Expires: Thu, 28 Jul 2016 05:07:50 GMT
Content-Encoding: gzip
X-Content-Type-Options: nosniff
Content-Length: 55
Content-Type: text/javascript;charset=UTF-8

Second and all further requests:
http://localhost/test.php

Cache-Control:max-age=86400
Connection:Keep-Alive
Content-Length:55
Content-Type:text/javascript;charset=UTF-8
Date:Wed, 27 Jul 2016 05:08:32 GMT
Etag:W/"PSA-B2pE8_dsqN"
Expires:Thu, 28 Jul 2016 05:07:50 GMT
Keep-Alive:timeout=5, max=100
Pragma:
Server:Apache
Vary:Accept-Encoding
X-Content-Type-Options:nosniff
X-Content-Type-Options:nosniff
X-Original-Content-Length:55

Let me know if I can do anything further for you.

@brusch

brusch commented Jul 27, 2016

@crowell sorry, forgot to mention you :) #1362 (comment)

@Cruiser13

This is a major issue for a lot of other systems that gzip files, the e-learning system Moodle for example.

@jmarantz
Contributor Author

Bernhard: thanks for the repro! I'm not 100% sure this is the same bug, but maybe those are two new bugs:

  • ipro-rewriting enabled even though MPS is off
  • swallowing the content-encoding, or something. It's worth investigating, anyway.

I think the issue we were seeing is that we were sending content-encoding, but the browser could not decode our content-encoding (possibly because it was double-gzipped).

Lennart: I'm not sure I follow; are you just saying that systems other than mod_pagespeed have similar bugs?


@Cruiser13

@jmarantz no, I'm saying this bug causes malfunctions in Moodle and other PHP software as well (besides Pimcore). The latest Moodle 3.X also does not work with pagespeed because the content is double-gzipped.

@brusch

brusch commented Jul 27, 2016

I don't think it's double-gzipped (see my description above); the problem is only a missing header. :)

@crowell
Contributor

crowell commented Jul 27, 2016

I can reproduce it here. Thanks!

Trying to find out why the content-encoding header disappears in this case; hope to have a fix soon.

@jmarantz
Contributor Author

Bernhard -- the point I'm making is that this bug references some users who have found that we were sending content-encoding:gzip with a response that the browser could not decode. Your repro is of a case where we were missing the content-encoding:gzip. It might be the same bug, but I'm not 100% sure.

Lennart: thanks!! I wonder if you could give more details on how to reproduce with Moodle 3.X and Pimcore, as we are not familiar with these packages.


@Cruiser13

Cruiser13 commented Jul 27, 2016

@jmarantz I think Bernhard's test case captures the issue for Moodle, Pimcore, and other CMSes just fine (one of our Drupal installations has the same issues after updating pagespeed). They all output gzipped css/js files and display strange characters if mod_pagespeed is active (and not completely disabled with a2dismod).

@crowell
Contributor

crowell commented Jul 27, 2016

@jmarantz investigating with the debugger, it does seem that it is "double" gzipped. It gets extracted the first time in ValidateCandidate and then the Content-Encoding header is dropped, and a singly-compressed version is served to the user without the header.
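From the outside, that diagnosis should be verifiable with something like this (a sketch, using brusch's test script above):

    curl -s http://localhost/test.php -o body.bin
    file body.bin                 # gzip data, despite no Content-Encoding header
    gunzip -c < body.bin | head   # a single decode recovers the payload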

@jmarantz
Contributor Author

Thanks everyone for the repro. Hopefully we can get this resolved quickly and put out a fix with our next patch.

@Enalmada

Enalmada commented Jul 27, 2016

I am testing out mod_pagespeed (1.11.33.2-0) behind an AWS ELB with a fairly default core-filters test server, and occasionally getting "ERR_CONTENT_DECODING_FAILED" on one of my css files near startup. I see "double" gzipped in the above comments, but for me, when the problem happens, the css file has the header "Content-Encoding: gzip" while the Content-Length header is the full file size (134kb), not the zipped size, which makes me think there is no zipping going on. This file also says "max-age=300,private" when all the other css files are marked public. On the next refresh the file works in the browser: it has the compressed size and is marked public. I originally got this using the combine_css filter, and when I disable combine_css, I consistently get it only on my one big file. I have no other interesting modules that I know of running. I noticed this problem when I was using CloudFront in front of the static file, but KeyCDN did the same thing. The file is served from my application server (Play Framework), not using ModPagespeedLoadFromFile.

I am new to pagespeed and just guessing: because this only happens on my one big css file, if it misses the flush window, is it being sent out incorrectly marked as gzip when it really isn't? Is this the same issue?

@jmarantz
Contributor Author

Enalmada: I think this is indeed the bug, and the good news is that I think we are getting more confident in a strategy to fix it. Sorry your initial experience with PageSpeed ran into this trouble. Out of curiosity, does setting
ModPagespeedHttpCacheCompressionLevel 0
work around the problem for you?

@Enalmada

I have not seen the error since I last enabled the setting, thanks! Sorry for not trying that right away.

@urifoox

urifoox commented Jul 26, 2017

I believe we are running into this issue as well on our CSS files, intermittently. We are running 1.11.33.4-beta.

curl: (18) transfer closed with 313 bytes remaining to read

It looks like the server is sending an invalid Content-Length header, and the file transfer was shorter or larger than expected. This happens when the server first reports an expected transfer size and then delivers data that doesn't match the previously given size.
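One way to confirm a Content-Length mismatch from the client side (just a suggestion; $URL stands for the affected resource):

    curl -sI -H 'Accept-Encoding: gzip' "$URL" | grep -i '^content-length'
    curl -s  -H 'Accept-Encoding: gzip' "$URL" | wc -c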

nginx version: nginx/1.10.3
built by gcc 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.1)
built with OpenSSL 1.0.2j 26 Sep 2016
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --with-mail --with-mail_ssl_module --with-file-aio --with-http_v2_module --with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro' --add-module=/root/ngx_pagespeed-release-1.11.33.4-beta --with-ipv6

We have no easy way to reproduce this but I just happened to find this thread. Was there ever a fix for it? For now we have added

    pagespeed HttpCacheCompressionLevel 0;

and will see if it happens again.

@jmarantz
Contributor Author

jmarantz commented Jul 26, 2017 via email

@urifoox

urifoox commented Jul 26, 2017

Thanks for the quick response - is what I am describing the bug in this thread?

Happy to schedule a time to upgrade our infrastructure, but to be honest I did not see anything significant in the changelog that compelled us to do so. Is this bug addressed in any way? https://modpagespeed.com/doc/release_notes is very light on bug fixes. Should I look elsewhere for a more comprehensive list of fixes?

@jmarantz
Contributor Author

jmarantz commented Jul 26, 2017 via email

@urifoox

urifoox commented Jul 26, 2017

Thanks - will go ahead and schedule a rollout of 1.12 soon. For now, I'm curious: how much success has pagespeed HttpCacheCompressionLevel 0; had for people? We just applied it to a server to test and see if the issue goes away. Also, are there any adverse effects we should be aware of? The documentation is light on this feature, and nginx is already doing gzip for us, so what are we really losing by changing it from 9 to 0?

@jmarantz
Contributor Author

jmarantz commented Jul 26, 2017 via email

@urifoox

urifoox commented Jul 26, 2017

Understood.

If you are interested in working together to diagnose the issue I am certainly happy to help look into it. Let me know how I can help. Otherwise, thanks for the quick responses. I will update you in a week or so after running it with the option set to '0' to see if we have any other reports of the issue.

@Enalmada

I was curious whether this was fixed, so I just changed it from 0 to 9 using the latest version (stable-1.12.34.2-0), and users immediately started reporting the same original problem (messed-up-looking pages due to css files not loading). I'm also available to do anything I can to help diagnose.

@urifoox

urifoox commented Jul 31, 2017 via email

@oschaaf
Member

oschaaf commented Nov 6, 2017

I found a diff file that resulted from some research into this quite some time ago, from which I got distracted. I dumped the state of the (small) change here so I won't forget it again: bbc9337

The diff is potentially in a space related to this issue. It may be worth having a look at it again, to see if we need such a change later on, and if so, whether anyone can confirm it being related to this or not.

@oschaaf
Member

oschaaf commented Nov 13, 2017 via email

@foertel

foertel commented Nov 13, 2017

Sorry Otto, I deleted my post in the meantime, because I got the values mixed up while testing. My bad.

  • Could you test "gzip off;" combined with HttpCacheCompressionLevel 0 ?

This works.

  • Could you test "gzip off;" combined with HttpCacheCompressionLevel 9?

This also works.

In both cases the SVG comes without the gzip header.
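For reference, the two variants tested, written as nginx config (directive syntax as used earlier in this thread):

    gzip off;                                # nginx's own response compression disabled
    pagespeed HttpCacheCompressionLevel 0;   # variant 1: uncompressed pagespeed cache
    # pagespeed HttpCacheCompressionLevel 9; # variant 2: fully compressed cache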

@jmarantz
Contributor Author

jmarantz commented Nov 13, 2017 via email

@foertel

foertel commented Nov 17, 2017

Nope, in this environment there is only the nginx in a Docker container, exposing its port directly via the host.

@rsangion

If it helps, this problem was only solved on my site when I removed the gzip from my Yoo Themes theme. Even with the cache compression level set to 0 I was still getting css files with weird characters.
