Enable gzip on sprockets 3.0 #26

Closed
allanbatista opened this Issue Apr 14, 2015 · 39 comments

@allanbatista

Hi, how can I enable generating GZIP files in Sprockets 3.0?

@josh


josh Apr 14, 2015

Member

It's noted in the changelog that it was removed in 3.0.

Outputting .gz variants by default was disabled since most web servers don't actually use them. Modules like Apache's mod_gzip handle it on the fly with Transfer-Encoding, so you don't need to worry about it.

You'll have to implement this yourself if you want it.

Meanwhile, read this fun thread on why you probably don't want to deal with this: https://bz.apache.org/bugzilla/show_bug.cgi?id=39727
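For context, the on-the-fly compression described here is usually just a couple of server config lines; a minimal nginx sketch (directive values are illustrative, not from this thread):

```nginx
# Compress responses on the fly; no precompiled .gz files needed.
gzip            on;
gzip_comp_level 5;                # moderate CPU cost per request
gzip_types      text/css application/javascript image/svg+xml;
```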


@shawndeprey


shawndeprey May 14, 2015

@josh Oh no, this functionality hasn't been moved to another gem or anything? We used the .gz assets with our CloudFront CDN. Without these files, what is the solution for serving precompiled gzip assets from an asset server instead of your application? Has this functionality been moved to a new solution?


@fxn


fxn May 29, 2015

Member

For the archives: there was some discussion here sstephenson@d388ef7.


@chrisnicola


chrisnicola Jun 3, 2015

Wow, this seems like a terrible change. Right now I'm desperately trying to figure out how to get compressed assets working again with Heroku + a CloudFront CDN. Any suggestions?


@josh


josh Jun 3, 2015

Member

@chrisnicola are you using Custom Origin or S3?


@shawndeprey


shawndeprey Jun 3, 2015

@chrisnicola What I ended up doing is downgrading sprockets until a solution to this problem is outlined. I am currently using the following with great success:

gem 'sprockets', '~> 2.12.3'

I should note that I am using Rails 4.2.1. I am not sure what issues downgrading sprockets will create, but I have not run into any yet.


@fxn


fxn Jun 3, 2015

Member

@chrisnicola Near the bottom of the discussion linked above there's a link to a gist with a Capistrano task that compresses.


@elia


elia Jun 3, 2015

Contributor

We're in the same boat using S3 & CloudFront. We ended up using some custom rake tasks like those linked, but I think it would be much better to have an official replacement gem, like those usually provided for stuff removed from Rails.

This is particularly necessary as sprockets is seldom a direct dependency, and it's very likely people will find it upgraded without knowing. At least until Rails 5, sprockets-rails should depend on the replacement.

Just my opinion of course, but I thought it was worth sharing.


@chrisnicola


chrisnicola Jun 3, 2015

I'm using Heroku + Rails + CloudFront. It's a pretty standard configuration outlined in the Heroku docs, so I'm not sure a Capistrano task is the right direction for me here. I could make a custom buildpack, perhaps.

For the time being I've already done as @shawndeprey suggested and downgraded, which is working as a temporary solution.

There are still other issues, however. For example, SVG files are not compressed at all, something I just recently noticed and Google PageSpeed Insights is yelling at me about. What's worse, pre-compressing doesn't work either because, of course, the digests won't match. What a mess.

I'm talking with Heroku about the best way forward, and I'll update this when I have a better solution. Switching from Puma to Passenger might be one option.


@fxn


fxn Jun 3, 2015

Member

I really think Sprockets should do this. Why minify, concatenate, hash, ..., everything but not compress? It's a standard optimization just like the other ones, and that's the kind of thing Sprockets is about.

The fact that some Apache users have borked configurations out there is not something Sprockets should care about; as stated in the linked discussion, there are valid and common use cases. At most, the docs could warn. The Rails guides simply dropped Apache configs.


@josh


josh Jun 3, 2015

Member

It's a standard optimization just like the other ones

It only applies to CloudFront, because they don't support on-the-fly compression. Standard Apache and nginx handle transfer encoding on the fly, as do the other big CDNs like Akamai and Fastly. And Heroku itself doesn't handle encodings at all, since it's neither Apache nor nginx.

$ gzip public/assets/*.{css,js}
task(:compress) { sh "gzip public/assets/*.{css,js}" }
Rake::Task["assets:precompile"].enhance [:compress]

If you're publishing to an S3 bucket, you'll still need to make sure a few other header attributes are set on the uploaded content.

http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html
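For illustration only, here's a hypothetical helper (the name and defaults are mine, not from the AWS guide) sketching the kind of metadata each uploaded .gz object needs so CloudFront serves it correctly:

```ruby
# Hypothetical helper: metadata for a precompressed asset going to S3.
# Content-Encoding tells clients the body is gzipped; Content-Type must be
# the type of the *underlying* file, not application/gzip.
CONTENT_TYPES = {
  '.css' => 'text/css',
  '.js'  => 'application/javascript',
  '.svg' => 'image/svg+xml'
}.freeze

def cloudfront_headers_for(gz_path)
  ext = File.extname(gz_path.sub(/\.gz\z/, ''))
  {
    'Content-Type'     => CONTENT_TYPES.fetch(ext, 'application/octet-stream'),
    'Content-Encoding' => 'gzip',
    'Cache-Control'    => 'public, max-age=31536000' # safe for fingerprinted assets
  }
end
```

However you upload (aws-sdk, s3cmd, etc.), the key point is that these headers are set per object, since S3 won't infer them.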


@josh


josh Jun 3, 2015

Member

I don't mind writing up a "Deploying Sprockets assets to Cloudfront" guide on the wiki.

Having a simple way to augment the compile task programmatically is definitely better than a magic config.assets.compress = true flag.


@fxn


fxn Jun 4, 2015

Member

That task won't generally work well, because it deletes the original files. You want the original files to be able to serve assets without compression. I link my gist here for reference.


@fxn


fxn Jun 4, 2015

Member

It only applies to CloudFront because they don't support on-the-fly compression.

To CloudFront, and to the not-so-small number of nginx users who enable gzip_static precisely to avoid on-the-fly compression over and over again at a low ratio, as explained here.
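For reference, the nginx side of that setup is small (a sketch; gzip_static requires ngx_http_gzip_static_module to be compiled in):

```nginx
location /assets/ {
  gzip_static on;   # serve foo.css.gz if present and the client accepts gzip
  expires     max;  # fingerprinted assets can be cached indefinitely
}
```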


@chrisnicola


chrisnicola Jun 4, 2015

@josh, while I agree with @fxn, I'm curious: can't we just use gzip -k public/assets/*.{css,js,html,svg} to solve the problem of it deleting the originals? I'm willing to do that as a stopgap for now.

Again I don't see what the big deal with supporting compression is in Sprockets. I'm happy for it to be optional, but to not have the option at all makes no sense to me.


@chrisnicola


chrisnicola Jun 4, 2015

This ended up being my solution. At least it seems to be working locally; I'll test it out with a Heroku deployment next.

namespace :assets do
  desc "Gzip compress the assets in the /public/assets folder"
  task :gzip do
    sh "find public/assets/ \\( -name '*.js' -o -name '*.css' -o -name '*.html' -o -name '*.svg' \\) -exec test ! -e {}.gz \\; -print0 | xargs -0 gzip -kq9"
  end

  Rake::Task['assets:precompile'].enhance do
    Rake::Task['assets:gzip'].invoke
  end
end

The -r option doesn't work on BSD/OS X systems, but it doesn't seem to be a problem without it.


@chrisnicola


chrisnicola Jun 4, 2015

Just a brief update: this definitely seems to have issues on Heroku. Extending the task seems to cause it to load/parse test files, and test_helper requires some things that will not run on Heroku (missing gems). I'm not sure why this would cause this behaviour.

remote:        find public/assets/ \( -name '*.js' -o -name '*.css' -o -name '*.html' -o -name '*.svg' \) -exec test ! -e {}.gz \; -print0 | xargs -0 gzip -kq9
remote:        [DEPRECATION] requiring "RMagick" is deprecated. Use "rmagick" instead
remote:        rake aborted!
remote:        LoadError: cannot load such file -- pry
remote:        /tmp/build_9706329bc240321df3d3e3eb472ab064/vendor/bundle/ruby/2.2.0/gems/activesupport-4.2.1/lib/active_support/dependencies.rb:274:in `require'
remote:        /tmp/build_9706329bc240321df3d3e3eb472ab064/vendor/bundle/ruby/2.2.0/gems/activesupport-4.2.1/lib/active_support/dependencies.rb:274:in `block in require'
remote:        /tmp/build_9706329bc240321df3d3e3eb472ab064/vendor/bundle/ruby/2.2.0/gems/activesupport-4.2.1/lib/active_support/dependencies.rb:240:in `load_dependency'
remote:        /tmp/build_9706329bc240321df3d3e3eb472ab064/vendor/bundle/ruby/2.2.0/gems/activesupport-4.2.1/lib/active_support/dependencies.rb:274:in `require'
remote:        /tmp/build_9706329bc240321df3d3e3eb472ab064/test/test_helper.rb:7:in `<top (required)>'
remote:        /tmp/build_9706329bc240321df3d3e3eb472ab064/vendor/bundle/ruby/2.2.0/gems/activesupport-4.2.1/lib/active_support/dependencies.rb:274:in `require'
remote:        /tmp/build_9706329bc240321df3d3e3eb472ab064/vendor/bundle/ruby/2.2.0/gems/activesupport-4.2.1/lib/active_support/dependencies.rb:274:in `block in require'
remote:        /tmp/build_9706329bc240321df3d3e3eb472ab064/vendor/bundle/ruby/2.2.0/gems/activesupport-4.2.1/lib/active_support/dependencies.rb:240:in `load_dependency'
remote:        /tmp/build_9706329bc240321df3d3e3eb472ab064/vendor/bundle/ruby/2.2.0/gems/activesupport-4.2.1/lib/active_support/dependencies.rb:274:in `require'
remote:        /tmp/build_9706329bc240321df3d3e3eb472ab064/test/extras/kyc_pdf_test.rb:1:in `<top (required)>'
remote:        /tmp/build_9706329bc240321df3d3e3eb472ab064/vendor/bundle/ruby/2.2.0/gems/activesupport-4.2.1/lib/active_support/dependencies.rb:274:in `require'
remote:        /tmp/build_9706329bc240321df3d3e3eb472ab064/vendor/bundle/ruby/2.2.0/gems/activesupport-4.2.1/lib/active_support/dependencies.rb:274:in `block in require'
remote:        /tmp/build_9706329bc240321df3d3e3eb472ab064/vendor/bundle/ruby/2.2.0/gems/activesupport-4.2.1/lib/active_support/dependencies.rb:240:in `load_dependency'
remote:        /tmp/build_9706329bc240321df3d3e3eb472ab064/vendor/bundle/ruby/2.2.0/gems/activesupport-4.2.1/lib/active_support/dependencies.rb:274:in `require'
remote:        /tmp/build_9706329bc240321df3d3e3eb472ab064/vendor/bundle/ruby/2.2.0/gems/railties-4.2.1/lib/rails/test_unit/sub_test_task.rb:114:in `block (3 levels) in define'
remote:        /tmp/build_9706329bc240321df3d3e3eb472ab064/vendor/bundle/ruby/2.2.0/gems/railties-4.2.1/lib/rails/test_unit/sub_test_task.rb:114:in `each'
remote:        /tmp/build_9706329bc240321df3d3e3eb472ab064/vendor/bundle/ruby/2.2.0/gems/railties-4.2.1/lib/rails/test_unit/sub_test_task.rb:114:in `block (2 levels) in define'
remote:        /tmp/build_9706329bc240321df3d3e3eb472ab064/vendor/bundle/ruby/2.2.0/gems/railties-4.2.1/lib/rails/test_unit/sub_test_task.rb:113:in `each'
remote:        /tmp/build_9706329bc240321df3d3e3eb472ab064/vendor/bundle/ruby/2.2.0/gems/railties-4.2.1/lib/rails/test_unit/sub_test_task.rb:113:in `block in define'
remote:        /tmp/build_9706329bc240321df3d3e3eb472ab064/vendor/bundle/ruby/2.2.0/gems/railties-4.2.1/lib/rails/test_unit/sub_test_task.rb:20:in `invoke_rake_task'
remote:        /tmp/build_9706329bc240321df3d3e3eb472ab064/vendor/bundle/ruby/2.2.0/gems/railties-4.2.1/lib/rails/test_unit/testing.rake:8:in `block in <top (required)>'
remote:        Tasks: TOP => test:run => test:extras
remote:        (See full trace by running task with --trace)

@fxn


fxn Jun 4, 2015

Member

@chrisnicola It's convenient that both variants have the same mtime, so I run that one after normalize_assets (there's a comment about it in my gist). I am trusting the gzip_static docs here; I can't say exactly what the gotcha is otherwise, though I suspect it's related to consistent headers/cache expiration, etc.
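A minimal sketch of the mtime-matching idea with Ruby's stdlib (the helper name is mine, not from the gist):

```ruby
require 'zlib'

# Sketch: compress `path` to `path.gz`, keeping the original and giving
# the .gz variant the same mtime, so on-disk variants stay in sync.
def gzip_keeping_mtime(path)
  Zlib::GzipWriter.open("#{path}.gz") do |gz|
    gz.mtime = File.mtime(path)          # mtime recorded in the gzip header
    gz.write File.binread(path)
  end
  # match the filesystem mtime of the .gz file to the original too
  File.utime(File.atime(path), File.mtime(path), "#{path}.gz")
end
```

This keeps the uncompressed file around, which also addresses the earlier objection about tasks that delete the originals.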


@yuri-zubov


yuri-zubov Jun 11, 2015

For the best compression you must install 7z on your server:

task :compress_assets_7z do
  on roles(:app) do
    assets_path = release_path.join('public', fetch(:assets_prefix))
    execute "find -L #{assets_path} \\( -name '*.js' -o -name '*.css' -o -name '*.ico' \\) -exec bash -c '[ ! -f {}.gz ] && 7z a -tgzip -mx=9 {}.gz {}' \\;"
  end
end

after 'deploy:normalize_assets', 'compress_assets_7z'

7z has an optimized gzip implementation.


@eliotsykes


eliotsykes Jun 14, 2015

If sprockets does bring back one-time compression, consider using the zopfli gem: it provides higher compression than gzip (allegedly a 5% saving) and is suited to one-time compression as sprockets used to do (not on-the-fly compression): https://github.com/miyucy/zopfli

From https://en.wikipedia.org/wiki/Zopfli#Properties_and_use_case

Zopfli can output either raw Deflate data stream or encapsulated into gzip or zlib formats. It can be configured to do more or less iterations than the default 15 to trade in processing time for compression efficiency.

Using the default setting, it achieves typically about 5 % better compression than zlib but takes around 80 times longer.[5] Decompression speed is practically unaffected.

Therefore, it is much less suited for on-the-fly compression and pays off if data gets compressed once and served a sufficiently large number of times.[6] This is typically true for static web content that is served with usually Deflate-based HTTP compression or has a Deflate-based file format like PNG or WOFF font files. (Most web clients support only Deflate or gzip, and not more advanced formats like Brotli or xz.) Another field are software updates or downloads with software package files that have a zip-based format like Android application packages (APK) or Java Archives (JAR), especially over mobile connections.
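The trade-off described above can be sketched with Ruby's stdlib alone: zlib's own levels already trade one-time CPU for payload size, and zopfli simply pushes the same curve further:

```ruby
require 'zlib'

# Same input, two effort levels. Higher effort costs more CPU at compile
# time but shrinks every subsequent download -- the regime where zopfli
# (slower still, smaller still) pays off for precompiled assets.
data  = "function cached() { return 42; }\n" * 2_000
fast  = Zlib::Deflate.deflate(data, Zlib::BEST_SPEED)        # level 1
small = Zlib::Deflate.deflate(data, Zlib::BEST_COMPRESSION)  # level 9
puts "level 1: #{fast.bytesize} bytes, level 9: #{small.bytesize} bytes"
```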


@chrisnicola


chrisnicola Jun 19, 2015

My solution going forward has been to extend the assets:precompile task with a Ruby-based gzip task:

# lib/tasks/assets.rake

require 'zlib'

namespace :assets do
  task :gzip => :environment do
    logger = Logger.new(STDOUT)
    asset_root = Rails.root.join('public', 'assets')
    Dir["#{asset_root}/**/*.{js,css,html,svg}"].each do |asset|
      gz_asset_name = "#{asset}.gz"
      next if File.exist? gz_asset_name
      logger.info "Compressing #{gz_asset_name}..."
      Zlib::GzipWriter.open(gz_asset_name, Zlib::BEST_COMPRESSION) do |gz_asset|
        gz_asset.mtime = File.mtime(asset)
        gz_asset.orig_name = asset
        gz_asset.write IO.binread(asset)
      end
    end
  end
end

Rake::Task['assets:precompile'].enhance { Rake::Task['assets:gzip'].invoke }

I hope this is helpful for others. I may release it as a gem with some ability to configure what and how files are compressed if people think that would be useful.


@gingerlime


gingerlime Jun 29, 2015

I'm also confused about why this was removed, and totally agree with this comment:

I really think Sprockets should do this. Why minify, concatenate, hash, ..., everything but not compress? It's a standard optimization just like the other ones, and that's the kind of thing Sprockets is about.

The fact that some Apache users have borked configurations out there is not something Sprockets should care about; as stated in the linked discussion, there are valid and common use cases. At most, the docs could warn. The Rails guides simply dropped Apache configs.

I couldn't make much sense of the link that was given as the background/justification for this. It feels like this 9-year-old Apache bug suddenly became important enough to justify backwards-breaking changes to sprockets. Not only that, but if I understand it right, it's not even important enough for Apache to fix...

btw - the documentation still says that the asset pipeline gzips files automatically.

Apologies if I misunderstood something. I did try to read all related comments and make sense of it, but other than workarounds - I still can't quite understand why this change was made in the first place.


@ksevelyar


+1

@MichaelSp


MichaelSp Jul 7, 2015

@gingerlime FULL ACK!

Please bring gzip-support back to sprockets (at least optional)!


@donmb1


donmb1 commented Jul 14, 2015

+1

@rails rails locked and limited conversation to collaborators Jul 14, 2015

@fxn


fxn Jul 14, 2015

Member

This is going to come back with a way to opt-in.


@chancancode


chancancode Jul 14, 2015

Member

Hi Internet friends!

There were some concerns about the issue being locked on the Twitterz, so I just wanted to jump in and say a few things:

First and foremost, thanks for bringing up the feature and your concrete use case!

We receive a large number of issues/pull requests/comments etc. on Rails; over time, this has necessitated some "unconventional" policies whose reasons might not be immediately obvious. For example, we don't accept "cosmetic" pull requests, because over time it has been proven that they tend to generate lots of noise and require a disproportionate amount of time/attention that could be better spent on other things, and we think that having that policy is better off for everyone in the community overall, even though it sometimes means that we have to turn down new contributors' hard work.

We also receive a large number of +1 comments (in various forms), which can get very distracting for everyone and add very little value (I wish there were a less intrusive way to express interest on GitHub). Since GitHub added the locking feature, we have been experimenting with using it in our workflow to improve the signal-vs-noise ratio for everyone. It is a relatively new thing for us, and like I said it's still largely an experiment, so we don't have very clear rules about when to use it. In this particular case, I agree that we tripped the sensor a bit too early, and I apologize for that.

It's also worth mentioning that the use of the locking feature usually has nothing to do with the merit of the original discussion/request. The "flavor" of locking we ended up with is usually more like "seems like we have seen enough arguments from both sides to make a decision (whether for or against the original thread) and get some work done, and the thread is turning into a +1 fest while people are waiting, so let's lock it until we've had a chance to discuss more or have other news to report on this front." Again, this was not explained, and I agree that the "Locked!!!" label could communicate something very different, and I would like to apologize for that as well.

Going forward, we are doing two things:

  1. we will raise the bar for tripping the locking sensor
  2. we will accompany the locking with a (brief) explanation

Thanks again for your time and contribution!

P.S. As you probably saw from @fxn's comment above, it has already been discussed elsewhere (sorry I couldn't locate the reference) that this feature will be coming back (coincidentally, that discussion took place prior to the locking).


@chancancode chancancode reopened this Jul 14, 2015

@rails rails unlocked this conversation Jul 14, 2015

@fxn

@fxn

fxn Jul 14, 2015

Member

Yeah, I wrote gzip support originally in Sprockets.

As expressed in the linked discussions and the commit reverting this feature, I personally think gzipping definitely has valid use cases and is a common optimization for assets, which is the kind of thing Sprockets does. It is particularly useful for NGINX users, who can trivially enable gzip_static. The removal was motivated by some Apache gotchas that do not invalidate the rest of the valid use cases.

So now that maintenance has shifted to the Rails team, it's going to come back with a way to opt-in (this was discussed in Campfire).

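To illustrate the NGINX side fxn mentions, here is a minimal sketch of enabling gzip_static via ngx_http_gzip_static_module. The location path and cache headers are illustrative, not from this thread:

```nginx
# If /assets/application-abc123.js.gz exists and the client sends
# Accept-Encoding: gzip, NGINX serves the precompiled file directly,
# with no runtime compression work.
location /assets/ {
  gzip_static on;
  expires     max;
  add_header  Cache-Control public;
}
```

This is the scenario where precompiled .gz assets shine: the slow, high-compression work happens once at deploy time, and the web server does nothing per request.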

@eliotsykes

@eliotsykes

eliotsykes Jul 14, 2015

This is fantastic news that this is coming back; thank you to all the developers who are taking on the maintenance of this.

@fxn is using the Zopfli higher compression gzip algo potentially part of this work?


@fxn

@fxn

fxn Jul 14, 2015

Member

@eliotsykes I was not aware of that algorithm; it definitely seems like something to take into account.


@chrisnicola

@chrisnicola

chrisnicola Jul 14, 2015

Thanks @chancancode that's great news. As mentioned above a workaround was easily managed eventually.

At this point I am probably preaching to the choir, but it goes without saying that the original change came as the kind of surprise that easily slips into production unnoticed (that part is our fault, but in small startups these things are easy to miss when you are expecting something like this to "just work").

Obviously these things happen with the best intentions, and blame isn't really important here. Going forward I do hope the impact of changes like these can be recognized before they are merged. This appears to have been a breaking API change with little notification or documentation. What's more, even if the Apache use case makes sense to support, it could have been handled with a configuration flag, as was suggested.

To give one example that shows the impact scope of this change, this literally breaks the standard Heroku+Rails+Cloudfront use case, which I understand to be pretty common and is basically a recommended setup for Heroku.

I apologize if any of this comes off as me being entitled. I deeply appreciate all the hard work here and recognize genuine desire to improve things. My feedback is genuinely meant to try to help.


@schneems

@schneems

schneems Jul 31, 2015

Member

Heroku itself doesn't handle encodings at all, since it's neither Apache nor Nginx.

I added gzip support for ActionDispatch::Static so it works by default out of the box on Heroku: rails/rails@cfaaacd and rails/rails@8e31fa3

If CDNs will gzip content on the fly do we have any data or benchmarks on the difference between them doing that or them serving pre-gzipped content?

It looks like cloudfront doesn't gzip contents:

CloudFront doesn't compress the files itself. Instead, it relies on receiving compressed files from your origin. The process for serving compressed files depends on whether you're using a custom origin or Amazon S3:

http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html

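The idea behind the ActionDispatch::Static change can be sketched in a few lines of Ruby. This is a hypothetical simplification, not the actual Rails code; pick_asset is an invented helper that prefers a sibling .gz file when the client accepts gzip:

```ruby
# Hypothetical sketch of gzip-aware static file selection (not the real
# ActionDispatch::Static implementation). Given the path of an asset on disk
# and the request's Accept-Encoding header, prefer the precompiled .gz
# sibling when the client accepts gzip.
def pick_asset(path, accept_encoding)
  gz_path = "#{path}.gz"
  if accept_encoding.to_s.include?("gzip") && File.exist?(gz_path)
    # Serve the precompressed file and tell the client it is gzipped.
    [gz_path, { "Content-Encoding" => "gzip" }]
  else
    # Fall back to the plain file with no extra headers.
    [path, {}]
  end
end
```

The real middleware also negotiates via the request object and sets Vary: Accept-Encoding, but the file-selection logic is essentially this.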

@fxn

@fxn

fxn Jul 31, 2015

Member

CloudFront is one use case. The other one is being able to use max compression using a slow algorithm, and at the same time having your webserver do nothing. Hard to beat!


@wesww


wesww commented Aug 4, 2015

+1

@rafaelfranca

@rafaelfranca

rafaelfranca Aug 4, 2015

Member

Locking the issue again. Please see #26 (comment) for the reason.


@rails rails locked and limited conversation to collaborators Aug 4, 2015

@rafaelfranca rafaelfranca added this to the 3.x milestone Aug 13, 2015

@schneems

@schneems

schneems Dec 2, 2015

Member

Linking this to sstephenson#589 which is where gzip was removed in 3.0


@schneems

@schneems

schneems Dec 2, 2015

Member

Doing research on whether or not we want to bring back gzipped assets. This was an argument against:

Meanwhile, read this fun thread why you probably don't want to deal with this. https://bz.apache.org/bugzilla/show_bug.cgi?id=39727

The ETag header is consumed by the browser and then sent in future requests in the If-None-Match header, where the server can use it to determine whether a new response is needed or a 304 Not Modified response is acceptable. More ETag info: https://en.wikipedia.org/wiki/HTTP_ETag. Rails facilitates this behavior with its stale? check.

Sprockets server also does etag checks in server.rb:

def not_modified_response(env, etag)
  [ 304, cache_headers(env, etag), [] ]
end

It seems like the problem isn't that we are generating gzipped files, but rather that we are treating the hexdigest as equivalent to the ETag. Since the hexdigest of a gzipped file is the same as that of the non-gzipped file, we are violating the expectation of a unique identifier for each response.

The thing serving the asset should be the thing setting the ETag. The bug then would be in how Apache/Nginx chooses the ETag to use. For example, we aren't setting it or using it in ActionDispatch::Static, so it shouldn't be an issue there. The Sprockets server did implement ETag support, as shown above. A fix would be for it to use a different ETag than the fingerprint: for example, if application-12345.css is served with an ETag of 12345, we could append gz to generate a new unique ETag like 12345gz when serving the gzipped file.

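The suffix fix described in that last paragraph can be sketched as follows; etag_for and fresh? are hypothetical helpers for illustration, not actual Sprockets code:

```ruby
# Hypothetical sketch of the ETag fix described above: the gzipped variant
# of an asset gets its own ETag by suffixing the fingerprint, so caches can
# distinguish the two encodings of the same logical asset.
def etag_for(fingerprint, gzipped: false)
  gzipped ? "#{fingerprint}gz" : fingerprint
end

# A 304 is only valid when the request's If-None-Match matches the ETag of
# the exact variant we are about to serve.
def fresh?(env, fingerprint, gzipped: false)
  env["HTTP_IF_NONE_MATCH"] == %("#{etag_for(fingerprint, gzipped: gzipped)}")
end
```

With distinct ETags per variant, a cached plain response can no longer be revalidated against the gzipped file, which is exactly the confusion the Apache thread describes.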

@schneems

@schneems

schneems Dec 3, 2015

Member

I've got an implementation in #193 please take a look.

@eliotsykes I'm not against using a different compression algorithm (zopfli). I'm a little wary of using that library though. It has 10k downloads on RubyGems and only 2 people have ever touched it. If it gets abandoned I don't want to be stuck maintaining a compression algorithm.

A different algorithm is worth looking into more, though. Could you write a small script to benchmark speed between zopfli and Zlib::GzipWriter.new(f, Zlib::BEST_COMPRESSION), and show the difference in file size? You can use http://issuetriage-herokuapp-com.global.ssl.fastly.net/assets/application-c4dfdc831b963c426131ae50f1facfbe6de899ad7f860fb7758b0706ab230e7a.js for a reference document.

Since this issue is locked, you can comment on my PR #193 thanks for the suggestion.

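A minimal version of the benchmark schneems asks for, using only the Ruby stdlib (the zopfli side is left out since it requires the third-party gem; the sample data here is synthetic rather than the linked asset):

```ruby
require "zlib"
require "stringio"
require "benchmark"

# Gzip a string in memory at the given compression level.
def gzip(data, level)
  io = StringIO.new
  gz = Zlib::GzipWriter.new(io, level)
  gz.write(data)
  gz.finish # flush the gzip trailer without closing the StringIO
  io.string
end

# Synthetic stand-in for a compiled JS asset; in practice, read your real file.
data = "function hello(name) { return 'Hello, ' + name; }\n" * 20_000

[Zlib::DEFAULT_COMPRESSION, Zlib::BEST_COMPRESSION].each do |level|
  out = nil
  secs = Benchmark.realtime { out = gzip(data, level) }
  puts "level=#{level}: #{data.bytesize} -> #{out.bytesize} bytes in #{secs.round(4)}s"
end
```

Swapping in Zopfli.deflate (from the zopfli gem) for one of the levels would complete the comparison; expect noticeably longer compression times for a few percent smaller output.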

@rafaelfranca

@rafaelfranca

rafaelfranca Dec 8, 2015

Member

This is done already. Closing

