The "speedup" will vary wildly in the real world based on any number of parameters: your actual server response time to generate the page vs. the amount of JS/CSS, etc. Larger response times will dominate the JS and CSS parse times; in practice, many servers are in the 1000ms+ range. As for JS/CSS parse times, the averages for a browser like Chrome are in the low single- to double-digit millisecond range... Based on that, there may be room for PJAX, but the gains will likely diminish quickly beyond this demo app.
In other words, as you said in the readme: test it on your real app. If it works for you - great.
This is a workable shim until we have support for Web Components (and Shadow DOM):
With Web Components, we'll have the best of all worlds: reusable components / templates for cleaner markup, ability to leverage browser cache for each component, ability to transfer just the required JSON payload to be fed into the template, and so on. Effectively, what most people are doing manually with custom template engines, but through native browser support, proper caching, prioritization, and so on.
Thanks for looking into this. The master branch is running the benchmarks in production env now and it shows similar results:
% TIMES=100 rspec
                     user     system      total        real
no turbolinks    1.370000   0.160000   1.770000 ( 15.428230)
yes turbolinks   0.980000   0.060000   1.040000 (  7.714108)
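For reference, output in that shape is what Ruby's Benchmark.bm produces. Here's a self-contained sketch of such a harness; the method names and timings are made-up stand-ins for the real Capybara page visits, not the repo's actual spec:

```ruby
require "benchmark"

TIMES = (ENV["TIMES"] || 5).to_i  # the repo runs with TIMES=100

# Hypothetical stand-ins for the real Capybara visits: a full page load
# re-parses JS/CSS on every request, a Turbolinks visit does not.
def full_page_load
  sleep 0.02
end

def turbolinks_visit
  sleep 0.01
end

Benchmark.bm(15) do |bm|
  bm.report("no turbolinks")  { TIMES.times { full_page_load } }
  bm.report("yes turbolinks") { TIMES.times { turbolinks_visit } }
end
```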
Regarding parse times vs server times, check out the with_lots_of_sleep branch which does a 0.5 sleep delay on each request to simulate server time. There is still about a 20% improvement with Turbolinks. Of course the numbers will vary depending upon how much js/css an app has vs the server time.
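That kind of fixed delay can be simulated with a tiny Rack middleware. This is just a sketch of the idea; the class name and wiring are hypothetical, not necessarily how the with_lots_of_sleep branch does it:

```ruby
# Hypothetical middleware: delays every response by a fixed amount to
# simulate server-side work, like the with_lots_of_sleep branch does.
class SimulatedServerDelay
  def initialize(app, delay: 0.5)
    @app = app
    @delay = delay
  end

  def call(env)
    sleep @delay      # pretend the app spent this long rendering
    @app.call(env)
  end
end

# In a Rails app this would go in config/application.rb:
#   config.middleware.use SimulatedServerDelay, delay: 0.5
```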
I'm wondering if Chrome is more optimized for this than Firefox. Do you know of a good way to run this test in Chrome?
Also thanks for the tip about Web Components. That sounds exciting.
I wouldn't expect any major difference between Chrome and FF here. If anything, you'll probably end up testing the performance of the driver, not the browser...
A better tool for this would be something like Chrome's benchmarking extension: http://www.chromium.org/developers/design-documents/extensions/how-the-extension-system-works/chrome-benchmarking-extension - unfortunately you can't really script it (afaik) to click on links for the turbolinks test.
With that in mind, you can configure Capybara to use chromedriver:
Capybara.register_driver :selenium do |app|
  Capybara::Selenium::Driver.new(app, :browser => :chrome)
end
Also, command-r is kinda irrelevant: most people use web sites by clicking on links, which won't do a conditional GET.
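For context, a conditional GET is what the browser sends on a reload when it already has a cached copy: it replays the cached ETag in an If-None-Match header, and the server can answer 304 Not Modified with no body. A minimal sketch of the server side; the respond helper here is hypothetical, not part of any library:

```ruby
require "digest"

# Hypothetical handler showing the conditional GET dance: when the
# client's If-None-Match matches the current ETag, answer 304 with no
# body; otherwise send the full 200 response.
def respond(body, if_none_match = nil)
  etag = %("#{Digest::MD5.hexdigest(body)}")
  if if_none_match == etag
    [304, { "ETag" => etag }, []]
  else
    [200, { "ETag" => etag }, [body]]
  end
end

# First request: full response.
status, headers, = respond("<html>hello</html>")
# Reload (command-r): the browser replays the ETag, the server skips the body.
status2, = respond("<html>hello</html>", headers["ETag"])
```

A plain link click starts from scratch with no If-None-Match header, so it always takes the 200 path.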
Regardless, at the end of the day, as we all agree: trying it on your app is the best way of detecting if it speeds your app up. :)
Since this isn't a real repo, I'm gonna leave it open in the hopes that future people will check this out.
Can you also post the specifications of the machine this test was run on? I've noticed that when running on my Retina MBP, the difference at 100 iterations is only about 1 second. That's still better, just not as significant as your test run. Obviously this test would also be better run over a broadband connection to account for server latency.
Ran this test on a 2.6 GHz Retina MacBook Air:
                     user     system      total        real
no turbolinks    7.510000   0.630000   8.360000 (103.394840)
yes turbolinks   7.340000   0.600000   7.940000 ( 69.419150)
The results aren't as drastic; this machine appears to process the JS really fast.