Clarify what is being faster #2
Good point. What could I do to make this clearer? Essentially I run:

```js
var start = performance.now();
// code goes directly here
var end = performance.now();
report(end - start); // this is a global function defined in index.html
```

Scripts are loaded 251 times (using random query string parameters to ensure the browser re-parses), then the median is taken. Arguably this should be more than 251 to reduce variance, but 251 seemed like a good balance between high variance and waiting a long time. How would you recommend I amend the README? I'm happy to accept a pull request. :)
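A minimal sketch of what such a harness might look like, assuming the description above (the helper names `loadAndMeasure` and `median` are hypothetical, not the project's actual code):

```js
// Browser-only: load the script with a random query string so the
// browser re-parses it instead of reusing a cached parse.
function loadAndMeasure(src, onDone) {
  var start = performance.now();
  var script = document.createElement('script');
  script.src = src + '?nocache=' + Math.random();
  script.onload = function () {
    onDone(performance.now() - start);
  };
  document.head.appendChild(script);
}

// Median of the collected timings; with 251 (odd) samples the middle
// element of the sorted array is exact, no averaging needed.
function median(times) {
  var sorted = times.slice().sort(function (a, b) { return a - b; });
  return sorted[Math.floor(sorted.length / 2)];
}
```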
By "code goes directly here", what does the code refer to? Please give an example with a hypothetical micro-library.
```js
var start = performance.now();
window.MyModule = {hello: 'world'};
var end = performance.now();
report(end - start);
```
So is it that it only boosts initial execution? Execution during runtime (e.g. async tasks and event-invoked code) is not touched by the optimization?
This optimizes parsing, not execution. Parsing is basically done when a script is added to a page. So in general, yes, this boosts only startup, but isn't that a good thing already?
Yes, parsing is the major thing that's optimized here. My readme is a bit loosey-goosey about definitions; I could be clearer. |
> Optimize a JavaScript file for faster parsing/execution.
Indeed, it would be good to clarify the real-world benefit, as at first glance it just looks like a micro-optimization that doesn't bring any noticeable gain. I take it this boosts purely parse time and doesn't bring any improvement to execution time. Is that right? If so, can the documentation provide some real-world examples (using popular libraries) of why it's useful? E.g. let's assume that parse time is 10ms and execution time is 100ms; then, if I read the docs correctly, using this module may (in the best scenario) speed up my script from 110ms to 105ms.
@medikoo If your parse time is 10ms on mobile, then obviously you don't need this optimization. It's opt-in, after all. But the thing is, the benchmarks shown in the readme were done on a good MacBook, and even there the parse-time numbers are already often 50ms. I can clearly see that being 500ms on mobile (requires real benchmarks, of course). Anyway, every ms matters, and as the readme says, you shouldn't blindly apply this opt to your project. Test and measure first. If it gives a boost, then it makes sense to use it. P.S. I do agree with the need for clear docs and benchmarks on mobile (otherwise, indeed, you'll have a hard time convincing people).
Same here! I would suggest something like "Optimize JavaScript to be parsed faster". Besides that, I also wonder whether it affects runtime performance at all (i.e. does it change JIT behavior)? I would not apply such a pattern until I'm certain there are no hidden costs once you actually use the functions. It feels like it just postpones some engine steps that would otherwise be applied while parsing (for better JIT'ing?). Overall I agree that doing as much as possible lazily is a good idea, as long as there are no heavy costs involved later. So this seems use-case specific, and I guess one might see better performance if, e.g., only one jQuery function is used. But testing this would certainly need detailed benchmarks, and interpreting the numbers could be quite difficult.
@mgreter Kyle used the word "JIT" wrongly in this situation. JIT isn't related to lazy parsing; JIT is Just-In-Time compilation to machine code. E.g. for C++ you do that at build time, but for JS, browsers do it when they load the source code, i.e. just in time.
@NekR I actually wonder if this pattern influences how V8 can do JIT optimizations? |
I'm fine to change the phrasing to emphasize "parsing." Execution does occur when the script is loaded, but using that phrase makes it sound like every API call is faster, which is not true. |
@tchock @NekR I wrote a blog post some time ago describing this lazy parsing concept in more detail: Lazy Parsing in JavaScript Engines. Note: "pre-parse" is a V8-specific term; in JavaScriptCore it's simply called the syntax checker.
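If the optimization under discussion is the common trick of wrapping immediately-invoked functions in parentheses (an assumption; the thread doesn't spell out the exact transform), a before/after pair looks like this:

```js
// Lazily pre-parsed by many engines, then fully re-parsed as soon as
// it is invoked at the end of the line:
var result1 = function () { return 'hello'; }();

// The wrapping parens are a heuristic hint to the engine that the
// function runs immediately, so it is fully parsed in a single pass:
var result2 = (function () { return 'hello'; })();
```

Both lines are behaviorally identical; only the parse path differs.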
@nolanlawson It would be good to report the standard deviation of those measurements. It's probably easier if you just use Benchmark.js. See also this excellent article from @mathias and @jdalton: Bulletproof JavaScript benchmarks.
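For concreteness, a standard deviation could be reported alongside the median with a small helper like this (hypothetical, not part of the project; this is the population standard deviation):

```js
// Spread of the timing samples around their mean.
function stddev(times) {
  var mean = times.reduce(function (a, b) { return a + b; }, 0) / times.length;
  var variance = times.reduce(function (sum, t) {
    return sum + (t - mean) * (t - mean);
  }, 0) / times.length;
  return Math.sqrt(variance);
}
```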
Thanks for the links; I'll take a look. The benchmarking here could definitely be more rigorous. I didn't use Benchmark.js because I didn't see an easy way to hook it in and do the full re-evaluation (e.g. note how I had to append random query params to ensure the code wasn't pre-cached/pre-JITed).
The README mentions several phrases, e.g. "for faster execution", "speed boost", "wrapped in performance.now() measurements". I think it would be helpful for the reader (without needing to look at the benchmark code) to be very specific about exactly which timing metric is being measured and compared.