
Clarify what is being faster #2

Closed
ariya opened this issue Sep 19, 2016 · 16 comments

Comments

@ariya

ariya commented Sep 19, 2016

The README mentions several phrases, e.g. "for faster execution", "speed boost", "wrapped in performance.now() measurements". I think it would be helpful for the reader (without needing to look at the benchmark code) to be very specific about what timing metric is being measured and compared.

@nolanlawson
Owner

nolanlawson commented Sep 19, 2016

Good point. What could I do to make this clearer?

Essentially I run:

var start = performance.now();
// code goes directly here
var end = performance.now();
report(end - start); // this is a global function defined in index.html

Scripts are loaded 251 times (using random query string parameters to ensure the browser re-parses), then the median is taken. Arguably this should be more than 251 to reduce variance, but 251 seemed like a reasonable trade-off between variance and total run time.
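Roughly, the harness driving those runs looks something like this (a simplified sketch; names like loadOnce are illustrative and the real benchmark code is a bit more involved):

var RUNS = 251;
var samples = [];

// the instrumented script calls this via report(end - start)
window.report = function (duration) {
  samples.push(duration);
  if (samples.length < RUNS) {
    loadOnce();
  } else {
    samples.sort(function (a, b) { return a - b; });
    console.log('median:', samples[(RUNS - 1) / 2], 'ms');
  }
};

function loadOnce() {
  var script = document.createElement('script');
  // random query string so the browser re-fetches and re-parses the script
  script.src = 'benchmark.js?nocache=' + Math.random();
  document.body.appendChild(script);
}

loadOnce();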

How would you recommend I amend the README? I'm happy to accept a pull request. :)

@ariya
Author

ariya commented Sep 19, 2016

By "code goes directly here", what does code refer to? Please give an example with a hypothetical micro-library.

@nolanlawson
Owner

var start = performance.now();
window.MyModule = {hello: 'world'}; // the hypothetical micro-library's code goes here
var end = performance.now();
report(end - start);

@tchock

tchock commented Sep 19, 2016

So does it only boost initial execution? Execution during runtime (e.g. async tasks and event-invoked code) is not touched by the optimization?

@NekR

NekR commented Sep 19, 2016

This optimizes parsing, not execution. Parsing is essentially done when a script is added to a page. So in general, yes, this boosts only startup, but isn't that already a good thing?
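To illustrate (a simplified sketch, not actual output from the tool):

// A plain function expression like this is typically only pre-parsed
// (syntax-checked) at load time; the full parse happens later, the first
// time the function is called:
var add = function (a, b) { return a + b; };

// Wrapping the expression in parentheses is the heuristic hint the tool
// relies on: engines like V8 treat a parenthesized function as likely to be
// invoked immediately and parse it fully up front, avoiding a second parse
// at call time.
var addEager = (function (a, b) { return a + b; });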

@nolanlawson
Owner

Yes, parsing is the major thing that's optimized here. My readme is a bit loosey-goosey about definitions; I could be clearer.

@Makio64

Makio64 commented Sep 20, 2016

Suggestion: "Optimize a JavaScript file for faster parsing."
Also, a benchmark on mobile, where parse time is especially important, would be great.

@medikoo

medikoo commented Sep 20, 2016

Indeed it would be good to clarify the real-world benefit, as at first glance it just looks like a micro-optimization that doesn't bring any noticeable gain.

I take it this boosts purely parse time and doesn't bring any improvement to execution time. Is that right? If so, can the documentation provide some real-world examples (using popular libraries) of why it's useful?

e.g. let's assume that parse time is 10ms and execution time is 100ms; then, if I read the doc correctly, using this module may (in the best scenario) speed up my script from 110ms to 105ms.
I'd say that's not valuable at all, and seeing gain numbers like 57.09% in the doc is very misleading to unwary developers.

@NekR

NekR commented Sep 20, 2016

@medikoo If your parse time is 10ms on mobile, then obviously you don't need this optimization. It's opt-in after all. But the thing is, the benchmarks shown in the readme are done on a good MacBook, where parse times are already often 50ms. I can easily see that becoming 500ms on mobile (real benchmarks required, of course).

Anyway, every ms matters, and as the readme says -- you shouldn't blindly apply this optimization to your project. Test and measure first. If it gives a boost, then it makes sense to use it.

P.S. I do agree, though, about the need for clear docs and benchmarks on mobile (otherwise, indeed, you'll have a hard time convincing people).

@mgreter

mgreter commented Sep 20, 2016

Same here! I would suggest something like "Optimize JavaScript to be parsed faster". Beyond that, I also wonder whether it affects runtime performance at all (i.e. does it change JIT behavior)? I would not apply such a pattern until I'm certain there are no hidden costs once you actually use the functions. It feels like it just postpones some engine steps that would otherwise be applied while parsing (for better JIT'ing?). Overall I agree that doing as much as possible lazily is a good idea, as long as there are no heavy costs involved later. So this seems use-case specific, and I guess one might see better performance if, say, only one jQuery function is used. But testing this would certainly need detailed benchmarks, and interpreting the numbers could be quite difficult.

@NekR

NekR commented Sep 20, 2016

@mgreter Kyle used the word "JIT" incorrectly in this situation. JIT isn't related to lazy parsing; JIT is Just-In-Time compilation to machine code. For C++ you compile to machine code at build time, but for JS, browsers do it when they load the source code, i.e. just in time.

@mgreter

mgreter commented Sep 20, 2016

@NekR I actually wonder whether this pattern influences how V8 can do JIT optimizations.

@nolanlawson
Owner

I'm fine with changing the phrasing to emphasize "parsing." Execution does occur when the script is loaded, but using that phrase makes it sound like every API call is faster, which is not true.

@ariya
Author

ariya commented Sep 20, 2016

@tchock @NekR I wrote a blog post some time ago describing this lazy parsing concept in more detail: Lazy Parsing in JavaScript Engines.

Note: "pre-parse" is V8-specific terminology; in JavaScriptCore it's simply called the syntax checker.

@ariya
Author

ariya commented Sep 20, 2016

@nolanlawson It would be good to report the standard deviation of those measurements. It's probably easier if you just use Benchmark.js. See also this excellent article from @mathias and @jdalton: Bulletproof JavaScript benchmarks.
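For instance, something along these lines over the collected samples would do (a hypothetical sketch, independent of Benchmark.js):

function stats(samples) {
  // median of the sorted samples
  var sorted = samples.slice().sort(function (a, b) { return a - b; });
  var median = sorted[Math.floor(sorted.length / 2)];
  // population standard deviation
  var mean = samples.reduce(function (sum, x) { return sum + x; }, 0) / samples.length;
  var variance = samples.reduce(function (sum, x) {
    return sum + Math.pow(x - mean, 2);
  }, 0) / samples.length;
  return { median: median, stddev: Math.sqrt(variance) };
}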

@nolanlawson
Owner

Thanks for the links; I'll take a look. The benchmarking here could definitely be more rigorous. I didn't use Benchmark.js because I didn't see an easy way to hook it in and do the full re-evaluation (e.g. note how I had to append random query params to ensure the code wasn't pre-cached or pre-JITed).
