benchmark: remove %OptimizeFunctionOnNextCall #9615

Closed
wants to merge 11 commits into
base: master
from

Conversation

@bzoz
Contributor

bzoz commented Nov 15, 2016

Checklist
  • make -j8 test (UNIX), or vcbuild test nosign (Windows) passes
  • commit message follows commit guidelines
Affected core subsystem(s)

benchmark

Description of change

This removes all instances of %OptimizeFunctionOnNextCall from the Node.js benchmarks. As it turns out, most benchmarks benefit from its removal: they perform better. See this table in the gist.

Some of the benchmarks (buffers/buffer-swap.js, crypto/get-cipher.js, net/punycode.js, path/parse-*.js, path/relative-*.js and tls/convertprotocols.js) benefit from a warmup phase. Those are executed twice, with only the second run being measured. For the other benchmarks a warmup does not provide any advantage.
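
For reference, this is roughly the warmup pattern those benchmarks use (a minimal sketch; the benchmark parameters and loop body are illustrative, not the exact code from this PR):

'use strict';
const common = require('../common.js');
const path = require('path');

const bench = common.createBenchmark(main, { n: [1e6] });

function main(conf) {
  const n = +conf.n;

  // Warmup pass: exercise the code so the engine can optimize it on its own,
  // but do not measure this run.
  runParse(n);

  // Measured pass: only this run contributes to the reported rate.
  bench.start();
  runParse(n);
  bench.end(n);
}

function runParse(n) {
  for (let i = 0; i < n; i++)
    path.parse('/foo/bar/baz/asdf/quux.html');
}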

One benchmark that is slower is crypto/get-cipher.js when calling getCiphers only once. The previous version called this function once to trigger optimization and then again to benchmark its performance. Since the result of that function is cached, it did not provide valid data. This is fixed now.
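
To illustrate the caching problem (a sketch, not the benchmark's actual code): since getCiphers caches its result, timing repeated calls after a priming call mostly measures the cache lookup rather than the real work.

'use strict';
const crypto = require('crypto');

// The first call does the real work and caches the cipher list...
crypto.getCiphers();

// ...so a loop that follows only measures returning the cached array.
console.time('getCiphers (cached)');
for (let i = 0; i < 1e5; i++)
  crypto.getCiphers();
console.timeEnd('getCiphers (cached)');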

Fixes: nodejs/node-chakracore#134
cc @nodejs/benchmarking

@evanlucas
Member

evanlucas commented Nov 15, 2016

/cc @mscdex since he's done a lot of performance work too

@mscdex
Contributor

mscdex commented Nov 15, 2016

I think this should only be removed for functions that are potentially inlineable due to their function size.

@bzoz
Contributor

bzoz commented Nov 24, 2016

@mscdex For most of the benchmarks, except the ones mentioned in the PR, removing %OptimizeFunctionOnNextCall improves performance. Shouldn't we at least remove it for those?

Could you also elaborate on those inlineable functions?

@bzoz
Contributor

bzoz commented Nov 29, 2016

Rebased on master, PTAL

@jasnell
Member

jasnell commented Dec 5, 2016

@mscdex ... any further thoughts on this?

@mscdex
Contributor

mscdex commented Dec 5, 2016

As far as inlineability goes, I would just check the current function lengths (there are other factors that determine inlineability, of course). If the function being tested is less than 600 characters, remove the forced optimization; otherwise leave it there.
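
For illustration, a sketch of that kind of check, assuming the 600 refers to V8's Crankshaft-era max_inlined_source_size limit on source size (the helper is hypothetical, and other factors still apply):

// Hypothetical helper: rough check of whether a function's source is small
// enough to fall under Crankshaft's default inlining source-size limit.
function isPotentiallyInlineable(fn, limit) {
  return fn.toString().length < (limit || 600);
}

// Example: decide whether to drop the forced optimization for a function.
const util = require('util');
console.log(isPotentiallyInlineable(util.format));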

@bzoz
Contributor

bzoz commented Dec 9, 2016

Most of the time %OptimizeFunctionOnNextCall is used either for small functions or for functions from the Node.js lib. Is there any benefit to keeping this %Optimize... call?

If we remove it, some of the benchmarks will perform better. Furthermore, this will make the benchmarks engine-agnostic, so they can be used with ChakraCore (see nodejs/node-chakracore#134).

@fhinkel
Member

fhinkel commented Dec 9, 2016

I'm pretty sure that inlineability no longer depends on function length but on bytecode size (at least in the near future).

@fhinkel
Member

fhinkel commented Dec 9, 2016

I'm in favor of making the benchmarks VM-agnostic. I don't see the point in measuring only the optimized runs instead of all the runs.

@trevnorris
Contributor

trevnorris commented Dec 9, 2016

I'm pretty sure that inlineability does not depend on function length any more but bytecode size

As of 5.7.201 (and it is the same on a6976211d1a4d12df815fa9ff5341dcbcf1e8036, the latest master): https://github.com/v8/v8/blob/5.7.201/src/flag-definitions.h#L329-L330

@trevnorris
Contributor

trevnorris commented Dec 9, 2016

I don't see the point in measuring only the optimized runs instead of all the runs.

The idea is to remove variability in the benchmarks so versions can be compared more reliably. Ideally we could run the benchmarks fully optimized and also run them without allowing the code to be optimized at all.
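
For example (the exact flag spelling varies between V8 versions, and the benchmark path is just an illustration), the two extremes could be compared by running a benchmark normally and then with optimization disabled via a V8 flag:

node benchmark/path/parse-posix.js
node --noopt benchmark/path/parse-posix.js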

@bzoz
Contributor

bzoz commented Dec 27, 2016

@trevnorris So, are you in favor of this change? Or maybe you have other suggestions?

@bzoz
Contributor

bzoz commented Jan 5, 2017

Rebased, PTAL

@bzoz
Contributor

bzoz commented Jan 18, 2017

Rebased again, PTAL

@bzoz
Contributor

bzoz commented Feb 24, 2017

@jasnell
Member

jasnell commented Feb 24, 2017

@mscdex ... Side note: I've been wondering if it would be worthwhile to introduce linting for functions that are over the inlinable size. Sure, there are quite a few instances of fns in core over that size, and I'm not suggesting that we should refactor those, just that they should require linting exceptions.

@jasnell
Member

jasnell commented Feb 24, 2017

@trevnorris ... I've had a todo on my list for a while to see if there is some way we can build in checking optimized and non-optimized runs for all benchmarks. I definitely think it would be worthwhile.

Likewise, we should likely make it easier to run the benchmarks in a way that compares use of the Ignition toolchain vs. Crankshaft.

@addaleax
Member

addaleax commented Feb 24, 2017

@jasnell Won’t that go away with Turbofan anyway (v8/v8@0702ea3)?

@jasnell
Member

jasnell commented Feb 24, 2017

Yes, sorry, I wasn't clear... it would be useful to be able to more easily run compare.js with various combinations of command line args... e.g. something like a ./node benchmark/compare.js --old "node" --new "node --ignition" stream ...

(if this already exists and I just missed it, then just ignore me ;-) ...)

@Trott
Member

Trott commented Feb 25, 2017

Refocusing the conversation: I imagine @bzoz wants to know if anyone feels good enough about this to give it an approval or else to describe clearly what changes would need to happen to get your approval. Anyone?

@jasnell

While I'm generally ok with the idea of removing the forced optimization, I'd rather it be done more incrementally than this... with either one benchmark or one related group of benchmarks edited per commit.

@joyeecheung
Member

joyeecheung commented Feb 25, 2017

@jasnell I think the --future flag in the V8 fork does the pipeline switching? @nodejs/v8

@joyeecheung
Member

joyeecheung commented Feb 25, 2017

Also, the flag can now be specified in the benchmark creation code (see the guide); createBenchmark can take a third argument for that:

const configs = {
  n: [1024],
  type: ['fast', 'slow'], 
  size: [16, 128, 1024]
};

const options = {
  flags: ['--future']
};

const bench = common.createBenchmark(main, configs, options);

@jasnell
Member

jasnell commented Feb 25, 2017

@joyeecheung ... yep, what I'm looking for however is an easier way to run a comparison of a single benchmark with or without a particular flag set. In any case, that's a different discussion that is separate from this PR and we shouldn't derail this PR any further :-)

@bzoz
Contributor

bzoz commented Mar 1, 2017

@jasnell I've split the PR into separate commits.

@jasnell

jasnell approved these changes Mar 1, 2017

LGTM so long as @mscdex and @nodejs/benchmarking are good with this.

@mcollina

LGTM

@bzoz
Contributor

bzoz commented Mar 3, 2017

CI failures are unrelated.

Thanks for the reviews. If no one objects I'll land this on Monday.

bzoz added some commits Feb 28, 2017

benchmark: remove forced optimization from buffer
This removes all instances of %OptimizeFunctionOnNextCall from buffer
benchmarks

PR-URL: #9615
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
benchmark: remove forced optimization from crypto
This removes all instances of %OptimizeFunctionOnNextCall from crypto
benchmarks

PR-URL: #9615
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
benchmark: remove forced optimization from es
This removes all instances of %OptimizeFunctionOnNextCall from es
benchmarks

PR-URL: #9615
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
benchmark: remove forced optimization from misc
This removes all instances of %OptimizeFunctionOnNextCall from misc
benchmarks

PR-URL: #9615
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
benchmark: remove forced optimization from path
This removes all instances of %OptimizeFunctionOnNextCall from path
benchmarks

PR-URL: #9615
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
benchmark: remove querystring forced optimization
This removes all instances of %OptimizeFunctionOnNextCall from
querystring benchmarks

PR-URL: #9615
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
benchmark: remove streams forced optimization
This removes all instances of %OptimizeFunctionOnNextCall from streams
benchmarks

PR-URL: #9615
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>

bzoz added a commit that referenced this pull request Mar 6, 2017

benchmark: remove streams forced optimization
This removes all instances of %OptimizeFunctionOnNextCall from streams
benchmarks

PR-URL: #9615
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>

bzoz added a commit that referenced this pull request Mar 6, 2017

benchmark: remove forced optimization from tls
This removes all instances of %OptimizeFunctionOnNextCall from tls
benchmarks

PR-URL: #9615
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>

bzoz added a commit that referenced this pull request Mar 6, 2017

benchmark: remove forced optimization from url
This removes all instances of %OptimizeFunctionOnNextCall from url
benchmarks

PR-URL: #9615
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>

bzoz added a commit that referenced this pull request Mar 6, 2017

benchmark: remove forced optimization from util
This removes all instances of %OptimizeFunctionOnNextCall from util
benchmarks

PR-URL: #9615
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>

bzoz added a commit that referenced this pull request Mar 6, 2017

benchmark: cleanup after forced optimization drop
This removes all instances of %OptimizeFunctionOnNextCall from common.js
and README.md

PR-URL: #9615
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>

@bzoz
Contributor

bzoz commented Mar 6, 2017

Landed in 672d752...75cdc89

@evanlucas
Member

evanlucas commented Mar 7, 2017

This isn't landing cleanly on v7.x-staging. Mind submitting a backport PR?

bzoz added a commit to janeasystems/node that referenced this pull request Mar 8, 2017

benchmark: remove benchmarks forced optimizations
Removes all instances of %OptimizeFunctionOnNextCall from benchmarks

Refs: nodejs#9615
Refs: nodejs#11720

@bzoz
Contributor

bzoz commented Mar 8, 2017

@evanlucas backport here: #11744

@targos referenced this pull request Mar 17, 2017

Closed

vm benchmarks are broken #11895

@lucaslago referenced this pull request Mar 17, 2017

Closed

benchmark: remove v8ForceOptimization calls #11908

italoacasas added a commit to italoacasas/node that referenced this pull request Mar 21, 2017

benchmark: remove benchmarks forced optimizations
Removes all instances of %OptimizeFunctionOnNextCall from benchmarks

Refs: nodejs#9615
Refs: nodejs#11720

@jasnell referenced this pull request Apr 4, 2017

Closed

8.0.0 Release Proposal #12220
