Add benchmarks #48

Merged
merged 15 commits into prettymuchbryce:master on Apr 3, 2017

Conversation

@sbj42
Contributor

sbj42 commented Mar 31, 2017

This patch adds a simple benchmark suite using karma-benchmark. A separate karma configuration file is needed to keep the unit tests separate from the benchmarks. Run `npm run benchmark` to start the karma server on the benchmark suite.
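
As a rough orientation, a karma-benchmark suite for EasyStar looks something like the sketch below (the grid, names, and coordinates are illustrative placeholders, not the actual benchmark code in this patch):

```js
// Sketch only: karma-benchmark exposes the global suite()/benchmark() helpers,
// and EasyStar itself is loaded via the karma configuration.
suite('EasyStar.js', function () {
  benchmark('small field, no diagonals', function () {
    var easystar = new EasyStar.js();
    easystar.setGrid([
      [0, 0, 0, 0],
      [0, 1, 1, 0],
      [0, 1, 1, 0],
      [0, 0, 0, 0]
    ]);
    easystar.setAcceptableTiles([0]);
    easystar.enableSync(); // the path callback fires inside calculate()
    easystar.findPath(0, 0, 3, 3, function (path) {
      if (path === null) throw new Error('no path found');
    });
    easystar.calculate();
  });
});
```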

There are two kinds of benchmarks at the moment: One is maze-like, so the search will be deep but narrow. The other is a mostly open field, so the search will be short but with more branches.

For both cases, the benchmarks are run three different ways: With no diagonals, diagonals but no corner-cutting, and diagonals with corner-cutting.

All of these benchmarks use enableSync. They do not use setAdditionalPointCost, setDirectionalCondition, or avoidAdditionalPoint.
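
For reference, the three variants map roughly onto the following EasyStar calls (a sketch under the assumption that corner-cutting only matters once diagonals are enabled):

```js
var easystar = new EasyStar.js(); // EasyStar assumed to be in scope
easystar.enableSync();            // every benchmark runs synchronously

// 1. No diagonals: simply don't call enableDiagonals().

// 2. Diagonals, but no corner-cutting:
easystar.enableDiagonals();
easystar.disableCornerCutting();

// 3. Diagonals with corner-cutting:
easystar.enableDiagonals();
easystar.enableCornerCutting();
```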

When I ran these benchmarks in Chrome 56.0.2924 (Windows 10 0.0.0), I got the following results (but of course they'll differ from environment to environment):

                        EasyStar.js
- 10x10 maze no diagonals                      3799 ops/sec
- 10x10 maze diagonals but no corner-cutting   2716 ops/sec
- 10x10 maze diagonals and corner-cutting      2523 ops/sec
- 10x10 field no diagonals                     3319 ops/sec
- 10x10 field diagonals but no corner-cutting  6108 ops/sec
- 10x10 field diagonals and corner-cutting     6875 ops/sec

sbj42 and others added some commits Mar 26, 2017

Issue #46: Add ability to cancel an instance
`findPath()` now returns a unique number (even across multiple instances of easystar).  There's a new method `cancelPath()` which can be used to cancel a path calculation.

This can be used to avoid wasting effort, for instance if a game actor dies or changes its destination before the path is computed. (A usage sketch follows this commit list.)
Missed a spot renaming _nextId.
Add a test case confirming that two instances have different IDs, which might have caught this mistake.
Convert instances from array to map
Add instanceQueue to maintain the calculation order.  This change should make cancelling paths O(1) rather than O(n).
Add karma-benchmark, and some benchmarks.
There are two grids benchmarked:  One is maze-like, so the search will be deep but narrow.  The other is a mostly open field, so the search will be short but with more branches.

For both grids, the benchmarks are run three different ways: With no diagonals, diagonals but no corner-cutting, and diagonals with corner-cutting.
Merge pull request #47 from sbj42/issue46
Add ability to cancel an instance (issue #46)
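
A hedged usage sketch of the `findPath()`/`cancelPath()` API described in the commit messages above; the exact signature of `cancelPath()` is inferred from those messages rather than copied from the code:

```js
var EasyStar = require('easystarjs'); // npm package name, assumed

var easystar = new EasyStar.js();
easystar.setGrid([[0, 0], [0, 0]]);
easystar.setAcceptableTiles([0]);

// findPath() now returns a unique id for the queued calculation.
var pathId = easystar.findPath(0, 0, 1, 1, function (path) {
  // Never invoked if the calculation is cancelled before it completes.
});

// e.g. the actor died or changed its destination before calculate() ran:
easystar.cancelPath(pathId);

// Per the commits above, instances now live in a map keyed by this id with a
// separate instanceQueue preserving order, so a cancel is a single map
// deletion (O(1)) instead of an O(n) array search.

easystar.calculate(); // the cancelled calculation is simply skipped
```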

Owner

prettymuchbryce commented Mar 31, 2017

I like the idea of having benchmarks, especially if it can make it easier for us to tell if there has been some unexpected decrease in performance between changes.

Am I right in assuming a single op is considered to be one complete findPath()?

Do you know why I might be seeing this error when trying to run them?

- 10x10 maze no diagonals                      2189 ops/sec
- 10x10 maze diagonals but no corner-cutting   1954 ops/sec
- 10x10 maze diagonals and corner-cutting      1930 ops/sec
- 10x10 field no diagonals                     1993 ops/sec
- 10x10 field diagonals but no corner-cutting  1997 ops/sec
Chrome 56.0.2924 (Mac OS X 10.12.3) ERROR
  Uncaught RangeError: Maximum call stack size exceeded
  at /Users/bryceneal/projects/sbj42easystar/bin/easystar-0.3.1.js:270

Also, do you think it makes sense to run these in node rather than requiring a web browser? I was also thinking of eventually migrating the other tests to node to make CI easier.

prettymuchbryce and others added some commits Mar 31, 2017

Use synchronous benchmarks
Because these benchmarks use `enableSync`, there is no need to use the benchmarkjs `defer` flag.  This seems to avoid potential "Maximum call stack size exceeded" errors when the benchmarks are fast (possible bug in karma-benchmark).

Contributor

sbj42 commented Mar 31, 2017

Yeah, I was seeing that myself; the last commit should fix it. I think karma-benchmark (or maybe benchmarkjs) assumes that when you resolve the deferred, you did so after some timeout, because it immediately starts the next benchmark run.

That means if the benchmark is fast enough, you blow the stack before it finishes. So I just switched them all to synchronous benchmark functions.
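
In plain Benchmark.js terms (the library karma-benchmark wraps), the two styles look roughly like this; the deferred form is the one that appears to blow the stack when the benchmarked code is fast:

```js
var Benchmark = require('benchmark');

var suite = new Benchmark.Suite();

// Deferred style: Benchmark.js passes a Deferred and waits for resolve().
// If resolve() happens synchronously, the next run can start on the same
// stack, which seems to be what overflows here.
suite.add('findPath (deferred)', {
  defer: true,
  fn: function (deferred) {
    // ... run findPath()/calculate() ...
    deferred.resolve();
  }
});

// Synchronous style used in this PR: with enableSync() the path callback
// fires inside calculate(), so a plain function is enough.
suite.add('findPath (sync)', function () {
  // ... run findPath()/calculate() ...
});

suite.on('cycle', function (event) {
  console.log(String(event.target));
}).run();
```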


Contributor

sbj42 commented Mar 31, 2017

It probably doesn't make a big difference which test framework you go with, but it's maybe slightly easier for someone to get on board if they can just run `npm test` and don't have to fiddle with a browser. I don't know much about setting up CI.


Contributor

sbj42 commented Mar 31, 2017

Am I right in assuming a single op is considered to be one complete findPath()?

Yes.

FYI, I set up the benchmarking because I think I see some easy opportunities for performance improvements, mainly in avoiding the `something[x + '_' + y]` pattern: string concatenation for a combined key seems to be slower than two separate lookups.
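
For illustration, the two lookup styles being compared (variable names here are made up):

```js
var x = 3, y = 7;

// Combined string key: every read and write concatenates a fresh key string.
var visitedByKey = {};
visitedByKey[x + '_' + y] = true;
var seen = visitedByKey[x + '_' + y];

// Two separate lookups: one object (or array) per row, no string building.
var visitedByRow = {};
(visitedByRow[y] = visitedByRow[y] || {})[x] = true;
var seenToo = Boolean(visitedByRow[y] && visitedByRow[y][x]);

console.log(seen, seenToo); // true true
```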


Owner

prettymuchbryce commented Mar 31, 2017

Nice, it seems better now.

                        EasyStar.js
- 10x10 maze no diagonals                      4003 ops/sec
- 10x10 maze diagonals but no corner-cutting   3101 ops/sec
- 10x10 maze diagonals and corner-cutting      3028 ops/sec
- 10x10 field no diagonals                     3822 ops/sec
- 10x10 field diagonals but no corner-cutting  7166 ops/sec
- 10x10 field diagonals and corner-cutting     8378 ops/sec

                        EasyStar.js
- 10x10 maze no diagonals                      4216 ops/sec
- 10x10 maze diagonals but no corner-cutting   3125 ops/sec
- 10x10 maze diagonals and corner-cutting      2950 ops/sec
- 10x10 field no diagonals                     3642 ops/sec
- 10x10 field diagonals but no corner-cutting  6999 ops/sec
- 10x10 field diagonals and corner-cutting     8526 ops/sec

                        EasyStar.js
- 10x10 maze no diagonals                      3805 ops/sec
- 10x10 maze diagonals but no corner-cutting   3143 ops/sec
- 10x10 maze diagonals and corner-cutting      3009 ops/sec
- 10x10 field no diagonals                     3818 ops/sec
- 10x10 field diagonals but no corner-cutting  7265 ops/sec
- 10x10 field diagonals and corner-cutting     8430 ops/sec

If you have ideas around quick performance wins I'm all for that. 👍


Owner

prettymuchbryce commented Mar 31, 2017

Kind of interesting that the performance varies quite a bit between tests.


Contributor

sbj42 commented Mar 31, 2017

Yeah, benchmarking is a dark art. As far as I can tell, you can only judge performance by its general order of magnitude.


Contributor

sbj42 commented Mar 31, 2017

I think you'll want at least one "big" grid benchmark, to make sure any performance improvements don't only help with little grids. Let me add one of those before you merge this.

sbj42 added some commits Mar 31, 2017

Add a 1000x1000 benchmark.
This pushes the limits of `karma-benchmarkjs-reporter`, in that it doesn't show precise results when ops take more than one second.

But a test like this is important to avoid optimizing the small-grid case at the expense of the large-grid case.
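
A sketch of how a large grid for such a benchmark can be generated; the obstacle pattern below is illustrative, not the one used in the commit:

```js
// Build a 1000x1000 mostly-open grid of walkable tiles (0) with a sparse,
// deterministic scattering of obstacles (1) so the search still branches.
var SIZE = 1000;
var grid = [];
for (var y = 0; y < SIZE; y++) {
  var row = [];
  for (var x = 0; x < SIZE; x++) {
    row.push(x % 13 === 6 && y % 7 === 3 ? 1 : 0);
  }
  grid.push(row);
}

var easystar = new EasyStar.js(); // EasyStar assumed to be in scope
easystar.setGrid(grid);
easystar.setAcceptableTiles([0]);
easystar.enableSync();
easystar.findPath(0, 0, SIZE - 1, SIZE - 1, function (path) {
  // one "op" = one complete corner-to-corner search
});
easystar.calculate();
```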

Contributor

sbj42 commented Mar 31, 2017

Ok I'm all set here.

I'm not sure I did the fetch right to rebase this off the head of your master. If you decide to pull this and it doesn't seem right, let me know and I'll close this PR and try again with a cleaner rebase.


Owner

prettymuchbryce commented Mar 31, 2017

Can you remove the changes to the files in /bin/? These should really only reflect the version specified in the file name.

@prettymuchbryce prettymuchbryce merged commit aa30ae6 into prettymuchbryce:master Apr 3, 2017
