
Variety of speed ups for parsing mappings #186

Merged
merged 5 commits into mozilla:master from fitzgen:jit-profiling Jun 26, 2015
Conversation

@fitzgen (Contributor) commented Jun 18, 2015

This builds on top of #185

This is a variety of speed-ups for parsing mappings. The most notable is rolling our own sort in JS. See

// It turns out that some (most?) JavaScript engines don't self-host
// `Array.prototype.sort`. This makes sense because C++ will likely remain
// faster than JS when doing raw CPU-intensive sorting. However, when using a
// custom comparator function, calling back and forth between the VM's C++ and
// JIT'd JS is rather slow *and* loses JIT type information, resulting in
// worse generated code for the comparator function than would be optimal. In
// fact, when sorting with a comparator, these costs outweigh the benefits of
// sorting in C++. By using our own JS-implemented Quick Sort (below), we get
// a ~3500ms mean speed-up in `bench/bench.html`.
for a description of why that is.

r? @ejpbruel
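For reference, a JS-implemented Quick Sort along the lines described in that comment might look like this. This is a minimal sketch, not the exact code from this PR; the `swap` helper and the Lomuto-style `partition` are illustrative choices:

```javascript
// Swap the elements at indices x and y in place.
function swap(ary, x, y) {
  var temp = ary[x];
  ary[x] = ary[y];
  ary[y] = temp;
}

// Lomuto-style partition of ary[p..r]: pick ary[r] as the pivot and move
// everything that compares <= pivot to its left. Returns the pivot's
// final index.
function partition(ary, comparator, p, r) {
  var pivot = ary[r];
  var i = p - 1;
  for (var j = p; j < r; j++) {
    if (comparator(ary[j], pivot) <= 0) {
      i++;
      swap(ary, i, j);
    }
  }
  swap(ary, i + 1, r);
  return i + 1;
}

// Sort the subarray ary[p..r] (inclusive) in place.
function doQuickSort(ary, comparator, p, r) {
  if (p < r) {
    var q = partition(ary, comparator, p, r);
    doQuickSort(ary, comparator, p, q - 1);
    doQuickSort(ary, comparator, q + 1, r);
  }
}

// Public entry point, analogous to Array.prototype.sort(comparator) but
// staying entirely inside JIT'd JS.
function quickSort(ary, comparator) {
  doQuickSort(ary, comparator, 0, ary.length - 1);
  return ary;
}
```

Because the comparator is invoked from JS rather than across the C++/JS boundary, the JIT can keep type information for it and potentially inline it.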

@fitzgen fitzgen force-pushed the fitzgen:jit-profiling branch from fff8e9a to 245d081 Jun 18, 2015
@@ -10,7 +10,7 @@ function benchmark(name, setup, action) {

   // Warm up the JIT.
   var start = Date.now();
-  while ((Date.now() - start) < 10000 /* 10 seconds */) {
+  while ((Date.now() - start) < 5000 /* 5 seconds */) {

@ejpbruel (Contributor) commented Jun 24, 2015

This seems race-condition prone, since the results are timing-dependent. Is there a better way to warm up the JIT that does not suffer from timing effects?

@fitzgen (Author) commented Jun 26, 2015

Unfortunately, because each parse takes so long, we can't just run it n times before measuring, and I don't think there is a better way.
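The time-budgeted warm-up under discussion has roughly this shape. This is a sketch based on the hunk above, not the actual `bench/bench.html` harness; the `warmUpMs` parameter and the returned object are illustrative:

```javascript
// Run `action` repeatedly until a wall-clock budget is spent, so the JIT
// has a chance to compile the hot paths, then measure one more run.
// Because a single parse takes so long, running the action a fixed number
// of times would either blow the budget or under-warm the JIT.
function benchmark(name, setup, action, warmUpMs) {
  var data = setup();

  // Warm up the JIT within the given time budget.
  var start = Date.now();
  while ((Date.now() - start) < warmUpMs) {
    action(data);
  }

  // Measure a single post-warm-up run.
  var measureStart = Date.now();
  action(data);
  return { name: name, elapsed: Date.now() - measureStart };
}
```

The trade-off ejpbruel points out is real: the number of warm-up iterations depends on machine speed, so results can vary across runs and hosts.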

* @param {Number} r
* End index of the array
*/
function doQuickSort(ary, comparator, p, r) {

@ejpbruel (Contributor) commented Jun 24, 2015

Function calls are supposed to be quite expensive in JavaScript. I wonder if we could do even better here by using a non-recursive merge sort, or a similar non-recursive sorting algorithm.

@fitzgen (Author) commented Jun 26, 2015

Yes, and the JIT coach is pointing out that these calls aren't getting inlined, and for some reason doQuickSort often stays in the baseline compiler. Very surprising given the performance gains it still manages to provide! Definitely a place to investigate more.
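The non-recursive alternative ejpbruel suggests could be a bottom-up merge sort, which replaces the recursion with two loops. A sketch (illustrative only, not code from this PR; it removes the recursive call overhead, though the per-comparison comparator call remains):

```javascript
// Bottom-up (iterative) merge sort: merge runs of width 1, 2, 4, ...
// until the whole array is a single sorted run. Returns a sorted array;
// the input is left untouched.
function bottomUpMergeSort(ary, comparator) {
  var n = ary.length;
  var src = ary.slice();
  var dst = new Array(n);

  for (var width = 1; width < n; width *= 2) {
    for (var lo = 0; lo < n; lo += 2 * width) {
      var mid = Math.min(lo + width, n);
      var hi = Math.min(lo + 2 * width, n);

      // Merge the runs src[lo..mid) and src[mid..hi) into dst.
      var i = lo, j = mid, k = lo;
      while (i < mid && j < hi) {
        dst[k++] = comparator(src[i], src[j]) <= 0 ? src[i++] : src[j++];
      }
      while (i < mid) dst[k++] = src[i++];
      while (j < hi) dst[k++] = src[j++];
    }
    // Ping-pong the buffers instead of copying back.
    var tmp = src; src = dst; dst = tmp;
  }
  return src;
}
```

As a bonus, merge sort is stable and has a worst-case O(n log n) bound, unlike quick sort's O(n²) worst case; the cost is the O(n) auxiliary buffer.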

* @param {Number} r
* End index of the array.
*/
function partition(ary, comparator, p, r) {

@ejpbruel (Contributor) commented Jun 24, 2015

Why don't you inline this function? It seems to be only used from one place. We could save the overhead of a single function call for each recursive step.
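Manually inlining the partition step might look like this. A sketch only, not code from this PR, using the same Lomuto-style partition assumed above; the partition body is folded directly into the recursive sort so each step saves one call:

```javascript
// Quick sort over ary[p..r] (inclusive), with the partition step inlined
// rather than factored into its own function.
function doQuickSortInlined(ary, comparator, p, r) {
  if (p < r) {
    // Inlined partition: choose ary[r] as the pivot and move everything
    // that compares <= pivot to its left.
    var pivot = ary[r];
    var i = p - 1;
    for (var j = p; j < r; j++) {
      if (comparator(ary[j], pivot) <= 0) {
        i++;
        var t1 = ary[i]; ary[i] = ary[j]; ary[j] = t1;
      }
    }
    var t2 = ary[i + 1]; ary[i + 1] = ary[r]; ary[r] = t2;
    var q = i + 1;

    doQuickSortInlined(ary, comparator, p, q - 1);
    doQuickSortInlined(ary, comparator, q + 1, r);
  }
}
```

Whether this wins in practice depends on the engine: a JIT that inlines the separate `partition` anyway gains nothing from doing it by hand, which is why profiling (as with the JIT coach mentioned above) matters.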

@ejpbruel (Contributor) commented Jun 24, 2015

Patch looks good to me. r+

These are great improvements. It's quite fun how we're scraping the barrel of JS performance with this library :-)

fitzgen added 5 commits Jun 18, 2015
It turns out that some (most?) JavaScript engines don't self-host `Array.prototype.sort`. (Full commit message quoted in the PR description above.)
@fitzgen fitzgen force-pushed the fitzgen:jit-profiling branch from 245d081 to 095a1eb Jun 26, 2015
fitzgen added a commit that referenced this pull request Jun 26, 2015
Variety of speed ups for parsing mappings
@fitzgen fitzgen merged commit d81aed9 into mozilla:master Jun 26, 2015
1 check passed
continuous-integration/travis-ci/pr The Travis CI build passed