
WIP: Handles #9

Open · wants to merge 13 commits into briancavalier:master from bergus:handles
Conversation

bergus
Contributor

@bergus bergus commented Jun 23, 2016

Hi,
you might want to have a look at this approach for keeping as few references as possible around. The near approach in general is nice, but requires actually installing handlers on the promise. If a promise with a deep become chain is merely kept around, this doesn't work.
Best thing: I even wrote tests for this :-) The current master branch fails them.

I don't actually want you to merge this - it's quite a radical redesign - only discuss it as a proof of concept.

@briancavalier
Owner

Sounds interesting, @bergus. I haven't had a chance to look at it in detail yet (particularly interested in the tests), but hopefully later tonight.

Can you look into why travis is failing?

@bergus
Contributor Author

bergus commented Jun 23, 2016

Oh, I have time to wait, I'm not actively pursuing this path in my master branch (where cancellation is implemented) anyway.

Can you look into why travis is failing?

Apparently related to 332e937, which appears to make the build script work only under Windows now :-/ I really didn't know better; make-dist-dir was the first hack that worked for me.

@coveralls

coveralls commented Jun 23, 2016

Coverage Status

Coverage remained the same at 100.0% when pulling f10cde4 on bergus:handles into 6e4fe0f on briancavalier:master.

@bergus
Contributor Author

bergus commented Jun 23, 2016

OK, using 490bf85 instead fixes the build problem.

@briancavalier
Owner

Ah, cool, thanks for fixing travis. I'm about to dive into this.

import { resolve, reject, fulfill, never } from '../src/main'
import getCounts, { collect, formatResults, sumResults } from './lib/refcount-util'

describe('reference counts', () => {
Owner

Can you give me a rundown, in a bit more detail, of what this is testing, and what the Handle approach "fixes" or improves? I see the words "reference counts", but this test and refcount-util are quite dense :) I see it counting something and then asserting the count against a threshold, but it's difficult to understand exactly what's going on.

Thanks!

Contributor Author

You might want to have a look at the commented "should be interesting" test cases. Uncomment them and run them, maybe even replace that sumResults callback with log.

The refcount is based on what a garbage collector would do to decide which objects have to be retained in memory. Certain "root" objects are uncollectable: the global scope, all scopes currently on the stack, and any objects (typically callbacks) that ongoing asynchronous operations cling onto. The collector then has to keep everything that is referenced by them. The collect method does just this - you pass in sources, and it counts all the objects referenced by them, giving an overview by constructor name (which you can filter with targets). To avoid counting anything multiple times, it keeps a WeakSet. (Actually, now that I'm rethinking it, one could just have put everything into a Set first and then iterated that for the count.) Because creed uses objects everywhere instead of callbacks, we can inspect everything.
In a promise library, the sources are typically a) the promise that marks the end of the chain, which could be stored somewhere (for example in a cache) - this is what getCounts stores as result - and b) the resolver callbacks of running async operations, which reference the promises they resolve. This is where monitoredDelay comes in: it can be used to simulate an async action (taking a fixed 5ms). A getCounts call simulates one run of a promise chain - it returns a future that is resolved when no more delays are open. The monitoredTimeout keeps track of the currently ongoing actions, adding the values that the "closure" captures (the delay promise and its result) to the globals set and removing them when the timeout is done.
In one run of getCounts, right before and after every delay is fulfilled, as well as after the initial execution, counts of objects are collected with either result, globals or the current captures as the source, then passed to the report callback for analysis and stored in results. For entertainment(?), collect does not only count how many objects of which type it sees, but also how many of them have already been encountered during this run and how many are new since the last collection.

I hope the recursions that build the respective "chains" are self-explanatory. Use the log to get an intuition for the objects being created over time. The assertions check that the counts do not exceed a certain threshold.
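The counting idea described above can be sketched roughly like this - a simplified, self-contained illustration of the DFS through object properties (the names collect and targets come from the comment; this is not the actual refcount-util code, which also tracks previously-seen and newly-created objects):

```javascript
// Sketch: count objects reachable from a set of root ("source") objects,
// grouped by constructor name, walking own enumerable properties.
// A Set prevents counting the same object twice.
function collect (sources, targets) {
  const seen = new Set()
  const counts = {}
  function visit (value) {
    if (value === null || (typeof value !== 'object' && typeof value !== 'function')) return
    if (seen.has(value)) return
    seen.add(value)
    const name = value.constructor ? value.constructor.name : 'Object'
    if (!targets || targets.includes(name)) {
      counts[name] = (counts[name] || 0) + 1
    }
    for (const key of Object.keys(value)) visit(value[key])
  }
  sources.forEach(visit)
  return counts
}
```

A test would then pass the end-of-chain promise (or the set of captured globals) as the sources and assert that the per-constructor counts stay below a threshold as the chain resolves.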

@briancavalier
Owner

A few observations about the effect of this branch so far on performance. Here are some representative results of the doxbee test (in perf/) from master:

results for 10000 parallel executions, 1 ms per I/O op

file                                 time(ms)  memory(MB)
promises-creed-generator.js               199       27.89
callbacks-baseline.js                     214       36.69
promises-bluebird-generator.js            251       32.02
promises-cujojs-when-generator.js         263       42.81
promises-creed.js                         337       52.95
promises-creed-algebraic.js               343       51.78
promises-bluebird.js                      351       50.79
promises-cujojs-when.js                   423       64.81

And from this branch:

results for 10000 parallel executions, 1 ms per I/O op

file                                 time(ms)  memory(MB)
callbacks-baseline.js                     200       36.79
promises-bluebird-generator.js            266       31.74
promises-cujojs-when-generator.js         289       39.18
promises-creed-algebraic.js               342       50.40
promises-creed.js                         350       51.60
promises-creed-generator.js               356       43.86
promises-bluebird.js                      356       49.77
promises-cujojs-when.js                   459       64.92

Interestingly, memory usage has gone down ever so slightly for the normal promise and algebraic tests, though they've taken a slight CPU hit. However, coroutine/generator memory usage has gone up significantly, and it has taken a significant CPU hit as well.

Some of the conflation of objects (like Future having a run() method) was to reduce object allocations. I'm guessing that there are more object allocations going on now. I wonder if introducing inheritance into the actions has also had an effect.

Anyway, I realize this PR isn't your ultimate goal, and cancellation is. I just wanted to see what effect this new Handle architecture might have on memory usage in the tests. If creed can be more GC friendly and still retain (approximately) the perf characteristics in master, that'd be a big win.

Maybe once I understand the tests here a bit more, we can figure out what to do :)

@bergus
Contributor Author

bergus commented Jun 24, 2016

OK, the problem with generators is surely the

this.next = iterator.next.bind(iterator)
this.throw = iterator.throw.bind(iterator)

which I added to simplify and reduce code duplication. I argued to myself that native bind would have to be fast; apparently it is not :-/
Also, the concept of handles that store result pointers does not allow an Action (in this case the Coroutine) to be used on multiple different promises, which leads to more "empty" handles being generated. I'll have a look at whether I can fix that.
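The bind overhead can be avoided by keeping a reference to the iterator and dispatching through it directly, instead of pre-allocating bound next/throw functions. A sketch of that alternative (CoroutineStep is a hypothetical name, not creed's actual Coroutine class):

```javascript
// Sketch: hold the iterator and call its methods directly, avoiding the
// allocation and call overhead of iterator.next.bind(iterator).
class CoroutineStep {
  constructor (iterator) {
    this.iterator = iterator
  }
  // resume the generator with a fulfillment value
  resume (value) {
    return this.iterator.next(value)
  }
  // resume the generator with an error
  fail (error) {
    return this.iterator.throw(error)
  }
}

function * gen () {
  const x = yield 1
  return x + 1
}
```

The trade-off is one extra property access per step versus two bound-function allocations per coroutine.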

Some of the conflation of objects (like Future having a run() method) was to reduce object allocations. […] I wonder if introducing inheritance into the actions has also had an effect.

I don't think so. There is still usually only one Future and one Action (which is a subclass of Handle) per chaining method. Basically, I moved ref (and the array-like storage) onto the handle to achieve the memory layout I needed to pass the scenario in the garbage test.

Also, I moved run into the handle, which seemed more fitting now. Oh wait, in fact it should be possible to drop Continuation now; I'll have to try that.
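The layout change being discussed, in miniature: the reference to the eventual result lives on a shared Handle rather than on each Future, so the outermost future reaches the resolution in two hops (.handle.ref). This is a toy two-future illustration with assumed names, not creed's actual Handle code (which also covers longer chains and handler storage):

```javascript
// Sketch: a shared Handle holds the current resolution target.
class Handle {
  constructor (ref) {
    this.ref = ref
  }
}

class Future {
  constructor () {
    // a pending future's handle initially points back at itself
    this.handle = new Handle(this)
  }
  become (other) {
    if (other instanceof Future) {
      // share the target's handle: both futures now reach the
      // eventual result via the same .handle.ref
      this.handle = other.handle
    } else {
      this.handle.ref = other
    }
  }
  near () {
    const ref = this.handle.ref
    return ref instanceof Future && ref !== this ? ref.near() : ref
  }
}
```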

@coveralls

coveralls commented Jun 24, 2016

Coverage decreased (-0.3%) to 99.682% when pulling 5d9f7cd on bergus:handles into 6e4fe0f on briancavalier:master.

@bergus
Contributor Author

bergus commented Jun 24, 2016

Oh, running these perf tests is simpler than I thought :-) I'm not sure, though, how representative the tests are, and (at least on my machine) the results can vary significantly between runs (the only consistency was the abysmal performance of RSVP, Co, Async and natives).

But it looks like we're back in the game - here are results from a18de4f (master):

file                                 time(ms)  memory(MB)  time(ms)  memory(MB)  time(ms)  memory(MB)  time(ms)  memory(MB)
callbacks-baseline.js                     182       30.95       171       31.01       181       31.04       170       30.94
promises-bluebird-generator.js            536       37.10       717       37.42       535       37.56       574       37.64
promises-bluebird.js                      547       49.18       561       49.13       560       48.31       632       49.15
promises-cujojs-when-generator.js         551       40.66       751       38.55       554       41.50       534       41.47
promises-cujojs-when.js                   588       62.50       634       58.40       606       58.58       608       62.47
promises-creed-generator.js               605       35.76       637       35.18       622       35.79       516       35.43
promises-creed.js                         688       47.90       830       47.98       690       47.79       569       47.91
promises-creed-algebraic.js              1113       49.53       610       49.62      1179       49.57       578       49.59

and from f10cde4 (with the problematic generators)

promises-creed-generator.js               698       36.53       879       36.51       846       36.38

but now (at 5d9f7cd) it looks like this, which seems better than ever:

file                                 time(ms)  memory(MB)  time(ms)  memory(MB)  time(ms)  memory(MB)
callbacks-baseline.js                     194       31.04       158       30.83       163       30.76
promises-bluebird-generator.js            527       37.50       572       37.09       568       37.03
promises-bluebird.js                      552       49.16       646       48.22       529       49.25
promises-cujojs-when-generator.js         569       38.42       595       38.93       560       41.77
promises-cujojs-when.js                   605       62.47       607       62.64       596       58.56
promises-creed-generator.js               577       35.21       539       35.38       522       34.36
promises-creed.js                         596       47.86       599       49.89       681       47.77
promises-creed-algebraic.js               655       49.20       591       49.28       595       49.46

@coveralls

coveralls commented Jun 25, 2016

Coverage decreased (-0.3%) to 99.682% when pulling 268ef61 on bergus:handles into 6e4fe0f on briancavalier:master.

@bergus
Contributor Author

bergus commented Jun 25, 2016

I tried to do some profiling and saw Babel's _classCallCheck quite high in the rankings. So I wondered what would happen if we used ES6 classes natively in Node 6, especially as they are still considered rather slow. Well, this is the result:

                                     time ES6  memory ES6 | time ES5  memory ES5
promises-creed-generator.js               441       33.35 |      675       34.96
promises-creed.js                         459       47.87 |      618       46.81
promises-creed-algebraic.js               474       48.79 |      621       45.38

@briancavalier
Owner

@bergus Thanks for explaining the GC tests. It's pretty cool to see how you're testing it now that I understand :)

I found some of the same issues with babel 5's ES class helpers. That's why I ended up going with loose ES class support in the build. The results with native node 6 classes are really encouraging! I tried it locally on master and also got extremely good results (creed non-generator versions performed better than bluebird generator version, which I had previously believed to be impossible).

Any thoughts on how creed could ship both versions: one that is ES5 compatible, and one for envs where the required ES6 features (maybe at least classes) are available? I know rollup supports jsnext:main in package.json, but I don't know how many people even know about it. I guess that's an option, tho: ship ES6 source with jsnext:main alongside the ES5 dist version with main.
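Concretely, shipping both via package.json could look like this (main and jsnext:main are real fields; "dist/creed.node.js" comes from the build-fix commit in this PR and "src/main.js" from the test imports, so treat the exact paths as a sketch):

```json
{
  "name": "creed",
  "main": "dist/creed.node.js",
  "jsnext:main": "src/main.js"
}
```

Bundlers that understand jsnext:main (like rollup) would pick up the ES6 source, while Node and ES5 consumers fall back to the dist build via main.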

On a related note, at some point, I'd love to update to babel 6. I will probably have time in the next couple of weeks to do it.

How's your cancellation POC coming along?

@briancavalier
Owner

As for the perf test results having high variability ... yeah, they've always been that way, though I don't tend to see as much variability as the results you pasted. The tests came from the bluebird repo, and are intended to be more "real world" than typical micro benchmarks. I mostly use them as a smoke test, and also to prevent large perf regressions.

The variability could just be operating system related. I also try to run them with almost nothing else running, but always end up running them many times to make sure I don't get fooled by one lucky or unlucky outlier.

@bergus
Contributor Author

bergus commented Jun 26, 2016

creed non-generator versions performed better than bluebird generator version, which I had previously believed to be impossible

Yay :-) Same here. I wonder though why memory increased for them from ES5 to ES6.

Any thoughts on how creed could ship both versions

None. Maybe ask the maintainers of rollup for insights. I guess promoting jsnext:main in the Creed docs would be the way to go.

How's your cancellation POC coming along?

Pretty well - see my master branch. I guess most things should be working already, but I have to work on test coverage now.
The code is pretty ugly; maybe I should have started with some more layers of indirection instead of going straight for the optimised structure.

The variability could just be operating system related.

I should try without my browser eating all the system memory :-)
Yeah, the benchmarks are good for smoke detection, but I'd like to have more diverse tests (especially not-so-flat chains). Maybe I can do something on that end as well…

@coveralls

coveralls commented Jul 3, 2016

Coverage decreased (-57.6%) to 42.424% when pulling 3f4d6d3 on bergus:handles into 6e4fe0f on briancavalier:master.

@@ -1,21 +1,22 @@
'use strict'
import { silenceError, isHandled } from './inspect'
import { silenceError } from './Promise' // deferred
Contributor Author

Oh @coveralls, why don't you like circular dependencies? Everything builds, tests and covers fine on my machine.

@coveralls

coveralls commented Jul 5, 2016

Coverage decreased (-0.2%) to 99.809% when pulling 4d3e8a1 on bergus:handles into 6e4fe0f on briancavalier:master.

@@ -1,14 +1,16 @@
import { describe, it } from 'mocha'
import '../src/Promise'
Contributor Author

So this was why. Indeed, circular dependencies are error-prone.
Hm, does silenceError actually need to use an Action and call _runAction to silence the error? It seems it's only ever called directly on Rejected instances anyway.

Owner

does silenceError actually need to use an Action and call _runAction to silence the error?

Not really. My original intent was for silenceError to be safely callable on any promise, which is why it works the way it does (e.g. you can call it safely on a pending promise that eventually rejects). It's not exported in the public API, tho, so we can change how it works internally if that will simplify things.
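If it really is only ever called on already-rejected promises, silencing could reduce to flipping a handled flag directly, with no Action involved. A hypothetical sketch (this Rejected shape, the flag bit, and the pending-case comment are simplified assumptions, not creed's actual internals):

```javascript
// Hypothetical simplification: mark a rejection as handled directly,
// instead of routing through an Action and _runAction.
const HANDLED = 1 << 0 // assumed state-flag bit

class Rejected {
  constructor (value) {
    this.value = value
    this._state = 0
  }
  get handled () {
    return (this._state & HANDLED) !== 0
  }
}

function silenceError (p) {
  if (p instanceof Rejected) {
    p._state |= HANDLED
  }
  // a pending promise that rejects later would still need the
  // Action-based approach to be silenced safely
}
```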

Contributor Author

@bergus bergus Jul 5, 2016

I factored out some commits that would be useful for creed even without cancellation or handles into my bergus/creed:master branch (I guess I'll make a PR).
In cancellation, an Action needs a truthy .promise property to be executed, and for the static silencer I chose never() to be that property (it could have been anything, but I wanted a promise type), which created the circular dependency ErrorHandler -> silenceError -> silencer -> never -> Promise -> ErrorHandler… It's fine as long as Promise.js is included first and the global errorHandler can be instantiated. I wasn't particularly happy, but it worked :-/
I understood your intentions (and a public .silence() method might be more performant and expressive than .catch(() => {})), but I wasn't sure whether silenceError might have been called on futures somewhere (and didn't check). In none of the test cases is silenceError called on anything else, but that's not a proof. If you can assert it's not used otherwise, I should probably rewrite this indeed.
Btw, error tracking might need a major rewrite anyway to be ES7-compatible. The event names for unhandled rejections are now standardised by Node and HTML5, and ES7 mandates that the leaves of rejected promise chains are reported, not the root.
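For reference, the standardised Node event names mentioned here are unhandledRejection and rejectionHandled. A minimal tracker using them could look like this (a sketch of the mechanism only, not creed's ErrorHandler):

```javascript
// Sketch: Node's standardised unhandled-rejection events.
// Record rejections that reach the end of a chain unhandled,
// and forget them again if a handler is attached later.
const pending = new Set()

process.on('unhandledRejection', (reason, promise) => {
  pending.add(promise)
})

process.on('rejectionHandled', promise => {
  pending.delete(promise)
})
```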

* fix quotes of argument to uglify
* not sure whether specifying the path node_modules/mocha/bin/ is an
  antipattern (some comments by npm authors claim so) but istanbul does
  not seem to take the PATH into account
just switch "main" of package.json to "dist/creed.node.js"
benchmarks will automatically use it
* No more dependency injection of `fulfill`/`resolve` functions
* moved around code a bit, especially utils, iteration stuff, combinator functions
* simplified some exports that now export a function, not a function factory
* Some circular dependencies, mostly completely declarative
  - Promise - ErrorHandler, inspect: silenceError
  - Promise - combinators: Future, never, silenceError / race
* tests always import `main` unless they test internals
Store `.ref`erences on the `Action`s (now generalised as `Handle`s)
instead of directly on the `Future`s. That way, when a handle is passed
down `near()` to the eventual resolution, even the reference from the
outermost future is only two hops (`.handle.ref`) instead of many.
Inspecting the object reference graph with a simple DFS through properties
is possible as Creed does not use any closures for its internals
The tests are currently failing :-)
by shortening `.handle` reference chains

Extending test suite to get coverage for such cases
that I introduced with 0100f8f but did not notice
to optimise Coroutines. Extra Handles will now only be created when a second
handler is introduced on a Future, not merely because the handler's ref
doesn't fit. Also, there are no more "unfitting refs" at all :-)
avoiding checks for possibly not existing `.handle` property
introducing a new `near`-like method
hopefully a perf/memory improvement :-)
@coveralls

coveralls commented Jul 11, 2016

Coverage decreased (-0.3%) to 99.654% when pulling a93506f on bergus:handles into 008b3b9 on briancavalier:master.
