Glitch-free updates #272

Closed
raimohanska opened this Issue Oct 30, 2013 · 22 comments


@raimohanska
Contributor

A glitch in FRP is a temporary violation of an invariant.

For instance, if you use Bacon.js 0.6.x and run this

a = Bacon.sequentially(10, [1, 2])
b = a.map((x) -> x * 2)
c = Bacon.combineAsArray([a, b])
c.log()

You'll get

[ 1, 2 ]
[ 2, 2 ]
[ 2, 4 ]

where the pair [2,2] is a glitch that occurs because c is first updated by the changed value of a and only then by the changed value of b. In glitch-free FRP you should get just

[ 1, 2 ]
[ 2, 4 ]

In the feature/dependency-graph-dispatch branch I'm working on an improved event-dispatching mechanism that uses a dependency graph to guarantee glitch-free updates. This will improve the reliability of all the combine* methods as well as sampledBy and takeUntil.

I'm planning to release version 0.7.0 with this new dispatch mechanism. Currently it's still a work in progress, and I'd be very glad to get comments and/or pull requests that add test coverage and fix possible bugs.
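The glitch above, and the fix, can be illustrated with a tiny stand-alone model of the dispatch problem. This is a hypothetical sketch in plain JavaScript, not Bacon.js internals; `node`, `rank`, and the push functions are illustrative names. The key idea is that flushing dirty observables in dependency (rank) order makes each of them recompute exactly once, after all of its inputs.

```javascript
// Hypothetical stand-alone model of the dispatch problem (not Bacon.js
// internals). Each node knows its dependencies and its rank, i.e. its
// depth in the dependency graph.
function node(deps, compute, initial) {
  return {
    deps,
    compute,
    value: initial,
    rank: deps.length ? Math.max(...deps.map((d) => d.rank)) + 1 : 0,
  };
}

// The network from the issue: b = a.map(x => x * 2), c = combine(a, b).
const a = node([], null, 1);
const b = node([a], (av) => av * 2, 2);
const c = node([a, b], (av, bv) => [av, bv], [1, 2]);

const emitted = { glitchy: [], glitchFree: [] };

// Glitchy dispatch: c recomputes eagerly when notified of a's change
// (while b is still stale), and again after b has updated.
function pushGlitchy(value) {
  a.value = value;
  c.value = c.compute(a.value, b.value); // stale b → the [2, 2] glitch
  emitted.glitchy.push(c.value);
  b.value = b.compute(a.value);
  c.value = c.compute(a.value, b.value);
  emitted.glitchy.push(c.value);         // finally correct: [2, 4]
}

// Glitch-free dispatch: collect the dirty dependents, then flush in rank
// order so b (rank 1) recomputes before c (rank 2), and c fires only once.
function pushGlitchFree(value) {
  a.value = value;
  const dirty = [c, b];                  // dependents, in arbitrary order
  dirty.sort((x, y) => x.rank - y.rank);
  for (const n of dirty) {
    n.value = n.compute(...n.deps.map((d) => d.value));
    if (n === c) emitted.glitchFree.push(n.value);
  }
}

pushGlitchy(2);
console.log(emitted.glitchy);    // [ [2, 2], [2, 4] ]
pushGlitchFree(2);
console.log(emitted.glitchFree); // [ [2, 4] ]
```

The sort here is per-push for brevity; a real dispatcher would precompute ranks once, which is what makes registering dependencies up front pay off.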

@raimohanska
Contributor

TODO: consider rewriting takeWhile; the current implementation is quite ugly (though it seems correct).

@raimohanska
Contributor

Added a perf test for a deeply nested combineTemplate structure. Here are the full perf test results for both branches (updated for revision 63890eb).

master

zip x 3,204 ops/sec ±2.69% (83 runs sampled)
Bacon.combineTemplate.sample x 569 ops/sec ±2.77% (82 runs sampled)
Bacon.combineTemplate (deep) x 25.21 ops/sec ±2.82% (47 runs sampled)
Bacon.combineTemplate x 291 ops/sec ±2.36% (85 runs sampled)
EventStream.map x 6,569 ops/sec ±2.96% (80 runs sampled)
EventStream.scan x 2,933 ops/sec ±3.05% (85 runs sampled)
EventStream.toProperty x 3,007 ops/sec ±3.15% (89 runs sampled)

feature/dependency-graph-dispatch

zip x 2,624 ops/sec ±4.06% (82 runs sampled)
Bacon.combineTemplate.sample x 568 ops/sec ±3.08% (80 runs sampled)
Bacon.combineTemplate (deep) x 29.33 ops/sec ±2.40% (53 runs sampled)
Bacon.combineTemplate x 343 ops/sec ±2.45% (86 runs sampled)
EventStream.map x 5,360 ops/sec ±2.84% (84 runs sampled)
EventStream.scan x 4,697 ops/sec ±2.92% (80 runs sampled)
EventStream.toProperty x 4,888 ops/sec ±2.96% (81 runs sampled)

The difference is not dramatic at all.

One interesting thing is that dependency tracking does not affect the results, i.e. if I completely disable the dependsOn function in the feature/dependency-graph-dispatch branch, the result is identical. However, with a real-world application it seems to make a world of difference. So I ended up optimizing the dependsOn function a bit. It would be nice to have a perf test that demonstrates the optimizations, in addition to the real-life app.

Another notable thing is that the performance of scan and toProperty is now actually much better than before, because the overhead introduced by PropertyTransaction is now gone. Instead there's some new overhead related to Bacon.when, which slows down zipping by about 20%.
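The dependsOn optimization mentioned above can be sketched roughly like this (hypothetical names, not the branch's actual code): the transitive walk over the dependency tree is memoized per pair, which is safe as long as the statically registered dependencies never change after construction.

```javascript
// Naive check: does observable x transitively depend on y?
// Walks the dependency tree on every call.
function dependsOn(x, y) {
  if (x === y) return true;
  return x.deps.some((d) => dependsOn(d, y));
}

// Memoized variant: cache the answer per (x, y) pair. Valid because the
// static dependency graph is fixed at construction time.
const cache = new Map();
function cachedDependsOn(x, y) {
  const key = x.id + ':' + y.id;
  if (!cache.has(key)) cache.set(key, dependsOn(x, y));
  return cache.get(key);
}

// Tiny graph: c depends on a both directly and via b.
const a = { id: 'a', deps: [] };
const b = { id: 'b', deps: [a] };
const c = { id: 'c', deps: [a, b] };

console.log(cachedDependsOn(c, a)); // true
console.log(cachedDependsOn(a, c)); // false
```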

@raimohanska
Contributor

Did a bunch of optimizations and updated the results above.

The event-dispatch part of Bacon.when is the place where we could probably squeeze a lot of CPU cycles. There's a lot of looping-through-patterns-and-sources involved right now.

@raimohanska
Contributor

For me, the bottom line regarding performance is that this is good enough for integration to master. Adding new features is bound to introduce overhead, and this overhead (20% in cases where combine methods are not involved) should be acceptable for glitch-free updates. And indeed toProperty and scan are now 50% faster! Even combineTemplate is now faster than in master.

Comments?

@philipnilsson
Member

I'm looking forward to trying this out. I've been too busy to sit down and go through the code, but will try to figure it out ASAP, though likely not this week.

Have you written any example programs we can check out where the behaviour is now different? Is there some kind of precise semantics as to what order simultaneous events happen in? Will it always be based on "local" subscription order?

I also don't think we should rush this in too soon. I will try to upgrade a project of mine to see how it behaves and comment on the results, but of course merge it if you think it's ready.


@raimohanska
Contributor

The new test cases under "Property update is atomic" are examples where the behavior has changed for the better (they fail in master). Also the new test cases for takeUntil/skipUntil involving "evil twist" are ones that would fail on master. So the new versions are independent of subscription order. I'm also planning to write a blog post on the 0.7.0 improvements soon, including examples. And of course, in the very description of this issue, there's an example for you.

I'm not into rushing either. My plan is also to try the new version with a work project of mine. Maybe we should tag the current version as 0.7.0-rc1?

And as always, I'd really appreciate you reviewing the changed code! I'm sure you'll find something to improve!

@raimohanska
Contributor

I tested the new stuff with a large project and fixed a few problems. Now it seems that bacon.jquery models don't co-operate with the new version.

@raimohanska
Contributor

In dd40a01 I changed the update mechanism to delay updates only if the dependency graph indicates there's a need for that, i.e. there are duplicates in the dependency graph. This fixes the compatibility issue with bacon.jquery. Still, it doesn't bring atomic updates to bacon.jquery Models either.
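The idea in dd40a01 can be sketched as follows (illustrative names only, not the actual implementation): an observable needs transaction-delayed updates only when some upstream source can reach it along two different paths, which shows up as a duplicate in its transitive dependency list.

```javascript
// Collect all transitive dependencies of a node, duplicates included.
function transitiveDeps(node, acc = []) {
  for (const d of node.deps) {
    acc.push(d);
    transitiveDeps(d, acc);
  }
  return acc;
}

// A duplicate among the transitive deps means a diamond shape in the
// graph, i.e. a glitch risk; only then is delayed (atomic) dispatch needed.
function needsDelayedUpdate(node) {
  const deps = transitiveDeps(node);
  return new Set(deps).size < deps.length;
}

const a = { deps: [] };
const b = { deps: [a] };           // b = a.map(...)
const diamond = { deps: [a, b] };  // combine(a, b): a reachable twice
const x = { deps: [] };
const simple = { deps: [x, a] };   // combine of two independent sources

console.log(needsDelayedUpdate(diamond)); // true  → dispatch in a transaction
console.log(needsDelayedUpdate(simple));  // false → plain eager dispatch is safe
```

Skipping the transaction machinery in the `simple` case is what restores compatibility with code that relies on plain, immediate dispatch.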

@philipnilsson
Member

I've just tried updating, and I'm also having some issues where I was basically relying on the subscription order working as it currently does. It has to do with functionality that has "feedback", as in e.g. TodoMVC. I'll have to see if I can find some better way or workaround to express what I need.

On the plus side, it seems that the behavior is now more "predictable" than before, in the sense that I can't manage to change it. However it just isn't the behavior I want. :)

@raimohanska
Contributor

Did you use 0.7.0-rc2?

@philipnilsson
Member

Nope, seems like the previous behaviour is back! However, this means that it works again because I'm forcing a subscription early... Will try to figure out what happens exactly in the new branch.

@raimohanska
Contributor

Found a new problem related to Observables created while dispatching: b40b6bd

I haven't debugged this in isolation yet, because it took quite a while to isolate the problem from my application code. Might be related to your problem. Pls fix :)

Maybe we should somehow separate the new observables from the current "transaction"?

@raimohanska
Contributor

Ah, fixed for 0.7.0-rc3

@philipnilsson
Member

Ok, had a first really quick look while something was compiling :) The basic idea is that the order of subscribers is determined by the order in which they appear when reading the code from top to bottom (modulo whatever you'd decide in e.g. withDescribe), rather than by when some specific onValue/subscribe call happens at a later time.

This works by explicitly registering a subset of dependencies in advance when something like .flatMap is called. This subset is intuitively described as "the stuff you'd need to pretty print the expression that generated the observable". Correct so far?
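That description could be sketched concretely like this (hypothetical names, not Bacon.js code): combinators record their sources as static dependencies at construction time, before any subscriber attaches, so dispatch order is fixed by the structure of the expression rather than by subscription timing.

```javascript
// Hypothetical sketch: dependencies are registered when the combinator is
// called, not when someone subscribes.
let nextId = 0;
function observable(deps) {
  return { id: nextId++, deps }; // static deps, known up front
}

// The mapping function is irrelevant to the dependency graph,
// so this sketch simply ignores it.
function map(src, f) { return observable([src]); }
function combine(...srcs) { return observable(srcs); }

const a = observable([]);
const b = map(a, (x) => x * 2);
const c = combine(a, b);

// "The stuff you'd need to pretty print the expression that generated c":
console.log(c.deps.map((d) => d.id)); // [ 0, 1 ] — the ids of a and b
```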

@raimohanska
Contributor

Well, yeah, never thought it exactly that way but sounds right...

@raimohanska
Contributor

Another similar issue (this time with Property.delay and some others) fixed in 0.7.0-rc4

@raimohanska
Contributor

There's still some strange performance issue I discovered with a big project. Gotta debug that next week.

@raimohanska
Contributor

Ok, performance issue tamed. There was some crazy extra line that wasted a lot of energy for nothing. Also, I implemented a simple dependency caching mechanism. A serious ALGORITHM might be in order to really sort dependencies FAST in UpdateBarrier.flush. Anyways, it seems fast enough for me as is, for now.
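The flush-ordering problem mentioned above is essentially a topological sort of the dirty observables. A sketch using Kahn's algorithm (illustrative only, not UpdateBarrier's actual code):

```javascript
// Kahn's algorithm over the set of dirty nodes: repeatedly flush a node
// whose dirty dependencies have all been flushed already.
function topoSort(nodes) {
  const dirty = new Set(nodes);
  const inDegree = new Map(
    nodes.map((n) => [n, n.deps.filter((d) => dirty.has(d)).length])
  );
  const queue = nodes.filter((n) => inDegree.get(n) === 0);
  const order = [];
  while (queue.length) {
    const n = queue.shift();
    order.push(n);
    for (const m of nodes) {
      if (m.deps.includes(n)) {
        inDegree.set(m, inDegree.get(m) - 1);
        if (inDegree.get(m) === 0) queue.push(m);
      }
    }
  }
  return order;
}

// The diamond from the issue description: b = a.map(...), c = combine(a, b).
const a = { name: 'a', deps: [] };
const b = { name: 'b', deps: [a] };
const c = { name: 'c', deps: [a, b] };

const order = topoSort([c, b, a]).map((n) => n.name);
console.log(order); // [ 'a', 'b', 'c' ] — every node flushes after its deps
```

Sorting by precomputed rank instead of re-running the sort per transaction would be the obvious next optimization; this sketch just pins down the ordering invariant.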

@raimohanska
Contributor

So, in 0.7.0-rc5 performance seems good. I'm almost ready to give this a green light. Will do some more testing with a real application before that.

@raimohanska
Contributor

I seem to be too busy with "real work" :) Still gonna do this though.

@raimohanska
Contributor

In 0.7.0-rc6 I've fixed issues with subscribing while still dispatching. The problem was that out-of-date values were sometimes sent. Now it seems good. Tested with a real-world application too, with all tests passing. Performance almost equal to 0.7.0-rc5.

I'm now satisfied with the glitch-free update implementation.

@raimohanska
Contributor

Implementation exists in branch feature/0.7.
