Compute binding problem with setter #2093

Closed
dylanrtt opened this Issue Nov 23, 2015 · 19 comments

@dylanrtt
Contributor

dylanrtt commented Nov 23, 2015

There is an issue where computed/virtual properties may not update when they should if a parent field (from which the compute is derived) has a setter. It appears to be a regression from 2.2.9 to 2.3.0.

In the following example, the virtual computed property concat binds directly to foo and to virtualFoo, which is an alias of foo. When foo changes from 'a' to 'b', for instance, concat changes from 'aa' to 'ba'. If it were working correctly, it would then also change to 'bb' in a second change, but it does not; it retains the incorrect value 'ba' because virtualFoo changing (after concat changes) does not notify concat.

This problem does not happen if the setter is removed from foo.

var VM = can.Map.extend({
    // (the define block from the issue, wrapped in can.Map.extend so the
    // snippet stands alone; the VM constructor name is illustrative)
    define: {
        foo: {
            value: 'a',
            // pass-through setter; its mere presence triggers the problem
            set: function(val) { return val; }
        },
        virtualFoo: {
            // alias of foo
            get: function() {
                return this.attr('foo');
            }
        },
        concat: {
            // depends on foo both directly and through virtualFoo
            get: function() {
                return this.attr('foo') + this.attr('virtualFoo');
            }
        }
    }
});

    foo ----.
     |       \
 virtualFoo   \
     \         \
      '----- concat

http://jsbin.com/fosohebeye/edit?html,js,console
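
For context, roughly how the jsbin exercises the map above (the vm variable and the logging are illustrative, not copied from the jsbin):

var vm = new VM();

// Listen to the derived property.
vm.bind('concat', function(ev, newVal, oldVal) {
    console.log('concat: ' + oldVal + ' -> ' + newVal);
});

// First change after binding: concat reports 'aa' -> 'ba', and because
// virtualFoo's later change never re-notifies concat, 'bb' never arrives.
vm.attr('foo', 'b');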

While writing this issue, I found that if I bind to virtualFoo first, the problem goes away, because the first change to concat already has the correct value (virtualFoo updates before concat instead of after it).

In all honesty, this took me more than half a day to narrow down and reproduce. I admit I have not looked much at the CanJS test suite, but I get the feeling that many of the tests barely scratch the surface, and that a lot of the regressions I have encountered (not necessarily this one) could have been prevented by more comprehensive test cases that combine features instead of only testing things in their simplest form. I acknowledge that would be difficult and never perfect.

@justinbmeyer

Contributor

justinbmeyer commented Nov 23, 2015

@dylanrtt We have tests for cases extremely similar to this. In fact, these types of race conditions were a primary focus of 2.2. CanJS has over 1700 tests with almost 5000 assertions.

Combining features in tests is not the solution, because:

  • this error can probably be reproduced with just computes.
  • tests that combine features are extremely difficult to debug when regressions happen.
  • the number of feature combinations is effectively infinite. It would be impossible to track which cases are not tested and fill in those gaps. Unit tests written closer to the code make it much easier to identify which APIs or cases are not being hit.

@justinbmeyer justinbmeyer self-assigned this Nov 23, 2015

@justinbmeyer

Contributor

justinbmeyer commented Nov 23, 2015

Note: this is probably a regression from changing binding from breadth-first search to depth-first search in 2.3 to improve compute performance.

@justinbmeyer

Contributor

justinbmeyer commented Nov 23, 2015

The problem can be seen with just computes:

In 2.2.9, change is called just once:

http://justinbmeyer.jsbin.com/ruqufe/1/edit?html,js,console

In 2.3.2, change is called twice:

http://justinbmeyer.jsbin.com/ruqufe/2/edit?html,js,console
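
The jsbins are not reproduced here, but presumably they set up a diamond of plain computes along these lines and count how many times change fires (this sketch is an assumption about the jsbin contents, not their source):

var foo = can.compute('a'),
    virtualFoo = can.compute(function() {
        return foo();
    }),
    concat = can.compute(function() {
        return foo() + virtualFoo();
    }),
    changeCount = 0;

concat.bind('change', function(ev, newVal) {
    changeCount += 1;
    console.log('change #' + changeCount + ': ' + newVal);
});

foo('b'); // reportedly fires once in 2.2.9 and twice in 2.3.2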

@justinbmeyer

Contributor

justinbmeyer commented Nov 23, 2015

@dylanrtt I think the answers to these problems are:

  1. Keep building up our unit test coverage to include more and more cases.
  2. Respond to issues like these quicker. (I'll try to get this in this week).
  3. Make it clear when a release is approximately "production" ready.

For #3, I'm not sure of the best way to do this, because our goal is to be able to make a release whenever our test suite passes and the APIs are stable. I want to avoid an endless and meaningless string of .pre and .beta releases. I suppose the official release article comes out once our minor releases are fairly stable.

@dylanrtt

Contributor

dylanrtt commented Nov 23, 2015

@justinbmeyer Sure, adding more complicated tests isn't fun and is probably the least efficient way to improve test coverage. I just thought I would throw that out there as an idea because I have had too many problems like this in the past and something needs to change.

It's been at the point for a while where I expect nasty regressions in core features like this every time I upgrade because, frankly, that seems to always happen. Having a stable framework is absolutely key to building any complex app.

@justinbmeyer

Contributor

justinbmeyer commented Nov 23, 2015

@dylanrtt I would encourage you to only upgrade once a minor release has gotten past its third patch release. Regressions like this will probably keep happening if you upgrade immediately to a new minor release.

There's just not a good path around this considering resources and time to delivery.

@justinbmeyer

Contributor

justinbmeyer commented Nov 23, 2015

There's just not a good path around this considering resources and time to delivery.

Meaning: if the ultimate goal is to get the new features and performance improvements of a minor release into your code base as soon as possible, we have to get the "regression-filled" minor release out as soon as possible so these bugs get found.

Proactive "random" testing isn't going to discover these faster than getting the release out, people encountering the bugs, good tests being added, and fixes and patches going back out.

If you don't want to be part of that process, I don't blame you, but I think we need it. You can avoid it by waiting for a few patch releases before upgrading.

@dylanrtt

Contributor

dylanrtt commented Nov 23, 2015

It's true that pushing the new version out to a bunch of projects does reveal regressions quickly. I remember hearing that Angular has a system where they automatically upgrade hundreds of apps internally at Google and run all of their tests against the new framework code in order to find bugs before release.

I don't know if you guys are doing anything like that already but if you could adopt something like it, it might turn out to be worthwhile. For example, all Bitovians currently on projects could upgrade and test at various pre-release stages.

@justinbmeyer

Contributor

justinbmeyer commented Nov 23, 2015

@dylanrtt We do upgrade our client apps. But I think we code in a more uniform way, so we're probably less likely to encounter the bugs the community finds.

Btw, if you would like to have a voice in our priorities, please join us on our Wednesday meetings.

Finally, I'm about to submit a fix. I was happy that it didn't affect our hard-earned compute performance gains. Please check it out and let me know if it fixes your overall problem.

@justinbmeyer justinbmeyer added the bug label Nov 23, 2015

@justinbmeyer justinbmeyer added this to the 2.3.3 milestone Nov 23, 2015

@dylanrtt

Contributor

dylanrtt commented Nov 24, 2015

I just tested this out and it fixes the use case I provided in the first post, as well as the ordering issue you demonstrated with computes, but the problem seems to come back when I add more depth to the computes.

var Map = can.Map.extend({
    define: {
        root: {
            value: 'a',
            set: function(val){ return val; }
        },
        child: {
            get: function() {
                return this.attr('root');
            }
        },
        grandChild: {
            get: function() {
                return this.attr('child');
            }
        },
        combine: {
            get: function() {
                return this.attr('root') + this.attr('grandChild');
            }
        }
    }
});

var map = new Map();

map.bind('combine', function(ev, newVal, oldVal) {
    console.log('newVal=' + newVal);
});

map.attr('root', 'b');

The problem still goes away when the setter is removed from root and I'm not sure how that plays a role (because it creates a compute?). I could not reproduce this problem with just computes like you did earlier.

Upon further review, I think you and I might be talking about different problems. It seems you think the problem is that change fires twice in 2.3.2 and once in 2.2.9, but what I initially described was seeing two changes in 2.2.9, which, while it sounds like something I would normally complain about, was not yet my concern. That also no longer seems to be happening in my 2.2.9 example, so now I am confused...

Am I correct in understanding that the goal is to minimize change events while infinitely nesting compute dependencies?

@justinbmeyer

Contributor

justinbmeyer commented Nov 24, 2015

Yes, it's a problem that we worked on solving in 2.2. It's a tricky problem, but we came up with a well reasoned solution.

My guess now is that batching isn't being propagated through events correctly. To make nested computes work, batching and event ordering need to be right. My guess is that set is getting in the way of that. I'll check it out.


@justinbmeyer

Contributor

justinbmeyer commented Nov 24, 2015

Tracking it down a bit more: setters create their own batch. Not sure why that affects things. I'm going to keep digging.

@justinbmeyer

Contributor

justinbmeyer commented Nov 24, 2015

I can break it with just computes and a batch.

var root = can.compute('a'),
    childB = can.compute(function() {
        return root();
    }),
    grandChild = can.compute(function(){
        return childB();
    }),
    combine = can.compute(function() {
        return root() + grandChild();
    });

combine.bind("change", function(ev, newVal) {
    equal(newVal, "bb", "concat changed");
});
can.batch.start();
root('b');
can.batch.stop();
@justinbmeyer

Contributor

justinbmeyer commented Nov 24, 2015

So this breaks in 2.2.9 as well: http://justinbmeyer.jsbin.com/ciwebu/2/edit?html,js,console

I think I know what the problem and the fix are ... but I haven't yet seen what other problems the fix will cause.

@justinbmeyer

Contributor

justinbmeyer commented Nov 25, 2015

OK, so I fixed this, but it introduces about a 25% slowdown in the spinning circle demo. This is due to 2x more bindings and 2x more events.

The solution I came up with is that when "root" computes (or maps) change in a batch, all computes that might depend on those changes are notified. For example, if I have computes like:

var root = can.compute('a'),
    child1 = can.compute(function() {
        return root();
    }),
    child2 = can.compute(function() {
        return root();
    }),
    grandChild1 = can.compute(function() {
        return child1();
    }),
    res = can.compute(function() {
        return child2() + grandChild1();
    });

It might look like:

[diagram: the dependency graph of these computes]

When root changes, it sends out a "change#notify" event which propagates to every compute. Each compute increments a counter that tracks how many dependency "change" events it needs to receive before it can re-calculate its value:

[diagram: the same graph annotated with each compute's expected-change counter]

Once we have that, we start firing "change" events breadth-first, but a compute only re-evaluates once it has received the expected number of "change" events.
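
Here is a minimal standalone sketch of that counting scheme in plain JavaScript. It is not the actual can.compute internals (the Node, notify, and flush names are made up), and it assumes every notified compute's value really changes:

function Node(name, deps, fn) {
    var self = this;
    this.name = name;
    this.fn = fn;
    this.dependents = [];  // computes that read this node
    this.pending = 0;      // "change" events still expected in this batch
    deps.forEach(function(dep) { dep.dependents.push(self); });
    this.value = fn();
}

// Phase 1 ("change#notify"): a root changed, so walk the graph and tell
// every transitive dependent how many "change" events to expect.
function notify(node) {
    node.dependents.forEach(function(dep) {
        dep.pending += 1;
        notify(dep);
    });
}

// Phase 2: fire "change" events breadth-first; a compute re-evaluates only
// when its counter reaches zero, i.e. all of its updated inputs have fired.
function flush(changed) {
    var queue = changed.dependents.slice();
    while (queue.length) {
        var node = queue.shift();
        node.pending -= 1;
        if (node.pending === 0) {
            node.value = node.fn();
            console.log(node.name + ' => ' + node.value);
            queue = queue.concat(node.dependents);
        }
    }
}

// The diamond from the earlier comments:
var root = new Node('root', [], function() { return 'a'; });
var child = new Node('child', [root], function() { return root.value; });
var grandChild = new Node('grandChild', [child], function() { return child.value; });
var combine = new Node('combine', [root, grandChild], function() {
    return root.value + grandChild.value;
});

root.value = 'b';
notify(root);  // combine ends up expecting 2 "change" events (one via root, one via grandChild)
flush(root);   // logs: child => b, grandChild => b, combine => bb (fired once, with the right value)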

@justinbmeyer

Contributor

justinbmeyer commented Nov 25, 2015

I really want to figure out a way to make this work faster than it currently does. Making the extra bindings and producing extra events is what's slowing things down. However, this is a nice approach because it uses existing APIs.

Another alternative would be for computes to add a reference to themselves on their source observables. For example, c2 could add itself to root.__computedBy[c2._cid] = c2. Then, the process of having root inform c2 and all of its other dependent computes would be much faster.
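
A rough sketch of what that registration could look like (the __computedBy name comes from the comment above; everything else, including the recalculate hook, is illustrative):

// Each compute registers itself directly on the observables it reads.
function registerOn(source, compute) {
    source.__computedBy = source.__computedBy || {};
    source.__computedBy[compute._cid] = compute;
}

// When the source changes inside a batch, its dependent computes can be
// found immediately, without walking event bindings.
function notifyDependents(source) {
    var byCid = source.__computedBy || {};
    for (var cid in byCid) {
        byCid[cid].recalculate();  // hypothetical re-evaluation hook
    }
}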

However, this means that any dependent observable will potentially get a __computedBy object.

So, I like the purity of my solution, but not the performance. Maybe there's a way to have both.

@justinbmeyer

Contributor

justinbmeyer commented Nov 25, 2015

It's probably possible to avoid putting __computedBy directly on the object, because we can use a store for this:

computedBy[root._cid][c2._cid] = c2;

can/util/batch would have to know about this structure. But that's probably alright.
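
A sketch of the store-based variant (illustrative only), keyed by _cid so the observable itself stays untouched:

// Module-level store: source _cid -> { compute _cid -> compute }
var computedBy = {};

function register(source, compute) {
    var bucket = computedBy[source._cid] || (computedBy[source._cid] = {});
    bucket[compute._cid] = compute;
}

// can/util/batch would consult this structure when flushing a batch.
function dependentsOf(source) {
    return computedBy[source._cid] || {};
}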

@justinbmeyer

Contributor

justinbmeyer commented Nov 26, 2015

@daffl @dylanrtt After implementing the above, there's about a 20% slow down. Still not happy.

I thought about it some more and realized that what really matters is "depth". I'm thinking it might be faster to calculate the depth of every compute and re-evaluate computes in order of their depth. That way we don't have to traverse the graph twice, which will hopefully keep performance about as fast as it was before.
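
A sketch of the depth-ordering idea (again illustrative, not CanJS internals): record each compute's depth when it binds, then re-evaluate dirty computes in ascending depth order so every input is already fresh by the time a compute runs.

// Depth of a compute = 1 + the deepest observable it reads;
// plain observables count as depth 0.
function depthOf(compute) {
    var max = -1;
    compute.deps.forEach(function(dep) {
        var d = dep.depth || 0;
        if (d > max) { max = d; }
    });
    return max + 1;
}

// At the end of a batch, evaluate shallower computes before deeper ones,
// so the dependency graph only has to be walked once.
function flushBatch(dirtyComputes) {
    dirtyComputes
        .sort(function(a, b) { return a.depth - b.depth; })
        .forEach(function(c) { c.recalculate(); });  // hypothetical hook
}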

@daffl

Contributor

daffl commented Nov 30, 2015

Closed via #2094

@daffl daffl closed this Nov 30, 2015
