Lazy subModelSeq (and subModel)? #143

Closed
cmeeren opened this issue Oct 4, 2019 · 56 comments

@cmeeren
Member

cmeeren commented Oct 4, 2019

Unless I'm mistaken, currently subModelSeq will loop through and update the sub-bindings for every item on every iteration of the update loop. This may have performance implications for large sequences. If the input sequence has not changed, the bindings should ideally be skipped altogether. This calls for a subModelSeqLazy binding.

The same might be said for subModel, though there's only ever one model there, so it's not as critical. But given that the model might have further sub-models (or just a lot of bindings), it should be possible to skip subModel, too, if the input hasn't changed.

Nobody has requested this, and I don't have any immediate plans to implement it, but I'm leaving this issue here in case anyone else wants to look at it, or just as a future reminder for myself.

@TysonMN
Member

TysonMN commented Oct 18, 2019

Yep! I want this now.

Unless I'm mistaken, currently subModelSeq will loop through and update the sub-bindings for every item on every iteration of the update loop. This may have performance implications for large sequences.

Yep, you are correct. We noticed the performance of our application decrease significantly after I added the SubModelSeq binding from #131 (comment), even though it only contained around 200 sub-models. The CPU analyzer in Visual Studio showed that the binding functions were taking a significant percentage of the time. Upgrading to a version with the SubModelSeq optimizations from #149 regained about half of the lost performance, while removing all sub-models or removing the entire SubModelSeq binding regained all of it.

Just as Binding.oneWaySeq is the special case of Binding.oneWaySeqLazy with equals = refEq and map = id (as of #157), I think we should also use refEq and id as the defaults in these new lazy bindings for SubModel[Seq].

Quoting #157 (comment)

I'm not sure whether this should bump the patch (implementation detail), minor (new feature), or major (breaking change) version. I'm leaning towards the latter, but am open to other opinions.

Again, I am indifferent to which version bump you go with. However, I recommend that we implement these lazy bindings with refEq and id as the defaults and release them in the same version bump, since it is the same kind of change. That way, the version bump only has to happen once.

@cmeeren
Member Author

cmeeren commented Oct 18, 2019

I agree with everything. Thanks! 👍

I suggest we hold off on merging #158 until this is ready, too. Alternatively, if it makes the job easier for you, we can create a dedicated branch in this repo for these changes, merge toward that instead, and merge it into master when it's ready.

@TysonMN
Member

TysonMN commented Oct 21, 2019

I suggest we hold off on merging #158 until this is ready, too. Alternatively, if it makes the job easier for you, we can create a dedicated branch in this repo for these changes, merge toward that instead, and merge it into master when it's ready.

Yes, I like the idea of having a dedicated branch. I used my new write access to create a branch for that purpose.

@cmeeren
Member Author

cmeeren commented Oct 24, 2019

I think I have come up with a good way to implement this.

Originally I thought that we would need new subModelLazy, subModelOptLazy, subModelWinLazy, and subModelSeqLazy, all with map and equals parameters, like oneWayLazy. For oneWay, it makes sense to be able to memoize potentially expensive computations, but, thinking about it, I can't really come up with any good use case for why one would need this for the sub-model bindings. Hence I think that map isn't really needed. (I also think the sub-model overloads are complicated enough type-wise as it is, and map would add yet another type/transformation into the mix. The most complex overloads actually already have a similar transformation in toBindingModel.)

Therefore, in my mind, the simplest solution is to extend all the existing sub-model bindings with an additional, optional equals parameter (added as the final parameter for signature-wise backwards compatibility):

  • For subModel and subModelOpt it has signature 'subModel -> 'subModel -> bool and defaults to refEq. Internally it's wrapped in a function that also handles the option wrapper (see the sketch after this list).
  • Same for subModelWin, but it's wrapped in a function that handles the WindowState wrapper
  • For subModelSeq it has signature #seq<'subModel> -> #seq<'subModel> -> bool and defaults to refEq. (I'm not sure if the type can be annotated with #seq<'subModel> since this is actually short-hand for something like 'a with the type constraint when 'a :> seq<'subModel>, so we may need to use the explicit notation, which I hope we can avoid since it's harder to understand in Intellisense. We'll see what works.)
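
A minimal sketch of the kind of internal wrapping described in the first bullet (illustrative only, not the library's actual internals; the WindowState case would be analogous):

// Lift a user-supplied 'subModel equality to one over 'subModel option so the
// same optional equals parameter can serve subModelOpt.
let optionEquals (equals: 'a -> 'a -> bool) (a: 'a option) (b: 'a option) : bool =
  match a, b with
  | Some x, Some y -> equals x y
  | None, None -> true
  | _ -> false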

What do you think?

@TysonMN
Member

TysonMN commented Oct 24, 2019

I agree with the subModel and subModelWin cases.

For subModelSeq, I would like to see equals and map parameters.

I had forgotten about the present issue when I created #157. Even if I had remembered this issue, I may still have created that one since oneWaySeq is simpler than subModelSeq.

My application at work has a subModelSeq binding in which the getSubModels parameter takes the form

fun m -> m.Foo |> Seq.filter predicate

Without a map parameter, the most efficient implementation of equals would be a linear-time computation that calls refEq on corresponding pairs of elements.
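
For illustration, such a linear-time equals might look like this (a sketch; obj.ReferenceEquals plays the role of Elmish.WPF's refEq helper):

// Element-wise reference comparison of two sequences. Note that Seq.length is
// itself O(n) for a filtered sequence, and both sequences are re-enumerated on
// every update.
let seqRefEq (xs: seq<'a>) (ys: seq<'a>) : bool =
  Seq.length xs = Seq.length ys
  && Seq.forall2 (fun a b -> obj.ReferenceEquals(a, b)) xs ys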

To be fair, adding an optional equals parameter for subModel will be a huge improvement and mostly mitigate the lack of map for subModelSeq. In its current form, my application's subModelSeq binding is fully inspected for changes (a quadratic-time computation, cf. #137) every time Elmish calls update. The performance goes from perfect when there are 0 sub-models to essentially unusable when there are ~300 sub-models. After we add equals for subModel and I use it in my application, the performance will once again be perfect.

Typing out the previous paragraph made me realize that a subModel binding can be a complicated way to express the map parameter. After we add equals for subModel, it would be possible to use this hack to implement lazy behavior for subModelSeq or any other type of binding.

runs away to code
...
returns from coding

As a proof of concept,

"CounterValue" |> Binding.oneWay (fun m -> m.Count)

and

<TextBlock Text="{Binding CounterValue, StringFormat='Counter value: {0}'}" Width="110" Margin="0,5,10,5" />

become

"DataContext" |> Binding.subModel((fun m -> m.Count), id, (fun () -> [
  "CounterValue" |> Binding.oneWay snd
]))

and

<StackPanel DataContext="{Binding DataContext}">
  <TextBlock Text="{Binding CounterValue, StringFormat='Counter value: {0}'}" Width="110" Margin="0,5,10,5" />
</StackPanel>

How about we make this hack a first-class binding? In this world, there would be no oneWayLazy...just oneWay. Replacing a oneWayLazy binding of the form

"name" |> Binding.oneWayLazy(get, equals, map)

would look something like

"name" |> Binding.lazy(get, equals) |> Binding.oneWay(map)

This makes me think about the functor/applicative/monadic-like behaviors of lazy, seq, and opt. Maybe we can express these "effects" better, making it easier to create the desired binding by allowing bindings to be composed: binding combinators.

Ah! I want to think more about this now, but I have to go :'(

@cmeeren
Member Author

cmeeren commented Oct 25, 2019

[...] How about we make this hack a first-class binding? [...]

Fantastic! I tried a long time ago to come up with a single lazy abstraction, but couldn't do it. If what you suggest will work, and no XAML changes are needed (i.e., no binding to a separate DataContext on a parent element), then I'm thrilled by this and it should definitely be in Elmish.WPF.

If the binding API can be designed to be more combinatorial in this way, I think it will end up more ergonomic.

Experiment away!

@TysonMN
Member

TysonMN commented Oct 25, 2019

How about we make this hack a first-class binding?

I tried a long time ago to come up with a single lazy abstraction, but couldn't do it.

I think #151 is blocking this work.

The "conceptual call stack" for my hack above using subModel looks like

  • ViewModel<,>.UpdateModel for subModel
  • updateValue for subModel
  • branching on VmBinding<,>
  • ViewModel<,>.UpdateModel for oneWay
  • updateValue for oneWay
  • branching on VmBinding<,>

...and no XAML changes are needed (i.e., no binding to a separate DataContext on a parent element)...

The reason my hack needs a separate DataContext binding on a parent element is that the conceptual call stack includes two calls to ViewModel<,>.UpdateModel; there are two different view models involved. After #151 is complete, there will be a new record type called (say) VmBindingStrategy<...> with a field UpdateValue, and the conceptual call stack would become something like

  • ViewModel<,>.UpdateModel for Lazy<subModel>
  • VmBindingStrategy<...>.UpdateValue for Lazy<subModel>
  • VmBindingStrategy<...>.UpdateValue for subModel

@TysonMN
Member

TysonMN commented Nov 2, 2019

I think #151 is blocking this work.

I don't think that any more. I think I can get the needed recursion by calling a function that takes the VmBinding<,>. For example, in the case of UpdateModel, the recursion can be added just to updateValue.

I expect to make progress on this later today.

@TysonMN
Member

TysonMN commented Nov 2, 2019

I was thinking that the lazy binding would be created in part by calling a method Binding.lazy, but that somewhat conflicts with lazy being a keyword in F#.

Would anyone like to suggest a different or better name?

Maybe optimized?

@cmeeren
Member Author

cmeeren commented Nov 2, 2019

IMHO optimized is too vague.

  • asLazy
  • lzy (preferably not; won't show up when typing la)
  • lazy' (preferably not)
  • ``lazy`` (definitely not)

Further suggestions welcome.

@TysonMN
Member

TysonMN commented Nov 4, 2019

I also like asLazy best among your suggestions. However, after writing #178 (comment), I now better understand why I initially suggested optimized.

We have two types of "lazy":

  1. the short-circuit evaluation of bindings like OneWayLazy and
  2. the call-by-name evaluation of bindings like SubModelSelectedItem.

I think the behavior of the second one is more like (but not exactly the same as) the .NET/C# type Lazy<> and the F# type/keyword lazy. (The behavior of those types is exactly the same as the call-by-need evaluation strategy.)
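
To illustrate the distinction between call-by-name and call-by-need, here is a small, self-contained example (nothing Elmish.WPF-specific):

let expensive () =
  printfn "computing..."
  42

// Call-by-name: the computation runs on every request.
let byName () = expensive ()
byName () |> ignore   // prints "computing..."
byName () |> ignore   // prints "computing..." again

// Call-by-need (F#'s lazy / .NET's Lazy<_>): runs at most once; the result is cached.
let byNeed = lazy (expensive ())
byNeed.Value |> ignore   // prints "computing..."
byNeed.Value |> ignore   // cached; nothing printed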

I feel like we have ample room for naming improvements here. I will keep thinking about this.

@cmeeren
Member Author

cmeeren commented Nov 4, 2019

I feel like we have ample room for naming improvements here. I will keep thinking about this.

Do that, though the term lazy is well-known in the MVU world, so I'd prefer a variant of that. (I'm open to other suggestions, though.)

@TysonMN
Member

TysonMN commented Nov 4, 2019

Oh, I just realized something that I had wrong in my head.

OneWayLazy is "lazy" in two ways.

type internal OneWayLazyBinding<'model, 'a, 'b> = {
  Get: 'model -> 'a
  Equals: 'a -> 'a -> bool
  Map: 'a -> 'b
  CurrentVal: Lazy<'b> ref
}

  1. It is lazy via its use of Equals (and Map) in updateValue:

     | OneWayLazy b ->
         if b.Equals (b.Get newModel) (b.Get currentModel) then false
         else
           b.CurrentVal := lazy (newModel |> b.Get |> b.Map)
           true

  2. It is also lazy via its use of Lazy<>, which is created in initializeBinding...

     | OneWayLazyData d ->
         let get = measure name "get" d.Get
         let map = measure name "map" d.Map
         Some <| OneWayLazy {
           Get = get
           Map = map
           Equals = measure2 name "equals" d.Equals
           CurrentVal = ref <| lazy (initialModel |> get |> map) }

     ...updated in updateValue...

     | OneWayLazy b ->
         if b.Equals (b.Get newModel) (b.Get currentModel) then false
         else
           b.CurrentVal := lazy (newModel |> b.Get |> b.Map)
           true

     ...and used in tryGetMember:

     | OneWayLazy { CurrentVal = value } ->
         (!value).Value

I also think I am wrong as to why "Lazy" appears in the name of OneWayLazy. Comparing them side by side...

type internal OneWayBinding<'model, 'a> = {
  Get: 'model -> 'a
}

type internal OneWayLazyBinding<'model, 'a, 'b> = {
  Get: 'model -> 'a
  Equals: 'a -> 'a -> bool
  Map: 'a -> 'b
  CurrentVal: Lazy<'b> ref
}

...I thought that OneWayLazy had "Lazy" in its name because of the addition of Equals and Map. However, I now have the feeling that this is because of the use of Lazy<>.

@cmeeren, can you clarify this for me? Is it for only one reason or the other that OneWayLazy is called "Lazy"?

@cmeeren
Member Author

cmeeren commented Nov 5, 2019

It's called "lazy" because of the corresponding optimization in Elm (and in Elmish). Note that there are of course some differences due to Elmish.WPF using static views, but the primary goal and the end result are similar: Skip doing UI work if the inputs haven't changed.

@TysonMN
Member

TysonMN commented Nov 5, 2019

Thanks for those links. They are very helpful :)

...the primary goal and the end result are similar: Skip doing UI work if the inputs haven't changed.

But what is UI work? Here are three possibilities phrased in terms of Elm.

  1. Elm computation involving the DOM
  2. Elm computation involving the virtual DOM
  3. User computation used to create the virtual DOM

I associate the phrase "UI work" with the first possibility. Elm avoids DOM computation by doing more virtual DOM computation. Elmish.WPF also strives for the related goal of minimizing change notifications. In the case of SubModelSelectedItem, it used to be that updateValue always caused a change notification (because it always returned true). PR #177 fixed that. I view this as a more accurate diffing algorithm.

The second possibility is mostly about the virtual DOM diffing algorithm. Being able to compute the diff more efficiently is the purpose of Elm's Html.Lazy. For OneWayLazy, this corresponds to the Equals and Map functions. This is what I originally thought. I view this as a more efficient diffing algorithm.

I don't know if Elm has any mechanism for making the third possibility more efficient. In Elmish.WPF, for the case of OneWayLazy, the use of Lazy<> is to help avoid unnecessary execution of the user-provided functions (Get and Map). However, I still don't know under what circumstances this situation would arise.

More specifically:
Is it possible for WPF to be given a binding but not read the bound value?

This is related to #176 (comment)

...if the UI doesn't need the selected item for some reason, it's not necessary to calculate it.

...which was also quoted in #178 (comment).

I view the third possibility as lacking an analog in Elm or Elmish-react, and maybe not even necessary in Elmish.WPF. If WPF always requests a bound value, then removing the use of Lazy<> would yield a slight improvement in performance and memory use. It would also simplify the code within Elmish.WPF (only internal code...there would be no change to the public API), but the current use of Lazy<> is not a hindrance to the internal code either.

@TysonMN
Member

TysonMN commented Nov 5, 2019

Quoting Elm's HTML.lazy:

Note: When are two values “the same” though? To optimize for performance, we use JavaScript’s === operator behind the scenes:

  • Structural equality is used for Int, Float, String, Char, and Bool.
  • Reference equality is used for records, lists, custom types, dictionaries, etc.

Structural equality means that 4 is the same as 4 no matter how you produced those values.
Reference equality means the actual pointer in memory has to be the same. Using reference equality is always cheap O(1), even when the data structure has thousands or millions of entries. So this is mostly about making sure that using lazy will never slow your code down a bunch by accident. All the checks are super cheap!
...
...in the end it only matters how much you successfully use lazy.

Then why not always do such reference equality checks? Why make the user put lazy everywhere in their code? Is it possible to use lazy in an unsuccessful way?

This is the change discussed in #157 for OneWaySeq[Lazy] (to have equals = refEq and map = id by default) and merged into an internal branch in PR #162.

I think it is a good idea to use reference equality checks by default. They cost almost nothing and could save loads of computation.

To me, successfully using lazy means overriding the default behavior with some other kind of efficient equality check when reference equality would always return false (or, God forbid, reference equality would always return true because you are using mutable types). This is somewhat like the difference between lazyView and lazyViewWith in Elmish-react.
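
For example, if sub-models are rebuilt on every update (so refEq would always return false), a cheap field-based check can restore the short-circuit. A sketch with hypothetical types, just to illustrate:

type Item = { Id: int; Revision: int; Payload: string list }

// O(1) equality that ignores the potentially large Payload; bump Revision
// whenever the item actually changes.
let itemEquals (a: Item) (b: Item) : bool =
  a.Id = b.Id && a.Revision = b.Revision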

@cmeeren
Member Author

cmeeren commented Nov 5, 2019

  1. User computation used to create the virtual DOM

[...]

I don't know if Elm has any mechanism for making the third possibility more efficient.

I'm not sure exactly what you mean by point 3, but if I understand you correctly, that would just be some variant of plain old memoization implemented and controlled by the user.

In Elmish.WPF, for the case of OneWayLazy, the use of Lazy<> is to help avoid unnecessary execution of the user-provided functions (Get and Map). However, I still don't know under what circumstances this situation would arise.

More specifically:
Is it possible for WPF to be given a binding but not read the bound value?

You seem to assume that each binding is used exactly once. That's not the case. You can have several XAML bindings to a single VM property (i.e., Elmish.WPF binding). Using lazy ensures the value is only computed once.

@cmeeren
Member Author

cmeeren commented Nov 5, 2019

Quoting Elm's HTML.lazy:

[snip rest of comment]

I'm unsure if you want an answer (and if so, what the question is), or if you're simply writing down your reflections. Let me know if you have concrete suggestions for a change to a binding.

@TysonMN
Member

TysonMN commented Nov 5, 2019

  1. User computation used to create the virtual DOM

I'm not sure exactly what you mean by point 3, but if I understand you correctly, that would just be some variant of plain old memoization implemented and controlled by the user.

I agree that point three by itself is rather unclear. This is partially because I tried to phrase things in terms of Elm, but I don't know what is possible in Elm for this case as it exists in my mind.

In terms of Elmish.WPF, I am specifically referring to the use of Lazy<> in the bindings. I don't think it has anything to do with the user and what is under their control. For example, if the user wants to (must?) use a SubModelSelectedItem binding, then they have no choice but to also use Lazy<> (now that PR #179 is complete). They can't directly see the use of Lazy<>, but their program is using it.

You seem to assume that each binding is used exactly once. That's not the case. You can have several XAML bindings to a single VM property (i.e., Elmish.WPF binding). Using lazy ensures the value is only computed once.

Sure, but computing things more than once is not the issue. Both...

let a = f ()
printf "%A" a
printf "%A" a

...and...

let a = lazy (f ())
printf "%A" a.Value
printf "%A" a.Value

...execute f exactly once. The only reason I can see for Elmish.WPF using Lazy<> is that Lazy<>.Value might never be called by WPF for some binding over the course of its lifetime. But I can't think of any reason why this would happen.

Of course WPF won't request the bound value for a binding it doesn't have, so any binding defined in Elmish.WPF that doesn't appear in any XAML sort of fits this description, but this is a pathological case. I think this case is better described as a bug. Either the user should use this binding in some XAML or the binding should be deleted.

Another consequence of using Lazy<> is that the execution of the memoized computation happens later in time than it would if Lazy<> were not being used. However, this difference seems pointless to me. In particular, both cases execute the computation on the UI thread.

Oh, this gives me an idea though. It used to be that the Elmish dispatch loop (and especially update) was (or, at least, could be) executed on a background thread. Then Elmish.WPF users noticed the race condition when typing in a text box. The fix was to put the whole dispatch loop on the UI thread. Certainly that was sufficient. It probably wasn't completely necessary though. Some parts of the dispatch loop (and especially parts of update) can probably execute on a background thread. Probably not worth it though. I will just keep this idea in the back of my mind (and here!).

I'm unsure if you want an answer (and if so, what the question is), or if you're simply writing down your reflections.

It is an honest question, but it is also fine if it is taken as being rhetorical.

Let me know if you have concrete suggestions for a change to a binding.

Will do.

@TysonMN
Member

TysonMN commented Nov 6, 2019

Actually, don't bother replying to my previous comment. I (think I) have an idea that will make it moot. I will create a new issue soon.

@cmeeren
Member Author

cmeeren commented Nov 10, 2019

I have thought some more about this. It seems we are doing three optimizations:

  • Memoization: During update, avoid expensive work to compute the value if the inputs haven't changed
  • Caching: During TryGetMember, avoid re-computing an already computed value
  • Laziness: During update, don't eagerly compute the value; only compute it when fetched

I think memoization and caching are the most important ones, since laziness is only useful if the UI doesn't fetch the value, which I think should happen very rarely, if at all (but I may be wrong).

In any case, instead of calling it a variant of the reserved keyword lazy, we can call it memoized or cached (I think memoized is the better of the two, though cached is shorter).

To be honest though, I would still prefer lazy since it's a known Elm(ish) concept. I may actually like lazy' best of all suggestions so far.

Elmish.React has something similar and called it lazyView. We might call it lazyBinding. Though since the usage would likely be Binding.lazyBinding, I don't particularly like that.

@TysonMN
Member

TysonMN commented Nov 10, 2019

I (think I) have an idea that will make it moot. I will create a new issue soon.

For clarity, that issue was #181, and PR #183 resolved it.

@ScottHutchinson
Contributor

I'm using Bindings.SubModelSeq with code based on the SubModelSeq sample. Our tree has over 6,000 nodes total (not top level) in the worst case. I'll get you a minimal working example as soon as I can.

@TysonMN
Member

TysonMN commented Jun 11, 2020

See below the slow functions called by the binding code (based on the subModelSeq sample). The preorderFlatten function might be the one taking most of the time.

Oh, yes. Another point from your original comment that I didn't fully appreciate. I agree. This implementation is quadratic. To be clear, the main inefficiency is with the sample code, which I presume you copied into your actual project. I don't think this is a good example to motivate the present issue.

I created a MWE in this branch. It contains 1000 top-level nodes and takes about four seconds to increment a counter.

I will see if I can find a quick fix.

@TysonMN
Member

TysonMN commented Jun 11, 2020

Hum...I pushed a commit that drops the time from four seconds to one when there are 1000 counters. But it still takes longer than I am willing to wait for 6000. I will continue to investigate.

@TysonMN
Member

TysonMN commented Jun 11, 2020

I pushed a commit where I did a minimal amount of work implementing lazy behavior for SubModelSeq to test the performance of the modified SubModelSeq sample. It now updates instantly :)

@ScottHutchinson, can you try this change with your private project and let us know what the performance is like now?

@TysonMN
Member

TysonMN commented Jun 11, 2020

FYI: I just tested the lazy optimization in Elmish.WPF (the one I implemented second) without the caching optimization in the SubModelSeq sample (the one I implemented first). The update time was still instant, which means users don't have to implement that caching behavior in their application.

@TysonMN
Member

TysonMN commented Jun 11, 2020

I created a MWE in this branch.

The changes are in this branch, which is in my fork of Elmish.WPF. Sorry for the confusion. You can add my repo as another remote.

@ScottHutchinson
Contributor

ScottHutchinson commented Jun 14, 2020

I'm looking at your changes to the subModelSeq sample. Is the new createParentsByChildId function needed for the performance improvement? I don't understand it yet, so I don't want to port it to my app unless it's useful. EDIT: I'm thinking that all I need is the new itemEquals parameter for the subModelSeq binding. I'm going to try to test that in my app today. Thanks

@TysonMN
Member

TysonMN commented Jun 14, 2020

Sorry for the confusion (again!).

I implemented two optimizations, which are independent of each other. I have now pushed them to separate branches to help emphasize that.

  1. The first one is in user code and is the one to which you were referring. You can test just that change for the SubModelSeq sample in the branch testing/large_TreeView_slow_optimize_user_code.
  2. The second one is in Elmish.WPF and sloppily implements the lazy behavior described in this GitHub issue. You can test just that change for the SubModelSeq sample in the branch testing/large_TreeView_slow_optimize_Elmish.WPF, BUT you can ALSO test this optimization on your private application without any changes to your application using the same branch.

So in an attempt to be as specific as possible, can you test your private application on the branch
testing/large_TreeView_slow_optimize_Elmish.WPF?

@ScottHutchinson
Contributor

ScottHutchinson commented Jun 14, 2020

No worries. It's just me being slow, and my git fu is not so good.

OK. I think optimization 1 is changing the childrenCountersOf function to not call the preorderFlatten function, so I will try that first in my app. EDIT: problem is my model does not have the NodesById field. So maybe I'm better off testing optimization 2 first, and if that fixes it, then I don't need to bother changing my model to test optimization 1.

Then I will figure out how to point my app to your branch to test optimization 2, which I think requires the new itemEquals parameter for the subModelSeq binding.

@TysonMN
Member

TysonMN commented Jun 14, 2020

OK. I think optimization 1 is changing the childrenCountersOf function to not call the preorderFlatten function, so I will try that first in my app. EDIT: problem is my model does not have the NodesById field.

I recommended trying optimization 2 first because my testing showed it gave a significantly bigger improvement.

...you can ALSO test this optimization on your private application without any changes to your application using the same branch.

I will figure out how to point my app to your branch to test optimization 2, which I think requires the new itemEquals parameter for the subModelSeq binding.

Oh, yes. You are correct. You need to supply that argument. I suggest using refEq as I did in the SubModelSeq sample (unless you are representing change via mutation, in which case things are much more complicated).
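
The caveat about mutation matters because refEq only detects changes that produce a new reference. A small illustration (hypothetical record type, not from the sample):

type Node = { Name: string; IsChecked: bool }

let original = { Name = "A"; IsChecked = false }

// Immutable update: a new value with a new reference, so refEq reports a change.
let updated = { original with IsChecked = true }
let changed = not (obj.ReferenceEquals(original, updated))   // true -> the binding updates

// With a mutable model (e.g. setting a property in place), the reference stays
// the same even though the data changed, so a refEq-based itemEquals would
// wrongly report "unchanged" and skip the update.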

@ScottHutchinson
Contributor

I referenced the Elmish.WPF package built from your branch in my app and added the two (fun (_, a) (_, b) -> refEq a b) arguments, just like in the sample. But it did not work. The only checkbox that works reliably (i.e., the GUI visibly changes the checked state when clicked) is the one on the root node. All the other checkboxes in the other nodes do not react.

@TysonMN
Member

TysonMN commented Jun 14, 2020

Huh :(

I suppose I could have introduced a bug to cause that, but I don't know what it would be off the top of my head.

I could raise the priority of this GitHub issue and then implement it (bug-free!).

In the meantime, we can make progress on your specific issue when you are able to create a MWE for us to investigate.

@ScottHutchinson
Contributor

My "MWE" is at https://github.com/ScottHutchinson/Elmish.WPF-1/tree/ShowDialog_subModelSeq

This branch includes a new sample project ShowDialog.subModelSeq, which will show a tree with over 6,000 nodes and lots of checkboxes that work very slowly.

I started to add code to measure the elapsed time for a checkbox binding to complete, but I couldn't figure out how to make it work.

By the way, the window also loads slowly. But I can live with that, since this number of nodes is the worst case, while most of the cases have fewer than 1,000 nodes and load much faster. I'm much more concerned about the responsiveness of the checkboxes.

I thought I did the rebase correctly, but after I pushed the new branch, GitHub says "This branch is 1 commit ahead, 91 commits behind elmish:master." So I probably did something wrong.

@TysonMN
Member

TysonMN commented Jun 15, 2020

I locally rebased your branch on master without a problem. When I further rebased it on my branch testing/large_TreeView_slow_optimize_Elmish.WPF and passed in values for itemEquals based on refEq, I got the same behavior that you described...the UI is very responsive, but checkboxes are not changing their state when checked.

I will continue to investigate.

@TysonMN
Member

TysonMN commented Jun 15, 2020

I figured out the problem.

First, you definitely need the lazy behavior for which this GitHub issue exists. A special case of a tree is a list. If all your nodes were in a list (i.e. if every "real" node in your tree had the "dummy node" as its parent), then you would avoid the buggy behavior related to trees that I discuss below, but you would still need the optimization to significantly decrease the runtime.

Second, my sloppy commit paired with a value for itemEquals like fun (_, a) (_, b) -> refEq a b seems to correctly implement this lazy behavior for explicit trees but not for implicit trees. By "implicit tree", I mean a tree in which each node does not contain direct references to its children. When viewing a list as a tree, each node has no children, so every list is also an explicit tree.

Passing in fun (_, a) (_, b) -> refEq a b for itemEquals ignores changes in children since the tree is implicit. So for your MWE, you can instantly change the checkboxes of the top-level node(s), but to change a checkbox of any other node, you also have to make some change in every ancestor of that node (i.e. in its parent, and its grandparent, etc.).

The SubModelSeq sample stores an explicit tree in its model but then converts this explicit tree into an implicit tree in the binding, and then gives the resulting implicit tree to Elmish.WPF. One of my first contributions to Elmish.WPF was changing that sample's model to store an explicit tree. Now I am thinking that another good change is to also use the explicit tree in Elmish.WPF.
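
To make the explicit/implicit distinction concrete, here are illustrative types (not the sample's actual code):

// Explicit tree: each node holds its children directly, so an (immutable)
// change to a child produces a new parent value too, and refEq on the parent
// detects the change.
type ExplicitNode = { Data: int; Children: ExplicitNode list }

// Implicit tree: nodes hold no child references; the parent/child relationship
// lives in separate lookups. Changing a child leaves the parent's reference
// untouched, so a refEq-based itemEquals on the parent reports "no change".
type NodeId = System.Guid
type ImplicitNode = { Id: NodeId; Data: int }
type ImplicitTree =
  { NodesById: Map<NodeId, ImplicitNode>
    ChildIdsByParentId: Map<NodeId, NodeId list> }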

I will create a new issue for that part.

@TysonMN
Member

TysonMN commented Jun 15, 2020

I created an issue about how to handle the case of trees. It is issue #236.

@TysonMN
Member

TysonMN commented Dec 21, 2020

I thought more about the naming. For now, I am going to use lazyUpdate. This helps me remember that it doesn't save any computation when getting the value (which could be called lazyGet) but instead adds a short-circuit predicate to the update logic.

@TysonMN
Member

TysonMN commented Dec 21, 2020

This feature now exists in the branch composable/Lazy/implementation.

Here is an example of it working using the branch composable/Lazy/example.

In this example, I implemented the map function to delay for a second. Changing the counter value always takes a second, but changing the step size executes instantly because lazy behavior avoids the delay in map.
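
The artificially slow map being described might look roughly like this (a sketch; the actual code is in the example branch):

// Simulate an expensive derived computation so the effect of the lazy
// short-circuit is easy to see: without it, every update pays this cost.
let slowMap (count: int) : string =
  System.Threading.Thread.Sleep 1000
  string count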

(animated GIF: with laziness, changing the step size is instant)

If I remove the laziness, then updating the step size also incurs the delay.

(animated GIF: without laziness, changing the step size also takes a second)

As stated in the OP, a great place to add this effect is "on top of" a SubModel or SubModelSeq binding.

@TysonMN TysonMN closed this as completed Dec 21, 2020
@TysonMN
Member

TysonMN commented Dec 21, 2020

This feature now exists in the branch composable/Lazy/implementation.

One change I want to make is to address the comment I left here (with typos fixed and expanded a bit below).

Instead of iterating over the bindings three times to update properties, commands, and validation errors, consider iterating over the bindings once and updating those three things for each binding. Then the LazyUpdate predicate can be executed once for each binding for each call to UpdateModel.
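
A rough sketch of the one-pass restructuring being described (all names here are hypothetical, not the actual Elmish.WPF internals):

// One pass over the bindings instead of three, so a per-binding short-circuit
// predicate is evaluated exactly once per UpdateModel call.
type FakeBinding<'model> =
  { ShouldUpdate: 'model -> 'model -> bool   // true when the binding's inputs changed
    UpdateProperty: 'model -> unit
    UpdateCommand: 'model -> unit
    UpdateValidationError: 'model -> unit }

let updateModel (bindings: FakeBinding<'model> list) currentModel newModel =
  bindings
  |> List.iter (fun b ->
       if b.ShouldUpdate currentModel newModel then
         b.UpdateProperty newModel
         b.UpdateCommand newModel
         b.UpdateValidationError newModel)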

Another advantage of this change (as I plan on making it) is that the resulting code will do a better (but still not perfect) job of separating pure from impure code.

I have to be careful to preserve behavior. There is a lot of impurity there. I know of three sources (not "directly" in UpdateModel).

  1. Implicit dependence on CurrentModel.
  2. ObservableCollection changes in the OneWaySeq and SubModelSeq bindings.
  3. The calls to setError and removeError in updateValidationError.

@cmeeren
Member Author

cmeeren commented Feb 16, 2021

There was a discussion a mile up this issue about a name for lazy since it's a keyword. I just thought of laz. Works fine with intellisense (unlike lzy), and with a bit of goodwill it can be pronounced like "lazy" (think z = "zee").

No strong opinions, just felt like writing it down before I forgot.

@TysonMN
Member

TysonMN commented Feb 16, 2021

with a bit of goodwill it can be pronounced like "lazy" (think z = "zee").

Maybe. The actual pronunciation of laz would be like this.

What about lazyy?

@cmeeren
Member Author

cmeeren commented Feb 16, 2021

Maybe. The actual pronunciation of laz would be like this.

Well, I wasn't thinking it was "proper" pronunciation; just that if you squint a little, laz can be thought of as la-Z, i.e. la-zee (yes, yes, I know half the world would think la-zed).

lazyy is possible, but I'd say it's little better than lazy'.

@TysonMN
Member

TysonMN commented Feb 16, 2021

I will go with lazy' as the "working" name.
