[Exploration] Use a run loop observer to trigger render passes asynchronously #907
Conversation
We looked at doing this on Android a while back. One issue with asynchronous render passes is that you can basically end up in a state where you're running the wrong action.

If you have a workflow that renders the same child in different states, with different actions specified to handle outputs in those states, like this:

```kotlin
when (state) {
  is Foo -> renderChild(Child) { FooAction }
  is Bar -> renderChild(Child) { BarAction }
}
```

If `Child` emits outputs twice synchronously (e.g. `Child` subscribes to the same signal twice) and we don't render for each update, then both outputs will be handled by `FooAction`. This would be weird because if I'm debugging this, I'm going to see `FooAction` handle the first output, and my state update, but then when `Child` outputs the second time, even though I'm in the `Bar` state I'm going to see `FooAction` run again.
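The hazard described above can be sketched in a few lines. This is a hypothetical, self-contained illustration — `Parent`, `capturedHandler`, and the log are invented names, not workflow API — showing that when the output handler is chosen at render time, two synchronous emissions with no render pass in between both run the handler captured in the old state:

```kotlin
sealed class State
object Foo : State()
object Bar : State()

class Parent {
    var state: State = Foo
    val log = mutableListOf<String>()

    // Stand-ins for FooAction/BarAction: handling an output in Foo moves us to Bar.
    private val fooAction: (Int) -> Unit = { output ->
        log += "FooAction($output)"
        state = Bar
    }
    private val barAction: (Int) -> Unit = { output ->
        log += "BarAction($output)"
    }

    // The handler captured during the last render pass, chosen from `state`.
    var capturedHandler: (Int) -> Unit = {}

    fun render() {
        capturedHandler = when (state) {
            is Foo -> fooAction
            is Bar -> barAction
        }
    }
}

fun main() {
    val parent = Parent()
    parent.render()

    // Child emits twice synchronously; no render pass in between.
    parent.capturedHandler(1) // FooAction runs and moves state to Bar
    parent.capturedHandler(2) // FooAction runs again, despite state == Bar

    println(parent.log) // [FooAction(1), FooAction(2)]
}
```

Rendering between the two emissions (or resolving the action at apply time, as the queueing proposal below a different comment suggests) would make the second output go through `BarAction` instead.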
This is a fair point, and arguably the biggest unknown of this change has to do with how we handle no-longer-applicable branches of the workflow tree (e.g. logout destroys a whole subtree — is processing actions of those nodes a bad thing?).
I wonder if we actually have this kind of usage that you bring up, Zach, in practice.
I can't say, but this definitely came up a while back on Android; someone actually ran into it and reported it as a bug.
Anyway, I think a more robust solution to this is queueing — but not necessarily the queue-per-worker model that the Kotlin implementation uses. Queue-per-worker is actually more complicated and probably less performant than necessary. A simpler implementation could involve a single queue, global to the instance of the workflow runtime, of structs that look something like:

```kotlin
data class PendingUpdate(
  val isDisposed: () -> Boolean,
  val applyUpdate: (Any) -> Any?
)
```

When a Signal/SignalProducer is run for the first time, you subscribe to it by enqueuing one of these structs:

```kotlin
lateinit var subscription: Disposable
subscription = signal.subscribe { value ->
  queue.enqueue(PendingUpdate(
    isDisposed = { subscription.isDisposed() },
    applyUpdate = {
      // Resolve the action at apply time, not at subscribe time, so the
      // handler from the *current* state is the one that runs.
      val action = actionForValue(value)
      action.apply(value)
    }
  ))
}
```

Where […]. This means all workers and signals are only subscribed once, the first time they are run/subscribed, and the node just needs to keep track of all those disposables so it can dispose them when the workflow is torn down. Rendering event handlers would do something similar, but instead of checking if a subscription is disposed they just need to check if the workflow is still active. Then at the […].

I think this implementation is actually not that much work, especially since you don't have to worry about thread safety as long as you ensure all signals/workers emit on your main thread. You get all the queueing behavior you need, and render passes still keep up with updates. I just ran this by @AquaGeek and @rjrjr in person, and it passed the sniff test for them.

Kotlin

The more I think about this the more I want to try implementing this in Kotlin, with a few tweaks:
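One way to read the drain step of this proposal can be sketched as follows. This is a hypothetical illustration, not the actual runtime: `WorkflowRuntime`, `renderCount`, and the manual `drain()` driver are invented, and `applyUpdate` is simplified to `() -> Unit`. Entries whose subscription was disposed (e.g. their subtree was torn down by something like logout) are skipped, and a render pass follows each applied update so later updates are handled by handlers from the new state:

```kotlin
data class PendingUpdate(
    val isDisposed: () -> Boolean,
    val applyUpdate: () -> Unit
)

class WorkflowRuntime {
    private val queue = ArrayDeque<PendingUpdate>()
    var renderCount = 0
        private set

    fun enqueue(update: PendingUpdate) {
        queue.addLast(update)
    }

    // Drain the single global queue: skip entries whose subscription was
    // disposed (their subtree is gone), apply the rest in order, and render
    // after each applied update so the next update sees fresh handlers.
    fun drain() {
        while (queue.isNotEmpty()) {
            val update = queue.removeFirst()
            if (update.isDisposed()) continue
            update.applyUpdate()
            renderCount++
        }
    }
}

fun main() {
    val runtime = WorkflowRuntime()
    val applied = mutableListOf<String>()
    runtime.enqueue(PendingUpdate({ false }, { applied += "first" }))
    runtime.enqueue(PendingUpdate({ true }, { applied += "stale" })) // skipped
    runtime.enqueue(PendingUpdate({ false }, { applied += "second" }))
    runtime.drain()
    println(applied)             // [first, second]
    println(runtime.renderCount) // 2
}
```

In a real runtime the drain would be triggered from the run loop rather than called by hand, but the skip-disposed-then-apply shape is the core of the idea.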
Well, I patched this against […].
Good to know — collection view resolves updates during […].
I spent some time trying to track down exactly where things get out of sync, but I haven't been able to pinpoint it yet. For now I'm closing this pull, as I don't think this is a viable solution. I'm working on building out a sample app with both KIF and XCUITests in it to try and flush out the issue, and to provide a stable base we can use to ensure we don't regress in the future.
This changes the render pass to be asynchronous instead of synchronous: renders now happen at most once per run loop pass. There are likely issues with how this is set up, but I wanted to get some eyes on it sooner rather than later.
I believe this addresses #402.
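The coalescing behavior this PR aims for — many invalidations within one run loop pass, at most one render — can be mimicked in a few lines. This is a conceptual Kotlin analogue with invented names (`RenderScheduler`, and a manual `pending` list standing in for the run loop observer), not the Swift code in this change:

```kotlin
// Coalesces repeated invalidations into at most one render per "tick".
class RenderScheduler(private val schedule: (() -> Unit) -> Unit) {
    private var scheduled = false
    var renderCount = 0
        private set

    fun setNeedsRender() {
        if (scheduled) return // a render is already queued for this tick
        scheduled = true
        schedule {
            scheduled = false
            renderCount++ // the single render pass for this tick
        }
    }
}

fun main() {
    // Stand-in for the run loop: callbacks fire when the tick ends.
    val pending = mutableListOf<() -> Unit>()
    val scheduler = RenderScheduler { callback -> pending += callback }

    // Three invalidations within one pass...
    scheduler.setNeedsRender()
    scheduler.setNeedsRender()
    scheduler.setNeedsRender()

    // ...the observer fires once at the end of the pass.
    pending.forEach { it() }
    pending.clear()
    println(scheduler.renderCount) // 1
}
```

A subsequent invalidation on a later tick schedules again, so updates are never dropped — only batched within a pass.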