Use async functions in script? #22501
Comments
So I did a bit more thinking, and while yes, we could do this, it's not obvious whether it'd get us very much. The only time we'd get an increase in throughput is when there's more than one pipeline for a given script thread: then, while one of them is waiting on an async function call, the others can get on with work. At the moment, this means similar-origin iframes, pipelines in the session history, and tabs. I'm not sure whether the increase in throughput in that case is worth the complexity. I suspect a bigger win would be not locking script/layout every time we call a layout query.
To me the benefit is soundness in the case of GC: it lets us conservatively root (instead of rooting everything everywhere) and the compiler can catch our mistakes (instead of a bunch of imperfect lints).
Something I've been saying for a while is that I hope our script thread runs futures. This makes it so much easier to implement tasks queued on task sources, since they map nicely to futures and executors. It also makes it easier for us to compose tasks together; see https://html.spec.whatwg.org/multipage/webappapis.html#internal-module-script-graph-fetching-procedure for a good example of how useful futures could be. Essentially, I think any algorithm that interacts with another thread, be it layout, net or constellation, would benefit significantly if we can encapsulate it with async Rust.
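To make the "tasks map nicely to futures and executors" idea concrete, here is a minimal, hypothetical sketch (std-only; `TaskSource`, `queue_task` and `run_one_turn` are invented names for illustration, not Servo APIs) of tasks queued on a task source expressed directly as futures and polled in FIFO order by a toy executor:

```rust
use std::cell::RefCell;
use std::collections::VecDeque;
use std::future::Future;
use std::pin::Pin;
use std::rc::Rc;
use std::task::{Context, RawWaker, RawWakerVTable, Waker};

// A no-op waker is enough here: we simply re-poll every queued task each turn.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

/// Hypothetical stand-in for one task source: a FIFO of boxed futures.
struct TaskSource {
    tasks: VecDeque<Pin<Box<dyn Future<Output = ()>>>>,
}

impl TaskSource {
    fn new() -> Self {
        TaskSource { tasks: VecDeque::new() }
    }
    fn queue_task(&mut self, task: impl Future<Output = ()> + 'static) {
        self.tasks.push_back(Box::pin(task));
    }
    /// One turn of the event loop: poll each queued task once, in FIFO
    /// order, keeping the ones that have not finished yet.
    fn run_one_turn(&mut self) {
        let waker = noop_waker();
        let mut cx = Context::from_waker(&waker);
        let mut still_pending = VecDeque::new();
        while let Some(mut task) = self.tasks.pop_front() {
            if task.as_mut().poll(&mut cx).is_pending() {
                still_pending.push_back(task);
            }
        }
        self.tasks = still_pending;
    }
}

fn demo() -> Vec<&'static str> {
    let log = Rc::new(RefCell::new(Vec::new()));
    let mut source = TaskSource::new();
    let (a, b) = (log.clone(), log.clone());
    // Two queued tasks, written directly as futures.
    source.queue_task(async move { a.borrow_mut().push("task 1") });
    source.queue_task(async move { b.borrow_mut().push("task 2") });
    source.run_one_turn();
    Rc::try_unwrap(log).unwrap().into_inner()
}

fn main() {
    println!("{:?}", demo());
}
```

The FIFO ordering is the point: a task source's run order falls out of the executor's queue discipline rather than hand-rolled scheduling.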
@Manishearth, that would be great; unfortunately the feature that lets us do that (unpinnable async functions) doesn't have language support yet, AFAIK. @KiChjang, yes, there might be benefits to moving to async anyway, even if it doesn't get us that much efficiency. I think I'll try a little experiment and see what it looks like in code.
An experimental implementation of the idea... https://github.com/asajeffrey/servo/tree/script-async-experiments |
@asajeffrey Looking at your experimental implementation, just wanted to add that the microtask aspect would bring additional complication. To prevent problems there, I think you'd have to drain the microtask queue before suspending, and while the pipeline is waiting, I think you would need to exclude it from microtask callbacks as well (although I'm not sure if in practice these callbacks can be enqueued across pipelines...)
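A simplified sketch of the "drain the microtask queue" invariant mentioned above (all names hypothetical; Servo's real microtask machinery is considerably more involved than a queue of closures):

```rust
use std::collections::VecDeque;

// Hypothetical, simplified stand-in for a microtask: a one-shot closure
// that records its effects in a shared log.
type Microtask = Box<dyn FnOnce(&mut Vec<&'static str>)>;

/// Run queued microtasks to completion before the next task is allowed
/// to start (or a suspended one is allowed to resume).
fn perform_microtask_checkpoint(
    queue: &mut VecDeque<Microtask>,
    log: &mut Vec<&'static str>,
) {
    while let Some(microtask) = queue.pop_front() {
        microtask(log);
    }
}

fn demo() -> Vec<&'static str> {
    let mut microtasks: VecDeque<Microtask> = VecDeque::new();
    let mut log = Vec::new();
    // Some JS ran and enqueued a microtask (e.g. a promise reaction).
    microtasks.push_back(Box::new(|log: &mut Vec<&'static str>| {
        log.push("microtask")
    }));
    // Before any other task runs, the checkpoint empties the queue...
    perform_microtask_checkpoint(&mut microtasks, &mut log);
    // ...so the next task observes the microtask's effects, never the reverse.
    log.push("next task");
    log
}

fn main() {
    println!("{:?}", demo());
}
```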
@gterzian Yes, I'm not sure how to use async/await and still be spec-compliant, sigh. Perhaps we could use a scheme where we don't re-enter JS if we yielded during JS execution? So native tasks could be scheduled, but they wouldn't be visible to JS until the current JS execution returns? It does mean that every call back into JS from native would be async, sigh.
In theory, everything that affects JS would happen via a task enqueued on the "HTML event-loop", so one way to look at it is that our "event-loop", which also deals with messages that aren't "HTML tasks", could keep running and that only the tasks could be held back. I do think there are places where we don't schedule tasks (and that isn't necessarily spec compliant); for example, when we "traverse the history by a delta", we just do it directly in response to a message from constellation, while the spec says to enqueue a task (see #10994).

The current task-queue is capable of throttling "non-priority" JS tasks, and if you were to limit the "suspension after yielding" to JS tasks, you could replace the "buffer" you've introduced in the experiment (which buffers native messages as well as JS tasks) with an additional mechanism inside the task-queue. The task-queue could also implement the suspension itself.

I also think that unless the underlying lock is future-aware itself and would "wake" the waker when it becomes available, you're going to have to poll the future at each turn of the event-loop, and wake up the event-loop at each failed try as well, to prevent the situation where there are no more other tasks coming in, yet the lock has become available and the awaiting task should run. It's really similar to what the task-queue is doing for throttling non-priority tasks: if you're throttling anything, you must wake up the event-loop, because otherwise there is no guarantee another message would wake up the select at the start of the next iteration. It would be different if the lock itself could "wake" the waker when the layout thread has dropped the lock, which would allow you to avoid this poll/wake cycle at each turn of the event-loop (I guess this would require sharing the waker with layout).

On another note, as discussed earlier in this thread, I do agree that any positive effect in terms of "throughput" from such a setup, while remaining spec compliant, would appear to be limited.
In the case of a unit of related similar-origin browsing contexts, they must "share one event-loop", which would seem to preclude suspending tasks for one pipeline while running the others (unless the spec would actually say to "spin the event-loop" and wait for the lock to become available). And while our current "sharing of event-loops" is broader, since we put all same-origin pipelines in the same event-loop (even if they are not "related"), an easier solution to reduce the number of pipelines blocking due to one of them waiting on a lock would be to make the sharing of event-loops more restrictive (limiting it to a "unit of related similar-origin"). Our current somewhat broad "by origin" sharing of event-loops is also something we've agreed to as a temporary "easy" solution to implement.

Perhaps a solution to the problem of script blocking on layout queries would be to actually try to change the spec, or at least add something to it, which would acknowledge the issue and allow script to query layout in an async way at the JS level (for some queries for which this would be feasible)? Is there a reason why getComputedStyle cannot make the CSSStyleDeclaration available via a callback or a promise? Any modification to the CSS could also take effect via a task enqueued and handled at a subsequent iteration of the event-loop.
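The poll-at-each-turn pattern described above, where a failed lock attempt must also wake the event-loop so the select cannot park forever, can be sketched single-threaded; std `Mutex` and `mpsc` stand in for the real script/layout lock and message router, and all names are hypothetical:

```rust
use std::sync::mpsc;
use std::sync::Mutex;

enum Msg {
    Wake,
}

/// Single-threaded replay of the poll/wake cycle: on each turn, try the
/// layout lock; on failure, send ourselves a `Wake` so the next `recv`
/// cannot park forever while the lock quietly becomes free.
fn demo() -> Vec<&'static str> {
    let layout_lock = Mutex::new(());
    let (tx, rx) = mpsc::channel();
    let mut log = Vec::new();

    // A task is suspended on the lock; kick off the first turn.
    tx.send(Msg::Wake).unwrap();
    // "Layout" holds the lock during the first turn only.
    let mut held_by_layout = Some(layout_lock.lock().unwrap());

    while let Ok(Msg::Wake) = rx.recv() {
        match layout_lock.try_lock() {
            Ok(_guard) => {
                log.push("lock acquired, resuming suspended task");
                break;
            }
            Err(_) => {
                log.push("lock busy, self-waking for another turn");
                // Layout releases the lock at some later point...
                held_by_layout.take();
                // ...but nothing else will ping our channel, so ping it ourselves.
                tx.send(Msg::Wake).unwrap();
            }
        }
    }
    log
}

fn main() {
    println!("{:?}", demo());
}
```

Without the self-sent `Wake`, the loop would park on `recv` even though the lock had already been dropped, which is exactly the stall described above.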
By the way, here is an example of something that now blocks script on layout, but actually could be refactored in a more spec-compliant way to not block on layout: #21600
The answer to your question about getComputedStyle is the huge amount of legacy code on the web.
@jdm Ok, thanks for the info. There is one last thing I thought of (I promise I'll stop writing after): one problem with "waking" script from layout when a future is ready is that this doesn't guarantee the lock will not have been re-acquired by layout by the time script wakes up, unless layout would block in some way waiting on the lock to be acquired by script! I think there is an actual fundamental problem here, which is that the waking mechanism of futures is based around the idea that an awaiting "task" that is woken up will eventually be scheduled to run to handle a result that is ready, without offering any further guarantees. So that works well if you're waking something up because a "result" is ready and will always be from then on; it works less well if you're signalling that a lock can be acquired, since that's not necessarily true anymore when the waking takes place, unless you add blocking on the other end. If that holds up, it means that you're going to have to try to acquire the lock, and if not successful wake up the event-loop, at each turn of the event-loop, and that the use of the future API to do that is not adding anything to what is essentially a try of the lock at each turn of the loop...
@jdm Actually, allowing a callback is possible, see w3c/csswg-drafts#3579.
@ExE-Boss You are correct that it is possible to define it that way. My point is that we still must continue to support the existing synchronous behaviour since so much legacy code depends on it; proposals for new asynchronous behaviour do not solve that.
Re what I said earlier about locks being hard to use in the context of futures, check this out; it looks like a potential solution: https://docs.rs/futures/0.1.25/futures/sync/struct.BiLock.html
@asajeffrey It seems there is some discussion going on over at whatwg/html#3727 which seems to address a very similar issue.
Can we make use of async functions in Servo script?
Two possible motivations:
@Manishearth suggested in IRC https://mozilla.logbot.info/servo/20181214#c15724990 that we could mark any function that GCs as async, which would allow us to statically track which functions can GC, and require JS objects to be rooted across async calls.
We currently have script and layout using a lock, so if any pipeline in a script thread calls into layout, this blocks all pipelines in that thread.
For using async to track GC for rooting, it looks like this is listed as an unresolved question at https://github.com/rust-lang/rfcs/blob/master/text/2394-async_await.md#async-functions-which-implement-unpin, in that async functions are not required to implement `Unpin`, which is what allows references to survive `yield` and hence `await!`. IRC chat: https://mozilla.logbot.info/servo/20181219#c15741304

For supporting less blocking, I think we could do this by having each script thread track which pipelines are currently waiting on layout, and only process messages for pipelines which are not waiting. Then when a pipeline calls an async function which yields, we tell the script thread to add the current pipeline to the waiting pipelines, and then go back to processing messages until the async function returns.
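A minimal sketch of that bookkeeping (hypothetical types; real pipeline ids and script-thread messages are much richer): messages for waiting pipelines are deferred and replayed once the layout query returns.

```rust
use std::collections::{HashSet, VecDeque};

type PipelineId = u32;

struct Msg {
    pipeline: PipelineId,
    body: &'static str,
}

/// Deliver messages in order, but hold back any message addressed to a
/// pipeline that is suspended awaiting layout; held-back messages are
/// replayed once that pipeline resumes.
fn drain(
    inbox: &mut VecDeque<Msg>,
    waiting: &HashSet<PipelineId>,
    deferred: &mut VecDeque<Msg>,
    log: &mut Vec<&'static str>,
) {
    while let Some(msg) = inbox.pop_front() {
        if waiting.contains(&msg.pipeline) {
            deferred.push_back(msg); // parked on layout: don't run its tasks
        } else {
            log.push(msg.body);
        }
    }
}

fn demo() -> Vec<&'static str> {
    let mut inbox = VecDeque::from([
        Msg { pipeline: 1, body: "task for pipeline 1" },
        Msg { pipeline: 2, body: "task for pipeline 2" },
    ]);
    let mut waiting = HashSet::from([1]);
    let mut deferred = VecDeque::new();
    let mut log = Vec::new();

    // Pipeline 1 is blocked on a layout query: only pipeline 2 makes progress.
    drain(&mut inbox, &waiting, &mut deferred, &mut log);

    // The layout query returns: un-park pipeline 1 and replay its messages.
    waiting.remove(&1);
    let mut replay: VecDeque<Msg> = std::mem::take(&mut deferred);
    drain(&mut replay, &waiting, &mut deferred, &mut log);
    log
}

fn main() {
    println!("{:?}", demo());
}
```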
Something like this might work, but it's possibly not spec-compatible, since if there's a pipeline which runs something like setting `foo.class = "this"`, yielding to layout, and then setting it back, then other same-origin pipelines should never see `foo.class == "this"`, but they can if we allow other pipelines to execute while one pipeline waits on layout.

We could make it spec-compatible by dealing with groups of pipelines, and putting two pipelines in the same group if they access each other's DOM objects (e.g. through `parent`). This is quite a bit of extra complexity though, and it's not obvious that the extra throughput is worth it.
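For illustration, one cheap way such grouping could be tracked is a union-find over pipelines, merging two groups whenever one pipeline touches another's DOM. A hypothetical sketch, not Servo code:

```rust
/// Tiny union-find over pipeline indices: every pipeline starts in its
/// own group, and two groups merge whenever one pipeline touches the
/// other's DOM objects (e.g. via `parent`).
struct Groups {
    parent: Vec<usize>,
}

impl Groups {
    fn new(pipelines: usize) -> Self {
        Groups { parent: (0..pipelines).collect() }
    }
    fn find(&mut self, x: usize) -> usize {
        if self.parent[x] != x {
            let root = self.find(self.parent[x]);
            self.parent[x] = root; // path compression
        }
        self.parent[x]
    }
    /// Record that pipeline `a` accessed pipeline `b`'s DOM (or vice versa).
    fn touched(&mut self, a: usize, b: usize) {
        let (ra, rb) = (self.find(a), self.find(b));
        if ra != rb {
            self.parent[ra] = rb;
        }
    }
    /// Pipelines in the same group must suspend and resume together.
    fn same_group(&mut self, a: usize, b: usize) -> bool {
        self.find(a) == self.find(b)
    }
}

fn main() {
    let mut groups = Groups::new(3);
    // Pipeline 0 read pipeline 1's DOM via `parent`; pipeline 2 stayed isolated.
    groups.touched(0, 1);
    assert!(groups.same_group(0, 1));
    assert!(!groups.same_group(0, 2));
    println!("pipeline 2 can keep running while 0 and 1 wait on layout");
}
```

Under this scheme, only pipelines outside the waiting pipeline's group would be allowed to keep processing messages, which preserves the observable run-to-completion semantics the example above relies on.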