
[Track Stats API] Rephrase sentence on when to update internal slots #108

Closed
henbos opened this issue Sep 12, 2023 · 14 comments · Fixed by #127 or #130

@henbos
Contributor

henbos commented Sep 12, 2023

During the TPAC presentation of PR #106,

it was pointed out that there is existing language for the concept of only updating a variable when tasks are not executing ("stable state"?)

It's better to refer to existing concepts than to have our own language for the same thing.

@henbos
Contributor Author

henbos commented Sep 13, 2023

@padenot Do you have an example of the kind of terminology that we should adopt?

@henbos
Contributor Author

henbos commented Sep 13, 2023

For reference, RTCRtpReceiver.getContributingSources(), which is also a cached getter returning real-time information, has the following note:

As stated in the conformance section, requirements phrased as algorithms may be implemented in any manner so long as the end result is equivalent. So, an implementation does not need to literally queue a task for every frame, as long as the end result is that within a single event loop task execution, all returned RTCRtpSynchronizationSource and RTCRtpContributingSource dictionaries for a particular RTCRtpReceiver contain information from a single point in the RTP stream.

This note uses the wording "single event loop task execution". The Chromium implementation clears the cache using a microtask. I don't know if there is a fine distinction here between a microtask and a task execution cycle, but since microtasks run in between macrotasks, I assume clearing in a microtask gives the right behavior of clearing before the next task execution cycle?
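For concreteness, here is a minimal JS sketch of that microtask-clearing approach. All names here are illustrative, not taken from the spec or from the Chromium code:

```javascript
// Hypothetical sketch: a getter that caches its value for the duration of
// the current task, clearing the cache in a queued microtask.
class StatsCache {
  #cached = null;
  #readLiveValue; // stand-in for reading realtime data

  constructor(readLiveValue) {
    this.#readLiveValue = readLiveValue;
  }

  get value() {
    if (this.#cached === null) {
      this.#cached = this.#readLiveValue();
      // Clear the cache before the next task runs. Caveat discussed below
      // in this thread: this runs at the head of the microtask queue, so
      // code scheduled later in the same microtask queue sees a fresh read.
      queueMicrotask(() => { this.#cached = null; });
    }
    return this.#cached;
  }
}

// Within one synchronous run, repeated reads return the same snapshot.
let reads = 0;
const cache = new StatsCache(() => ++reads);
console.log(cache.value === cache.value); // true: cached within the task
```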

@henbos
Contributor Author

henbos commented Sep 15, 2023

I will provide a PR when language has been suggested

@henbos
Contributor Author

henbos commented Oct 28, 2023

We now have references for the language requested:

https://html.spec.whatwg.org/multipage/webappapis.html#await-a-stable-state

However we also have an issue that claims this is misleading: whatwg/html#2882

@henbos
Contributor Author

henbos commented Oct 28, 2023

I don't think this is ready for a PR until we have proposed language to use.

@henbos
Contributor Author

henbos commented Oct 30, 2023

Quoting @jan-ivar so that this comment doesn't get lost. Let's follow up on this specific issue here.

To clarify, Paul said:

  1. "please rewrite this in terms of an algorithm running in parallel and stable state"
  2. "Alternatively you can post a task from your algorithm running in parallel, that's more or less equivalent here."

My reason for mentioning whatwg/html#2882 was to suggest going with 2.

They're just different ways to write algorithms in the spec that involve synchronous steps AND async steps.

To apply a JS analogy to spec writing: 1 is like async-await, and 2 is like promise-then. 1 is out of favor because 2 is closer to how you would need to implement things, e.g. in C++.

I suppose this could either be phrased as "In parallel, whenever frames are dropped or delivered, run the following steps: queue a task to update frame counters", or it could be phrased as "queue a (micro?)task to clear an internal slot for [[IsCached]]".

I think the latter is closer to what an implementation would be doing, since members of the WGs don't like the idea of excessive post-tasking. What is your preference?
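For what it's worth, the async-await vs promise-then analogy above can be written out in actual JS. This is illustrative only; the function names are made up:

```javascript
// Style 1 reads like async/await: "run in parallel, await a stable state".
async function style1(fetchValue) {
  const v = await fetchValue();
  return v + 1;
}

// Style 2 is explicit chaining: "queue a task from your in-parallel steps".
function style2(fetchValue) {
  return fetchValue().then(v => v + 1);
}

// Both compute the same result; they differ in how the steps are expressed.
const fetchValue = async () => 41;
style1(fetchValue).then(console.log); // 42
style2(fetchValue).then(console.log); // 42
```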

@jan-ivar
Member

jan-ivar commented Oct 30, 2023

@padenot when this was discussed at TPAC I think we had concerns about over-specifying this requirement since we saw at least three ways to implement it:

  1. for every in-parallel value change, queue a task to update internal slots
  2. in the getter, queue a microtask to clear cached values (i.e. at end of this task, sort of, but see below)
  3. in the getter, let id be a number unique to the current task, and compare it against (this new internal slot e.g.) [[CurrentTaskId]] and update all internal slots if different

1 suffers performance issues (lots of needless tasks), and risks swamping the task queue if poorly implemented.
2 is imperfect since running at the head of the microtask queue misses code that runs on it (e.g. any code after await 0) — running at the end of the microtask queue would be better, but is hard to do/specify.

I think 3 is closest to how one might implement this most efficiently, and has the least side-effects that I can see. So I vote we specify that, with the aforementioned algorithm disclaimer that any algorithm that has the same behaviour is fine.
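A rough JS sketch of option 3 follows. All names are hypothetical, and since JS exposes no real "current task id", the sketch fakes one with a counter that a harness bumps at each simulated task boundary:

```javascript
// Fake task-boundary machinery for illustration only.
let simulatedTaskId = 0;
const nextTask = () => { simulatedTaskId++; }; // stand-in for a new event-loop task

class TrackStats {
  #cachedTotalFrames = null;
  #cachedForTaskId = -1; // plays the role of the [[CurrentTaskId]] internal slot
  #readTotalFrames;

  constructor(readTotalFrames) {
    this.#readTotalFrames = readTotalFrames;
  }

  get totalFrames() {
    // Re-read the live value only if a different task is now running.
    if (this.#cachedForTaskId !== simulatedTaskId) {
      this.#cachedTotalFrames = this.#readTotalFrames();
      this.#cachedForTaskId = simulatedTaskId;
    }
    return this.#cachedTotalFrames;
  }
}

let frames = 0;
const stats = new TrackStats(() => ++frames);
console.log(stats.totalFrames === stats.totalFrames); // true: same task, cached
nextTask(); // simulate the next event-loop task
console.log(stats.totalFrames); // 2: re-read after the task boundary
```

The appeal of this shape is that no tasks or microtasks are queued at all; invalidation is just an integer comparison in the getter.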

@henbos
Contributor Author

henbos commented Oct 31, 2023

Something like 3) sounds good to me; it is in fact what was argued in the WG to be the simplest, most efficient way to do it without post-tasking. Because we are saying "unique to the current task", it doesn't matter whether we're in a microtask or a new task execution cycle: ANY task change would invalidate the caching? I think this makes the most sense, as it avoids subtle differences depending on which context it runs in, while ensuring track.stats.totalFrames == track.stats.totalFrames is always true.

It's not exactly what we've implemented (for simplicity's sake we just queued a microtask to reset a boolean), but I think it makes sense to update the implementation to use [[CurrentTaskId]].

Shall I make a PR?

@henbos
Contributor Author

henbos commented Oct 31, 2023

PR: #127

@alvestrand
Contributor

This seems like we're being creative - inventing the concept of an ID that is unique to the current task (task or microtask?).
Is this in conflict with the goal of using concepts that are well known in the platform, or is there such an ID elsewhere in the platform that we can use?

@henbos
Contributor Author

henbos commented Oct 31, 2023

If we prefer to stick to existing concepts, the PR could say "queue a microtask", and the implementation would still be free to implement this using microtask IDs as long as the behavior is the same. One upside is that it would match the existing implementation.

I don't have a strong preference though; this is almost editorial... but not quite, because depending on which exact concept we use, this either could or could not get cached in between microtasks.

@alvestrand
Contributor

Editors meeting decided that we want an issue filed somewhere to ask whether the concept of a "current task id" exists elsewhere in the platform - and if not, if one should be created. @jan-ivar to follow up.

@henbos
Contributor Author

henbos commented Nov 2, 2023

Status: we merged the PR. The remaining question is whether we can get something to reference, but as-is we do use the current-task-id language, which should resolve the immediate request.

@henbos
Contributor Author

henbos commented Nov 14, 2023

Jan-Ivar to the rescue: #130
