[Track Stats API] track.getFrameStats() allocates memory, adding to the GC pile #98
Comments
The API shape (sync vs async) is a separate question from which metrics it returns.
The API shape is something we already have WG consensus on. The track.getFrameStats() API was discussed during multiple Virtual Interims, including whether it should return a promise or not. The reason we made this async is that we wanted to make as few assumptions about the browser stack as possible. For example, both Safari and Chrome have separate capture processes, so this is not just a mutex problem but an IPC problem as well. I envision this method being called once per second per track; if it can be abused, it might be appropriate to throttle or cache queries. But if GC is a concern we should take seriously, how about storing the result of the (async) query in an interface like so?
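Something along these lines (a sketch inferred from the reply below; the constructor and field names are illustrative, not quoted from the original proposal):

```js
// The app allocates the stats holder once; each async query fills it
// in, so 1 Hz polling creates no new garbage.
const stats = new MediaStreamTrackStats();
setInterval(async () => {
  await track.getStats(stats);        // UA writes the latest counters into `stats`
  console.log(stats.deliveredFrames); // read fields synchronously afterwards
}, 1000);
```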
Consensus must be confirmed on the list, so we can receive broader input. Adding audio statistics, I think, warrants input from audio experts. IPC is an interesting problem, but how do Chrome and Safari implement APIs like audioContext.outputLatency? Regarding the API shape you propose, it seems user-error prone. E.g., what would the following produce?

const stats = new MediaStreamTrackStats();
await Promise.all([trackA.getStats(stats), trackB.getStats(stats)]);
console.log(stats); // trackA or trackB stats?

May I suggest, if I understand the concern, that the following seems sufficient?

await track.audioStats.refresh();
console.log(track.audioStats);

But even then, I think we should first make sure user agents can't implement the following performantly:

console.log(track.audioStats);

It seems to me a user agent could detect repeated reads and optimize for accuracy then.
A cache would emulate run-to-completion to some extent, but it would not provide consistency with other APIs or objects the way run-to-completion would. Calling …
Do we have a sense of what threshold number of JS objects per second would constitute a performance concern for a real-time application? In the envisioned use cases, this API producing more than 50 objects per second is on the extreme end of the spectrum.
I think we will reach resolution on this matter by reframing the discussion around use cases, as @henbos rightfully points out. It seems like the initial use case was about being able to assess quality (and please correct me if I'm wrong; I'm late to the discussion and have probably missed a number of earlier threads). There are a number of proposed statistics.

For each of those numbers (audio and video), it's important to know why they exist, to then be able to design the API; here's what I'd use them for, along with an estimate of the frequency at which each might be called.

Please either agree or propose your own use case and frequency. Of course, the hardest use case will guide the API design, because it's always possible to implement simpler use cases on top.

Now, in terms of possible API design: we're talking about getting a small number of integers or floating-point values from somewhere in the web browser to either the main thread or a worker thread, possibly cross-process. This information is metadata about the data that flows. The data that flows is orders of magnitude bigger than what we need here, and flows at a rate that is a lot higher than any calling rate that has been mentioned. Updating those values unconditionally per track is not going to impact the performance profile of the implementation. In particular, some of these values are already shipped with the data (audio latencies, timestamps, etc.) in current implementations.

There is no need for synchronicity via promises, because the values are already there. Having an async API is also absolutely not ergonomic, and it can even lead to problems if the value is used for synchronization and the event loop is busy. Finally, we can decide to preserve run-to-completion semantics (like most attributes on the web platform); this can be done by only updating those values during a stable state, and updating them all at once. Or we can decide not to preserve run-to-completion semantics (à la, e.g., …).
We want to poll the API once per second or less. If there are other use cases we can unblock we should discuss that, but I do not see any reason to poll it more frequently. More about this below.
Yes.
Detecting quality issues and notifying the user is also a possibility, but this can be done at 1 Hz or less frequently.
Aggregation does not exclude getting a value each time. Aggregated metrics allow you to get averages over any time period, because you can calculate averages as "delta metric / delta time". If you want the average frame rate, call getStats() at any desired frequency and apply the formula.

This is a lesson learned from the WebRTC getStats() API: when we exposed instantaneous values (e.g. "bitrate" instead of total "bytesSent"), apps got into the bad habit of polling the API at super high frequency just to make sure they didn't miss any dips. When the API was standardized we used aggregated counters, and apps could reduce polling frequency and still detect dips even when polling once every few seconds. This is important for performance: not having to poll APIs unnecessarily often gives the CPU more opportunity to reach idle power states and is generally a good idea to reduce CPU frequency scaling. Reducing scheduling has been a successful performance optimization strategy in Chrome, so I would prefer we don't encourage apps to poll the API more often than needed by exposing instantaneous values. (This is primarily a concern when the app has its own timer, and less of a concern when the app is already responding to existing events, in which case the CPU is not idle anyway.)
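For illustration, a sketch of the "delta metric / delta time" pattern (field names here are placeholders, not the specced names; the timestamp is assumed to be in milliseconds):

```js
let prev = null;
setInterval(async () => {
  const stats = await track.getStats();
  if (prev) {
    const seconds = (stats.timestamp - prev.timestamp) / 1000; // delta time
    const fps = (stats.deliveredFrames - prev.deliveredFrames) / seconds;
    console.log(`average fps over the last ${seconds.toFixed(1)} s: ${fps.toFixed(1)}`);
  }
  prev = stats;
}, 1000); // the same code works unchanged at any polling interval
```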
Frame drop events are needed to tell the difference between, for instance, one big drop followed by recovery and smaller glitches happening continuously throughout the call. If we don't provide this for the app, the app may poll the API 10 times per second instead, preventing CPU idling.
For the use cases I've mentioned, this does not need to be called frequently, because you can compute the average latency experienced over the desired time interval (e.g. 1 second or 10 seconds) as "delta total latency / delta total samples". But if this can unblock more use cases, then that should be discussed. AEC and A/V sync, for instance, have traditionally been solved by the browser (at least for WebRTC), but the trend seems to be to let the app control more and more of the stack.
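E.g., a sketch with assumed counter names and units (total delay as a running sum in seconds):

```js
const a = await track.getStats();
await new Promise(r => setTimeout(r, 10_000)); // any interval you like
const b = await track.getStats();
const deltaSamples = b.totalSamplesCaptured - a.totalSamplesCaptured;
const avgLatencyMs = deltaSamples > 0
    ? 1000 * (b.totalCaptureDelay - a.totalCaptureDelay) / deltaSamples
    : 0; // no samples captured in the interval
```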
Why would you want to poll it in every …?
As previously mentioned, the "delta X / delta Y" pattern means you never have to reset any counters, because you are already in full control of which interval you are looking at. This is actually another lesson learned from the WebRTC getStats() API: different parts of the code may be interested in metrics for different reasons or under different circumstances. We don't want calling the API to have side effects such as resetting the counters, because then one consumer of the API affects all other consumers. This has downsides without upsides.
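A sketch of two independent consumers (updateMeter and uploadTelemetry are hypothetical app functions):

```js
// Reads have no side effects and counters only accumulate, so neither
// consumer disturbs the other; each keeps its own baseline.
let meterPrev = await track.getStats();
setInterval(async () => {               // 1 Hz UI meter
  const now = await track.getStats();
  updateMeter(now.deliveredFrames - meterPrev.deliveredFrames);
  meterPrev = now;
}, 1000);

let reportPrev = await track.getStats();
setInterval(async () => {               // one telemetry upload per minute
  const now = await track.getStats();
  uploadTelemetry(now.deliveredFrames - reportPrev.deliveredFrames);
  reportPrev = now;
}, 60000);
```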
This is a good point. If you "piggyback" metadata onto data that is flowing either way, then you don't have to increase the number of IPC calls. But what if the browser does both capturing and rendering in the GPU process? Then there are no IPC calls to the JavaScript process. Such a browser might throttle the frequency of IPC updates, and then the synchronous API is no more powerful than a throttled promise-based API. But if this is a moot point, then synchronous vs asynchronous might not matter as much as I have been thinking it does. It's hard to have an opinion without knowing everyone's implementation. I think one of the reasons I hesitate with a synchronous API is that at 1 Hz polling frequency it is hard to imagine the GC issue causing any real problems, but I've been surprised before.
TL;DR version:
This may mean defining the value buffer as some object other than a "dictionary"; I don't know how BYOB semantics work with other APIs.
Can you explain which use cases make this critical? FWIW, the current promise-based, non-BYOB approach seems OK to me. For things like input latency/AEC, I would expect the API to be exposed where the actual processing is happening (say Audio Worklet, or MediaStreamTrackProcessor, or VideoFrame...), not through MediaStreamTrack.
Getting multiple metrics at the same time is important when subtracting one value from another; if they are taken from different points in time, you'll get errors. A timestamp is needed for the "delta seconds" when calculating rates (e.g. frames per second).
I think these use cases should also happen on real-time threads, and with different metrics than have been discussed for adding to track, so I'm not sure how much overlap there is between the different use cases discussed.
Some points I haven't seen mentioned:
IOW, I don't think the facts support there being a big difference either way implementation-wise; much of this seems to come down to choice and consistency, where the Media WG and WebRTC WG have diverged. MediaCapture is sort of in the middle (based on WebRTC's unsuccessful attempts to hand it over to the Media WG), so it makes sense for this to come up here. We seem to be arguing about § Consider whether objects should be live or static, and I'd rather we decide based on end-user ergonomics than on implementation difficulties I'm not seeing. This also seems like a more general discussion over API (the future of attributes?), so it might be good to solicit advice from a broader audience.
Agreed, let's get back to the original topic. But I do think aggregate counters are an important feature because, as we learned from pc.getStats(), we need to be able to calculate average rates over user-specified time intervals.
I think I like …

What do you think?
I haven't looked precisely at the implementation strategy we would use for those stats.
MST can be transferred to workers, but not to audio worklets, which are true real-time threads where GC might be a bigger issue.
Agreed, if we start to design things differently from what is being done today. There are plenty of APIs (say rvfc) that create dictionaries/objects on every call, and these calls can be made at 30 Hz/60 Hz. I do not see this particular API as more expensive or requiring more optimization than existing APIs. If we are trying to optimize things, we need a clear understanding of what we are trying to achieve, with testbeds/measurements and so on. If we do not have that, I prefer we stick with the existing API design. Also, BYOB can quickly break if we want to extend the stats (new objects, sequences...).
FWIW my recent experience outside of WebRTC is folks seem to prefer standard metrics over custom math, but YMMV.
I don't think it necessitates that. You do:

const {foo} = track.videoStats;
await wait(1000); // wait() being shorthand for a setTimeout-based promise
const foops = track.videoStats.foo - foo;

But sure, if the main use case is bulk data collection to feed some graph library, then static copies may have an edge since they're what's desired.
A refresh method (which I introduced) is not without usability pitfalls either, the main one being that developers may not be aware it exists. After my last comment, I'm not convinced implementation concerns should outweigh ergonomics here.
Are downstream audio sinks also out of process? If getting these metrics is costly, are we adding them in the wrong place? Having JS trivially trigger IPC repeatedly seems like a bad idea.
Convenience seems a weak argument. If the information is already available, why do we need a new API?
The Web Platform design principles suggest dictionary use for inputs, not outputs, and there's some history suggesting they were primarily designed for that. So I'd describe attributes as the conservative design choice, and dictionary-returning methods as new. It might be useful to have general guidelines for deciding when to use one over the other, and I'm observing that we don't have that.
I find this a compelling argument regarding performance concerns, but not a compelling argument against attributes as the simpler choice.
Most downstream audio sinks go to the content process in WebKit, but they might not have all data available out of process. For instance, audio drops are not known to audio sinks in WebKit's current implementation. The same could be said about discardedFrames for camera capture.
Current APIs seem to suggest that both are conservative design choices. Looking at MediaTrackFrameStats, we knew from the start that we might add categories of non-delivered frames in the future. This validates the extension point.
AIUI, the new information provided in this thread is specifically about performance concerns, in particular GC in real-time environments. Note again that MST audio stats would not happen in a real-time audio environment, since MST is not transferable to audio worklets. Exposing such data in an audio worklet is a different problem with different constraints.
Right, I think the argument was this metadata could piggyback on the data without adding much overhead, if it led to more ergonomic web surfaces.
The characterization of the OP feedback as "real-time" was mine, and not meant to disqualify concerns outside of audio worklets. Rather, the OP points out lack of consistency with existing Media WG APIs like audioContext.outputLatency and mediaElement.currentTime which are capable of returning up-to-date information without async methods or dictionaries. It's worth considering consistency across the web platform and not just within a single WG.
Grouping and synchronization both seem orthogonal to this discussion, e.g. …
Sure, what I am saying is that using a dictionary is the current usual way of doing things; developers know it well.
We are trying to expose capture metrics to the main thread or workers, this...
Some differences between the mentioned other APIs and our API:
I do see the appeal of synchronous APIs like this, but they are not suitable for metrics that update on every audio sample (e.g. every 10 ms) when measuring things in a separate process. It's funny that this is what we're discussing, considering the issue was about making the API more performant, not less. (And even if we could find an example of something bad, that doesn't mean we should copy it.)
Please let's not go down the path of assuming we can always piggyback. We are interested in moving consumers that today live in the renderer process to the GPU process for performance reasons, so it is a very real possibility that signals we could piggyback on today may not be there in the future. We should not bake assumptions about the browser architecture into the API unless we have to; for an API that isn't even real-time, it seems wrong to bake in the requirement that metrics are pushed in real time to non-real-time contexts.
As previously discussed, I think total counters are superior to current rates because they allow us to measure over any interval we desire. I also think some variation of the async APIs we have discussed is superior, in order of personal preference (though any of them works for me):
Async APIs: suitable for grabbing snapshots from other contexts, possibly involving IPC.
The point I was trying to make is that, should this info be useful for realtime environments (and it might be), we should expose it there too, in a more efficient/synchronized way. This API would then become an optimized convenience. I tend to prefer …
Are you sure? Do you have examples outside of WebRTC? I couldn't find a comprehensive list, so for fun I asked GPT4, and it found only a few odd ones that weren't interfaces, finally saying "WebIDL dictionaries are primarily used for passing data or configuration objects as method parameters, and as such, they are less commonly used as return values." 🙂 I think the winner here is attributes.
The refresh method is a red herring. See OP. In my mind, the only two ergonomic options are, e.g. for video:

const {deliveredFrames, discardedFrames, totalFrames} = track.videoStats;
const {deliveredFrames, discardedFrames, totalFrames} = await track.getStats();
It's "an approximation of the current playback position that is kept stable while scripts are running." to respect w § 5.2. I.e. it's an approximation because it's not allowed to advance while JS is running, not because it's inaccurate. As you can see here, it updates every ~10 ms in all browsers at what appears to be millisecond accuracy. No |
There's even a frame counter in the fiddle, though I fear it was never standardized:

const frames = v => v.mozPaintedFrames || v.webkitDecodedFrameCount;

I wonder what the story was there.
We've narrowed it down :) that's progress. At this point, it's hard to argue that there is a big difference between the two from an ergonomics point of view. Tomato, tomato. We have two things we need to decide:
From a purely ergonomics point of view I think we want to allow rewiring the above to the following:
This would support dictionary over interface based on ergonomics now that we've concluded GC is no longer a concern. But either will work, it's just that interface is more error-prone.
I think what's happening is we're piggybacking on existing signals. If we sent frames directly from the capture process to the GPU process and had no signals to piggyback on, would we still want to update every ~10 ms, or would we relax it a bit? I think the answer to that question depends on the use cases: are we supporting real-time or non-real-time contexts?
About currentTime, I think there are use cases for this API for synchronising the web content with the media content being rendered, hence why it is aggressively updated. In the getStats API case, I do not think there is a desire to support any such scenario (we have other APIs for this kind of processing).
Glad you remembered …. Hopefully I don't have to argue why synchronous attributes are simpler for developers than a dictionary-promise method. If the extra interface is confusing, we can of course put the attributes on the track itself:

const firstDiscardedFrames = track.discardedFrames;
...
console.log(`Discarded frames: ${track.discardedFrames - firstDiscardedFrames}`);

Also note there's no …
I don't think we've concluded GC is no longer a concern. It would clearly happen in one API and not the other. Yes, some browsers may be quick to clean up the garbage, and other methods may litter more, but that's no reason to create garbage if it's not helpful. I also don't speak for @padenot.
This is not a valid argument; the data has to be in the content process anyway (see the exhaustive breakdown below). It's like saying any network data is outside content processes: true, but irrelevant, because it has to be in the content process eventually, and sooner rather than later if the implementation is of high quality.
Real-time is a spectrum. There is hard real-time (like real-time audio processing), and there are softer flavours of real-time, such as graphics rendering at 60 Hz. A digital audio workstation is of higher quality if it has this piece of information reliably on time. Similarly, something that processes and renders video at 60 Hz will want to know extremely frequently if there are quality issues; maybe it wants to back off some expensive processing.
That is correct.
Those two APIs you're mentioning both update more often than every 10 ms: an …
It's often not useful to make a general statement when we know everything about the specifics of a situation. Here we have complete information, and we can make a good API design that's better for users and developers, that's it. Please remember the priorities:
Being able to have accurate values quickly leads to better software. Having a sync API with attributes that doesn't GC is more ergonomic than having a promise API with a dict. It's not hard for us browser developers to send a few dozen numbers alongside other pieces of information that have to be made available to the content processes always, and that can never not be made available, because scripts can and do observe those values. Let's comment on the proposed metrics:
In a competitive implementation, you're sending and receiving on the order of 128 × channel-count float32 samples every 128/44100 seconds (this translates to sub-10 ms round-trip latency with e.g. a MacBook). We're proposing to add a small number of integers, which are metrics about the audio data, each time we send this audio data. We're already sending some metadata anyway. For video, we're already sending metrics, and they are already available synchronously.
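For scale (illustrative arithmetic): a 128-sample stereo chunk is 128 × 2 × 4 = 1024 bytes of float32 samples and crosses the boundary every 128/44100 ≈ 2.9 ms, so a handful of 4-byte counters riding along adds at most a few percent to traffic that already exists.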
This is something we can discuss, but the current way of doing it (as specced in the draft) isn't satisfactory because it masks transient events: it forces authors to rely on time averages, which is inherently a low-pass filter.
Async APIs are never superior, always slower, and never more ergonomic than sync APIs when the information is already present and updated in the content process, and it is, here.
This is precisely why we need frequently updated input and output latency: to synchronize with media content being rendered.
I'll note that there are already commercial and free-software products [0] [1] [2] that go to great lengths to estimate audio input latency, and it's only best-effort (a static number). Those products cannot function without an accurate audio input latency. [0]: https://www.w3.org/2021/03/media-production-workshop/talks/ulf-hammarqvist-audio-latency.html (Soundtrap developer)
This issue was discussed in the WebRTC May 2023 meeting – 16 May 2023 (Issue #98: track.getFrameStats() allocates memory, adding to the GC pile).
I like async because it allows more implementations in theory. But in practice we do pass through the renderer process, so I think this is a matter of mutex versus PostTaskAndReply in Chromium.
Youenn, does Safari have its audio frames pass through the renderer process unconditionally?
In Safari, each audio frame is not individually sent to the content process. Instead, a ring buffer with shared memory is used. Camera video frame pixels are not sent to the content process, except in the case of sw encoding or VideoFrame.copyTo, which are async tasks. If we add more stats in the future, this will further increase the cost of sending data to the content process without the content process actually using it.
Audio input / audio output will presumably be done in Web Audio, so in a worklet where there is no MediaStreamTrack. I understand the desire to have an API to get that info, and that this info would be updated every time there is a new audio chunk to process. Should it be a separate API (one that could include input and output latency estimates)?
I hold the opinion that synchronous APIs should read from internal slots, such that you neither have to use a mutex nor maintain a cache in case the API is called multiple times in the same task execution cycle. I am in favor of a sync API if the information is truly available on that thread, either because we are in a context where the information is available anyway (web audio worklet, maybe?), or in a context where the app subscribes to an event that fires when the information is available. I think(?) we all agree that we don't want to PostTask on every frame to the main thread to update metrics, but note that if the app wants to poll the API frequently, we still have to post a microtask on every get() to clear the cache that we're forced to maintain on the main thread. This is what we had to do for getContributingSources().
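A sketch of that caching pattern (readLatestFromSharedState() is a hypothetical stand-in for the cross-thread read):

```js
// The getter returns a stable snapshot within one task execution cycle,
// preserving run-to-completion; a microtask clears the cache afterwards.
let cache = null;
function getStatsSync() {
  if (!cache) {
    cache = readLatestFromSharedState();      // hypothetical cross-thread read
    queueMicrotask(() => { cache = null; });  // invalidate once this task ends
  }
  return cache;
}
```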
To simplify discussion: nobody is suggesting PostTask. Queuing lots of tasks on the main thread would be a terrible implementation, risking queue buildup which might itself cause outdated information. The implementation would either be lock-less (or use a mutex until that can be implemented properly), or work however the existing sync web APIs mentioned in this thread are accomplished, e.g. ….

It might shed some clarity if someone could explain how those existing APIs are possible, and why we cannot follow their existing pattern (one of the metrics requested even appears to be the same: decoded frame count).
I believe this was answered off-thread, but for completeness, I believe the answer was: a lock-less ring buffer enables discrete access to consistent data sets.
This is what Firefox does as well I believe, which is what allows for lock-less access to metrics accompanying the data.
Right, but even if the pixels stay, I understand something representing them is still sent over IPC? The overhead would be adding a minimal set of metrics to that IPC, which seems to be all that's needed to support these APIs and stay consistent with existing patterns in HTMLVideoElement.
This is the case now. Note that the presented use case for this kind of data in realtime processing is audio latency, where we do not have IPC for each audio chunk and where an API at the MediaStreamTrack level does not seem like a great fit. In that same vein, HTMLVideoElement.requestVideoFrameCallback might be a better API for exposing information useful to realtime processing, or MediaStreamTrackProcessor/VideoFrame streams for video processing. Given this, I am still interested in understanding more about the desire to shape this API for realtime processing, given it would likely have an impact on performance.
Except for a microtask to clear the cache every time the getter is used by JS, right?
If I'm code searching correctly (so IIUC...), this is how Chrome does it:
I can vouch for the fact that currently Chrome is aware of each audio and video frame in the renderer process, due to historical implementation decisions. I'm not convinced the implementation would necessarily have to look this way, but Chrome is like this. So at least in Chrome, we could implement any statistics API with locks and microtasks to clear caches.
While that might be one way to invalidate a cache, it doesn't seem like the only way. E.g. a hypothetical implementation that counted its tasks could simply compare its current task number against the task number stored with the cache, and learn whether to refresh the cache that way. No microtask needed.
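Sketched, with currentTaskNumber() as a hypothetical stand-in for the UA's internal task counter (incremented once per event-loop task), and the same hypothetical shared-state read as in the earlier sketch:

```js
let cache = { task: -1, value: null };
function getStatsSync() {
  const task = currentTaskNumber();           // hypothetical UA hook
  if (cache.task !== task) {                  // first read in this task?
    cache = { task, value: readLatestFromSharedState() };
  }
  return cache.value;                         // stable for the rest of the task
}
```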
OK so ring buffer and task number would do the trick in that case.
In this case, should extra IPC be added for the sake of the sync API, or would the sync API just need to return frequently updated information that may be updated less often than every audio chunk?
I'd like us to unblock this issue. Even if I don't like accessing off-thread information and caching it, both …
There is a lot of implementation freedom (mutex, ring buffer, or periodically updated internal slot) as long as the exposed information only has to be "relatively recent".
Does the lockless ring buffer block the producer if the consumer has not read from it yet? If we're talking about something like this and its performance remarks …
Disagree: it's trivial in this day and age to communicate an arbitrary number of items in a coherent manner, lock-free, when the data is in the same address space. It's not very hard to do this across processes either.
There's already, by definition, communication associated with the data of those tracks. Adding metadata to this data is ~free; the data is either:
Adding IPC traffic for something that is already free to send to the relevant processes (or in fact is already being sent) is not a good solution: that solution becomes expensive whenever the data is requested, and is only free otherwise. The alternative proposal (a sync API) is always free.
You also need latency values when running e.g. an echo canceler: it needs to know the real latency, including any buffering at the application level. You need output latency when playing any video content that has sound, for A/V sync purposes. You need to know about video frame drops to communicate them to JS, so that it can observe them and potentially serve you different media. It's not all about giving potentially late and infrequent numbers to content. Looking at the links to real products and the workshop talk I mentioned above, it's clear that it's important for application developers to get precise and reliable information for their app to work. As you say, it's possible to update those counters rarely, but why bother? The information is there, it's free, you can update it at the correct rate, and it solves real problems.
There is no "refresh rate", this is about real-time update of metadata related to data used by content processes. There's no risk of web compat issues, because the numbers represent what's actually happening.
Firefox will set some of the proposed numbers to 0, always, because its architecture prevents some of the problems that require those metrics, and that's fine. This is not about "preemptively sending data"; it's about adding metadata to data that already needs to be in content processes, so it can be exposed.
Async is always more complex than sync, and by definition the data is already old when it's received by the content process in the case of async method calls. From a browser point of view, I'd believe that, depending on the architecture, it's easier to implement the async method (just add a couple of cross-process method calls, and that's it), but this is not the case for all implementations. The important point here is that the priority is in this precise order:
and that it's not particularly hard to send some metadata with the data anyway.
If you're processing microphone data, there is a …
The sync API returns the latest available metadata, which it has received alongside the actual data.
A simpler design for what we're looking at here is triple buffering (https://searchfox.org/mozilla-central/source/media/libcubeb/src/cubeb_triple_buffer.h); this is how we implement it (and plan to implement it, since we haven't implemented all of this) in Firefox. Additionally, we'll be adding metadata to the data we already communicate across processes. Triple buffering then allows fanning the data out to other threads once it's been received via IPC.
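Not triple buffering itself, but a related lock-free pattern can even be expressed in JS over a SharedArrayBuffer: a seqlock-style mailbox, where the writer bumps a sequence counter to an odd value before updating and an even value after, and a reader retries if it observes a torn write (all names and the layout here are illustrative):

```js
const sab = new SharedArrayBuffer(8 + 4 * 8);
const seq = new Int32Array(sab, 0, 1);        // sequence counter
const metrics = new Float64Array(sab, 8, 4);  // e.g. frames, drops, latency, timestamp

function writeMetrics(values) {               // producer side (e.g. the IPC thread)
  Atomics.add(seq, 0, 1);                     // odd: update in progress
  metrics.set(values);
  Atomics.add(seq, 0, 1);                     // even: update complete
}

function readMetrics() {                      // consumer side (e.g. the JS thread)
  for (;;) {
    const s = Atomics.load(seq, 0);
    if (s & 1) continue;                      // writer is mid-update, retry
    const snapshot = Array.from(metrics);
    if (Atomics.load(seq, 0) === s) return snapshot; // no write raced us
  }
}
```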
OK. Well then I'm convinced it can be implemented efficiently, and that this is just a matter of API shape versus implementation effort.
MediaStreamTrack is just an opaque object that connects sinks and sources; it is not the object that does processing. Exposing a sync API at the MediaStreamTrack level would further push web developers towards doing realtime data processing in main-thread contexts. This would be to the detriment of the user. I also still disagree that a sync API is free, since some of this data is currently not sent to WebProcesses. In the future, optimisations or features like isolated streams might further move away from sending such data to WebProcesses.
To be discussed at TPAC, but having gathered implementation experience with this API and thought about it some more, I am no longer concerned about the performance issues. That is, if people see value in having this API be synchronous, I see no reason not to do that. I filed #105 and created a PR.
The TPAC resolution was to merge #106 making this synchronous, plus filing follow-up issues such as #108 and #107. I may file a separate issue to revisit the "dictionary vs interface" question with regards to GC, to make sure this is what we really want, but as things stand, the PR that will merge is using a [SameObject] interface, and I have code in Chromium to update our WebIDL and implementation to match this PR. Are there any other issues from this thread that need further discussion, such as audio metrics definitions? @padenot @jan-ivar, can we file separate issues if more audio-specific discussions are needed? It's hard to keep track of all the discussions in this issue, so I'd like to break it down into pieces and close this issue as resolved when #106 merges.
This issue was mentioned in WEBRTCWG-2023-09-12 (Page 55).
Closed by #106.
track.getFrameStats() (getStats after #97) creates a new JS object with properties to be read, adding to the GC pile every time it's called, which may be dozens of times per second, per track.
Feedback from @padenot is that real-time JS apps try to avoid garbage collection whenever possible, and that allocating an object just to read values seems inherently unnecessary. Other ways to read real-time values exist in the Media WG, which we may wish to consider, e.g. audioContext.outputLatency.
Why it allocates
The existing async API queues a single task when the application requests data, creating a garbage-collectable dictionary to hold the results. In contrast, a synchronous getter, we thought, would require the UA to queue tasks continuously to update its main-thread internal slot, just in case the JS app decides to call it, even if it never does.
Ways to avoid it
The Media WG shows there are other ways.
outputLatency is synchronous without an internal slot. In Firefox, it uses a lock, but it can be implemented without one. The Media WG decided to violate § 5.2. Preserve run-to-completion semantics here, which I don't like, but that seems trivial to remedy by caching reads. To further reduce overhead, we could put getters on a dedicated interface instead of on the track itself, and have a lazy getter for that (sub)interface attribute. TL;DR:
An implementation could do a read-one-read-all approach to maintain consistency between values. WDYT?
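A sketch of what read-one-read-all could look like (readAllCountersAtOnce() is a hypothetical coherent read; the class and attribute names are illustrative):

```js
// Touching any stats attribute snapshots every counter at once, keeping
// the values mutually consistent and run-to-completion stable in-task.
class VideoStatsView {
  #snapshot = null;
  #all() {
    if (!this.#snapshot) {
      this.#snapshot = readAllCountersAtOnce();        // one coherent read
      queueMicrotask(() => { this.#snapshot = null; }); // expire after this task
    }
    return this.#snapshot;
  }
  get deliveredFrames() { return this.#all().deliveredFrames; }
  get discardedFrames() { return this.#all().discardedFrames; }
  get totalFrames()     { return this.#all().totalFrames; }
}
```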