Should the API allow setting both samplingFrequency and reportingFrequency? #209
I'm fine either way. We'd need to preserve the ids though. This might also create a bit of downstream churn. Do you want to also rename "reading" as "sample" while you're at it?
I would keep "reading".
The 'reading' term is commonly used in platform APIs. I would keep it. +1 As for 'sampling', maybe we can use 'reporting' frequency? Because that is what {frequency: Hz} is doing at the moment. I added some info that might be relevant in #198. Otherwise, we would need something like Is it important to know the sampling frequency? For example, many motion sensors have 2 main modes:
In 'full' mode, the developer can select at which rate data is delivered, or poll at the max possible rate. I think I saw a few specs where max output rate < max sampling rate: HW samples data at kHz rates, then filters / calibration / compensation take some time, and you get a much lower output rate after that. Maybe it would be better to use the same terms in the GenericSensor API? The HW sensor will 'report' readings at a frequency that is optimal for all Sensor instances. A Sensor instance will 'report' reading change events according to its own frequency. What do you think?
@alexshalamov what you're bringing up here is precisely what I was hinting at in #198. I 100% agree we need to distinguish between sampling and reporting frequencies. We have to figure out how they interact, however. Are the two disconnected? That is: do updates to the shared memory ping each sensor, which then decides whether it wants to emit an event or not, or does each sensor's internal clock poll the shared memory at its own pace? (I would imagine the former is the right strategy.) Then we need to figure out quite precisely how the user agent should react to one of these two options, or both, missing. And also to absurd options like: let options = { samplingFrequency: 60, reportingFrequency: 120 }; // INVALID Similarly, can we imagine something like: let options = { samplingFrequency: 240, syncReportingToAnimationFrame: true }; I wasn't aware of the low-power / full-mode terminology, but that's the behavior I was hinting at with the periodic reporting mode vs. implementation-specific mode. I'm glad we're finally getting there. It would be good to figure out how we want to expose this so as to make the default approach the most battery-friendly possible.
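As a purely illustrative sketch (not spec text), here is one way a user agent could reject the inconsistent combination mentioned above; the option names `samplingFrequency` and `reportingFrequency` are the terms proposed in this thread, not shipped API:

```javascript
// Hypothetical validation sketch; option names follow this thread's proposal.
function validateFrequencyOptions({ samplingFrequency, reportingFrequency } = {}) {
  if (samplingFrequency !== undefined &&
      reportingFrequency !== undefined &&
      reportingFrequency > samplingFrequency) {
    // Reporting faster than we sample would only ever repeat stale readings.
    throw new RangeError(
      `reportingFrequency (${reportingFrequency} Hz) cannot exceed ` +
      `samplingFrequency (${samplingFrequency} Hz)`);
  }
  return { samplingFrequency, reportingFrequency };
}

validateFrequencyOptions({ samplingFrequency: 120, reportingFrequency: 60 }); // OK
// validateFrequencyOptions({ samplingFrequency: 60, reportingFrequency: 120 }); // RangeError
```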
@tobie Sometimes it is impossible to change the sampling frequency, so the developer's only option is to use the 'reporting' frequency. The HW sensor might be used for:
There might be no way to set a '240Hz' sampling frequency, as in your example.
This is precisely the distinction I've been trying to convince everyone to make between periodic reporting mode sensors and implementation specific reporting mode sensors since pretty much forever (I agree the name might have been a misnomer but anyway).
Absolutely, but specifying a reporting frequency for a sensor that's operating in low-power mode is sort of absurd. On the other hand, not being able to set the sampling frequency for a gyroscope is going to prevent a whole bunch of use cases from working properly. Hence you need to be able to handle all these cases, deal with sensors that don't support some of them appropriately (e.g. fire an error event when a sampling frequency is requested on a sensor that only supports low-power mode, as I originally spec'ed, etc.), and make the interface/option setting intuitive enough that developers actually do the right thing most of the time. :)
I was talking about the normal mode of operation, not the low-power one.
Some sensors only provide a way to set the output data rate, even when they sample at 8 kHz, like 6DoF sensors. Maybe we can keep the simpler concept at this time? {frequency: Hz} would be the rate at which data is delivered = ODR in HW sensor terms. We can spec it as 'reporting' frequency.
Not objecting to 'reporting frequency'. On the other hand 'reporting frequency' is 'sampling frequency' for a Sensor object, isn't it? :)
That's sort of what I was getting at in a longer answer to @alexshalamov. Whether the sensor is polled at 120 Hz or it operates at 8 kHz and spits out data to the user agent at 120 Hz is an implementation detail from the web developer's perspective.
For the JS Sensor object, kind of yes :)
Agree, what the web developer cares about is the 'reporting/sampling/whatever frequency' (how often data can be obtained from a Sensor object) + latency
True, but we're only interested in how it can affect a client on the JS side.
So I would not define too many HW-related details in the spec, but rather how the Sensor interface should work.
Agree, especially when some HW platforms might not even allow setting the actual 'sampling frequency'.
So this is a rather large debate I'd like to at least loop @rwaldron and @kenchris in on. The whole premise of this new API design for sensors has been to provide Web developers with means to more directly interact with the underlying HW sensors, following the precepts of the Extensible Web Manifesto. If we're now instead deciding that we don't really want to give developers lower-level access, and instead only offer them the ability to hint at getting low latency, we're completely failing those principles. Developers who want to muck with sensors at such a low level know what they're doing and will want to control the sampling frequency at the HW level. They won't really care about the reporting frequency outside of handling the data reporting somehow. So agreeing on this is absolutely critical.
Even platform APIs do not allow this. I don't think we should set such goals for the JS API.
I think we're running around calling different things with different names and that makes the whole conversation very difficult. For the sake of this argument, let's say that setting a Now let's agree that setting In contrast, setting Now, how this My understanding is that is roughly the contract provided at the platform level.
So this boils down to #209 (comment): reporting frequency + desired latency, isn't it?
Plus batching, and also rAF syncing (implicitly, from the above; when devs ask for a reporting mode of 60 Hz, that's what they're implying). But overall, I would say yes. The hard part is exposing this in a good way to developers.
My feeling is that this is absolutely not what developers would expect these things to do. And you'll end up with folks cranking the frequency up as high as they can (thinking it's the polling frequency) and then complaining that the API is shit.
Does it really? I thought these referred to the polling period (1/f).
I meant it allows passing the same two things: a sampling period (not polling period!!), which is "The rate sensor events are delivered at", i.e. the reporting frequency in our terms, plus a reporting latency hint: "Maximum time in microseconds that events can be delayed before being reported to the application."
I don't understand the distinction between these two terms. :(
The "polling period" term implies polling is happening, which might not be the case. "Sampling period" is a more generic term; it basically means that we've got new readings but we do not specify how we got them.
I tried to explain that in this comment. The HW sensor samples some physical property at the 'sampling frequency'. Polling is an application-level concept: the developer can create a repeating timer and poll data from the sensor buffer, access it on demand without a timer, or wait for an interrupt from the interrupt controller and get readings at the output data rate frequency.
So outside of the terminology issue between polling and sampling, which I think is a red herring, I still don't think I agree with your assessment if we use the terminology I outlined above:
Would you agree that: Android's and: Android's given
Cool, so you're both talking about the same thing, and we all agree it's an implementation detail, hence the term polling is inadequate and I should stop using it. You've convinced me on that front. :) So sampling it is! Would love to see if we have the same understanding of #209 (comment), now. :)
Sorry for the delay. Android's maxReportLatencyUs is used as described below: "The events can be stored in the hardware FIFO up to maxReportLatencyUs microseconds. Setting maxReportLatencyUs to a positive value allows to reduce the number of interrupts the AP (Application Processor) receives," In our case, I propose the following terminology: sampling frequency is the frequency at which the UA obtains sensor readings from the underlying platform (but these readings are not necessarily exposed to JS at the same rate). reporting frequency is the frequency at which At the moment, by setting {frequency: .. Hz } the user actually sets the desired Reporting latency vs sampling frequency in options In Chromium (where JS runs in a different process) we use a shared buffer to deliver sensor readings from the platform side to JS. The JS thread reads from this shared buffer at arbitrary moments, causing some extra latency. Increasing the sampling frequency (i.e. the frequency at which sensor readings are written to the shared buffer) can reduce this extra latency, but that might not be the case for other UAs (in theory, sensor notifications from the platform could be received right in the JS thread). Therefore giving a
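The shared-buffer decoupling described in this comment can be sketched as follows; `SharedReadingBuffer` and its methods are illustrative stand-ins, not Chromium internals:

```javascript
// Toy model: the platform side writes at the sampling frequency, the JS side
// reads at its own reporting frequency and only ever sees the latest reading.
class SharedReadingBuffer {
  constructor() { this.latest = null; }
  write(reading) { this.latest = reading; } // platform side, sampling rate
  read() { return this.latest; }            // JS side, reporting rate
}

const buffer = new SharedReadingBuffer();
// Four 240 Hz samples arrive between two 60 Hz reads...
[1, 2, 3, 4].forEach(v => buffer.write({ value: v }));
// ...and the slower reader observes only the freshest one.
console.log(buffer.read().value); // 4
```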
@pozdnyakov +1 for Also, since sensor HW can be shared by multiple platform components + HW might not support setting of the actual
Thanks for your explanation about Agreed wrt sampling frequency and reporting frequency terminology.
That's true of the Chromium implementation, not of the spec. And I think it would be a violation of the principles of the extensible web manifesto to move towards this model, and would provide little benefit over the existing APIs (
Yeah, agreed, this latency is implementation-specific.
You just defined sampling frequency above as "the frequency at which UA obtains sensor readings from the underlying platform"; this redefinition here is implementation-specific and muddles the conversation.
That's not the latency I'm interested in. The latency we are considering here is the one that is caused by the frequency at which the underlying sensor is sampled. Not the one induced by the specificities of the implementation (which I'm calling PIPELINE_LATENCY below). If a sensor is sampled at 1 Hz, i.e. every second, then there will be a latency between when the measure is taken and when the measure can be displayed on screen that can be up to PIPELINE_LATENCY + 1000ms. If the same sensor is sampled at 100 Hz (and can deliver fresh readings at this pace), then the max latency will be: PIPELINE_LATENCY + 10ms. At 1 kHz, max latency will be PIPELINE_LATENCY + 1ms. This sampling frequency-induced latency is not implementation specific and is the one we want to be focusing on in the spec.
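The arithmetic in the comment above can be restated as a small helper. This is just a sketch, with `PIPELINE_LATENCY` the same stand-in constant used in the comment:

```javascript
// Worst-case reading age = pipeline latency + one full sampling period.
function maxLatencyMs(samplingHz, pipelineLatencyMs) {
  return pipelineLatencyMs + 1000 / samplingHz;
}

const PIPELINE_LATENCY = 5; // ms, purely illustrative value
console.log(maxLatencyMs(1, PIPELINE_LATENCY));    // 1005: 1 Hz adds up to 1000 ms
console.log(maxLatencyMs(100, PIPELINE_LATENCY));  // 15:   100 Hz adds up to 10 ms
console.log(maxLatencyMs(1000, PIPELINE_LATENCY)); // 6:    1 kHz adds up to 1 ms
```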
So that's true of PIPELINE_LATENCY, but not of the sampling-rate induced latency, which (again) isn't platform specific. On the other hand, and while that works for polled sensors, I'm not sure how that works for interrupt-based sensors. We might want to discuss this (and might end up having to lean towards a similar solution as the "hint" one you suggest below, but if we do, it will be because of interrupt vs. polling, and not because of some implementation-specific concerns at the UA level.)
As mentioned above, I don't think a "latency hint" is the appropriate solution. "Hint" hints at non-normative, quality-of-implementation issues, which traditionally have left Web developers hanging, so I'm generally sceptical of them. Nevertheless, I'd be keen to hear a more precise proposal here. What would this API look like, how normative would the prose around it be, etc.
This doesn't seem to be an issue on either Android or iOS (but I might be wrong). Can you point out platforms where this actually is an issue?
@tobie @pozdnyakov just an idea, what if we define
That is a basic HW resource sharing problem; I will add more links / references in this comment.
Sure. But afaik both iOS and Android have managers in software which mitigate that issue by polling at a resolution that satisfies the requirements of all of their clients (within HW bounds, obviously). Put differently, if iOS and Android can handle something like this for apps, there's no reason we can't offer something similar at the Web level. But again, I think it would be useful, before discussing this hint suggestion further, to get an idea of the related API and normative language around it.
That's unrelated to web developer intent (at least in the use cases I'm familiar with) and adds substantial cognitive load. It's also not a term I've seen used much around the kind of sensors we're planning to expose.
@tobie You are absolutely right. What I wanted to address is that, let's say, iOS, Ash in ChromeOS (10Hz) or Windows WM uses the Accelerometer to get basic orientation info. The web page that creates
What use cases are you familiar with that cannot be achieved with oversampling?
Oh, it's not whether the requirements for the use cases can be met or not, it's whether the terminology makes sense wrt said use cases.
What use cases are you familiar with? I would like to know, so I can help you with understanding / researching the relevant areas.
Basically, what I'm saying is I've never heard this terminology used in the context of MEMS sensors. That's why I don't think it's a good fit.
To revisit a few points already made above...when registering a listener, the Android API allows defining:
Since the Android API and HAL can be fine-tuned for some minimum viable hardware expectation across all supported devices, it can offer such optional properties to be defined in application code. Similarly, iOS puts limitations on sensor interaction, which we can safely say is based on their own hardware expectations. Web applications that are expected to run in any browser across both Android, iOS, all the desktops, all the laptops and whatever else... won't have such affordances. An application that sets a This strikes me as one of many instances where the goals of this specification are misunderstood. I understand that there are groups that are implementing generic sensor and its derivatives in non-browser JS runtimes. I also understand those implementations might benefit from
@rwaldron +1, completely agree with you, and I tried to explain that @rwaldron There are use cases where the effective output rate of the sensor might need to be higher than 'reportingFrequency'. For example, data is updated at 240Hz, while VR or game content is rendered at 60Hz. This will reduce latency for data provided by inertial sensors. What do you think about simplifying that concept for web developers by providing
@tobie There should be publications / papers, and many inertial sensors actually work that way. I agree that we should avoid increasing cognitive load and simplify that concept for web developers; thus, I think a 'low-latency' flag might be a good start.
@rwaldron: Having clear terminology that everyone agrees on to refer to both reporting and sampling frequency is already a great step forward. This wasn't the case until now. @alexshalamov, @rwaldron: If we were not to provide a way for web developers to set the sampling frequency, am I correct in assuming that this would be decided by the UA based on the reporting frequency and a possible latency flag? So for example, let's say the Web developer requested the following: { frequency: 100, latency: "normal" } Is it fair to assume from this that both the reporting and sampling frequencies would be around 100 Hz? Similarly, if the Web developer now chose: { frequency: 1000, latency: "lowest" } (Before you start saying that this is something ridiculous for developers to do, please bear with me, that's precisely the point: they will do this. Remember the What would the sampling frequency be? Wouldn't that effectively hit the HW limits of the underlying system on a number of devices and put us in a similar situation as if the developer had required a sampling frequency above the supported one? What's the plan in such cases? To deliver 1000 events per second even if we don't have new data? To cap the reporting frequency at the maximum sampling frequency supported by the underlying system? To arbitrarily limit the reporting frequency to the lowest common denominator? Finally, imagine we do end up with a { frequency: 240, latency: "lowest", rAFSync: true } Ignore And now what about: { frequency: 240, rAFSync: true } Would frequency be ignored altogether in a window, or would it be used to inform the latency?
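One possible, purely illustrative way a UA could resolve the { frequency, latency } shape discussed above into a sampling frequency; the 4x oversampling factor, the hardware cap, and the function name are assumptions for the sake of the sketch, not a proposal:

```javascript
// Hypothetical policy: a "lowest" latency flag oversamples (here 4x) to keep
// readings fresh, and everything is clamped to the hardware maximum.
function resolveSamplingFrequency({ frequency, latency = "normal" }, hwMaxHz) {
  const oversample = latency === "lowest" ? 4 : 1;
  return Math.min(frequency * oversample, hwMaxHz);
}

console.log(resolveSamplingFrequency({ frequency: 100 }, 400));                     // 100
console.log(resolveSamplingFrequency({ frequency: 100, latency: "lowest" }, 400));  // 400
console.log(resolveSamplingFrequency({ frequency: 1000, latency: "lowest" }, 400)); // 400 (HW cap)
```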
We talk about generic use cases and requirements for the Generic Sensor API a lot, yet they are not documented anywhere, see #181. This causes confusion. Concrete sensor specs have their specific use cases documented.
Well, generic sensor use cases are really the sum of the concrete sensor use cases, so maybe we should just reference those.
Created an interactive diagram to discuss this issue further. Please add comments about the diagram itself directly in its pull request (#232) and keep only comments related to the issue at hand in this issue.
If you sync to window.requestAnimationFrame / VRDisplay.requestAnimationFrame, you could make "frequency" be defined in terms (times, fractions) of rAF. Now that even rAF can mean different things, like UA hardware or a VR headset, how do we decide which one to sync to? If we can answer that, we might also be able to answer what we sync with in a Worker. There is a lot of talk about throttling tabs and workers in Chrome, so this might become important soon.
Yup. I think we discussed something like this as part of one of the first issues.
Can't agree more. The devil is in the details, and that's particularly true of such an API. These issues will have to be meticulously understood and addressed if we want to truly improve on the
From #209 (comment) it follows that setting the sampling frequency directly from the JS API does not look like a feasible option:
So, the JS API just sets the reporting frequency, and then the UA requests an appropriate sampling frequency from the underlying platform. #290 has introduced the related definitions and clarifications to the spec, according to the terminology proposed at #209 (comment).
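The arrangement settled on here, where JS supplies only a reporting frequency and the UA maps it to a platform sampling rate, could look roughly like this; the supported-rate list and the function name are hypothetical:

```javascript
// Pick the slowest supported platform rate that still meets the requested
// reporting frequency; fall back to the platform maximum if none does.
function pickPlatformRate(reportingHz, supportedRates) {
  const sorted = [...supportedRates].sort((a, b) => a - b);
  return sorted.find(rate => rate >= reportingHz) ?? sorted[sorted.length - 1];
}

console.log(pickPlatformRate(60, [15, 30, 60, 120]));  // 60
console.log(pickPlatformRate(90, [15, 30, 60, 120]));  // 120
console.log(pickPlatformRate(500, [15, 30, 60, 120])); // 120 (capped at platform max)
```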
Sampling frequency is a more generic term. Not all sensors, nor all platform sensor frameworks, support polling, so the "polling frequency" term does not look quite appropriate for a generic sensor API.