How is adaptive frame rate to be handled? #193

Closed · jan-ivar opened this issue Jun 16, 2015 · 10 comments

@jan-ivar (Member)

(Asking here first, hoping this is covered)

Assume a hi-res camera does 30 fps in good light but drops to 10 fps in poor lighting, in all its modes. It doesn't tell us what the current lighting conditions are, I think.

Issues (the calls in question are sketched after the list):

  • What should MediaCapabilities show for frameRate?
  • What should MediaSettings show for frameRate?
  • Should getUserMedia({ video: { frameRate: { min: 20 } } }) succeed?
  • Should getUserMedia({ video: { frameRate: { max: 20 } } }) succeed?
  • Can the overconstrained event ever fire due to a change in lighting?
  • What is the fitness distance of this camera for frameRate x?
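
For concreteness, here is a minimal sketch of the calls these questions refer to, assuming the per-track getCapabilities()/getSettings() accessors from the spec (run inside an async function); what the low-light values should read is exactly what is being asked:

```js
const stream = await navigator.mediaDevices.getUserMedia({
  video: { frameRate: { min: 20 } } // should this ever succeed for the OP camera?
});
const [track] = stream.getVideoTracks();
console.log(track.getCapabilities().frameRate); // { min: 10, max: 30 }? { min: 30, max: 30 }?
console.log(track.getSettings().frameRate);     // 30 in good light, 10 in the dark?
```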
@alvestrand (Contributor)

My immediate thought is that this is a case of "variation within a permitted range".

  • MediaCapabilities should show 10-30 (unless the drivers support downsampling, in which case it might be 0-30).
  • MediaSettings should return what the camera's drivers are currently giving. (I assume this is observable)
  • getUserMedia({video: { frameRate: { min: 20 }}}) should succeed if the camera's presently sending 30.
  • If the drivers don't support downsampling, getUserMedia({video: {frameRate: {max:20}}}) should fail if the camera's currently sending 30.

If min:20 has been set, and the camera drops to 10, the overconstrained event should fire.

If the application is actually capable of operating usefully at 10 fps, this example proves that it's a Bad Idea to specify a hard limit, which is why we have Ideal.

(At the moment, fitness distance is NOT re-evaluated when rates change, so having the fitness distance computed against what the camera is currently producing is a good fit.)
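
A sketch of the behavior described above, assuming the per-track overconstrained event from the draft of the time (its final shape was still under discussion):

```js
navigator.mediaDevices.getUserMedia({ video: { frameRate: { min: 20 } } })
  .then(stream => {
    const [track] = stream.getVideoTracks();
    // Under this reading: the call succeeds while the camera delivers 30 fps...
    track.onoverconstrained = () => {
      // ...and this fires if the lights go out and the camera drops to 10 fps.
      console.log('frameRate min: 20 is no longer satisfiable');
    };
  })
  .catch(err => console.log('rejected up front:', err.name));
```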

@jan-ivar (Member, Author)

> MediaCapabilities should show 10-30 (unless the drivers support downsampling, in which case it might be 0-30).

Agree.

> getUserMedia({video: { frameRate: { min: 20 }}}) should succeed if the camera's presently sending 30.

At the time gUM is called, the camera isn't necessarily sending anything, is it? Are you saying how the camera would perform under present lighting conditions is observable without turning on the camera (and the light)?

> MediaSettings should return what the camera's drivers are currently giving. (I assume this is observable)

Is a live measurement expected here (and if so, over what amount of time), or is there some internal driver attribute I'm missing? In this Firefox fiddle, which measures it in JS over time, the frame rate seems to fluctuate a bit.
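
The fiddle itself isn't reproduced here, but a rough equivalent of an in-page measurement might look like the sketch below; counting currentTime changes per requestAnimationFrame tick only approximates the delivery rate, which is one source of the fluctuation:

```js
const video = document.querySelector('video'); // a <video> element on the page
navigator.mediaDevices.getUserMedia({ video: true })
  .then(stream => {
    video.srcObject = stream;
    return video.play();
  })
  .then(() => {
    let frames = 0;
    let last = video.currentTime;
    let windowStart = performance.now();
    requestAnimationFrame(function sample(now) {
      if (video.currentTime !== last) { // a new frame has (probably) been presented
        frames++;
        last = video.currentTime;
      }
      if (now - windowStart >= 1000) {  // report roughly once per second
        console.log(`~${(frames * 1000 / (now - windowStart)).toFixed(1)} fps`);
        frames = 0;
        windowStart = now;
      }
      requestAnimationFrame(sample);
    });
  });
```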

@alvestrand (Contributor)

Not surprised you get fluctuations; in other experiments with JS timing I've observed 10-15 ms delay variation just from the simple "wait a bit" function. On my computer, the measurement is between 24.7 and 25.2 fps; if I turn off the light above my head, I get between 12 and 13.

If the drivers can't tell what the camera is currently sending, something sensible has to happen. If the app were to open the camera at the asked-for frame rate, then observe the actual frame rate and behave accordingly (as described above), I would call that sensible. Opening a variable-rate camera in a dark room with frameRate: { min: 30 } will cause an instant overconstrained event - but that's what you asked for, so you should get it.

@jan-ivar (Member, Author) commented Jul 3, 2015

That makes a lot of sense, but when I think about it, it doesn't seem very stable or deterministic.

> Opening a variable-rate camera in a dark room with frameRate: { min: 30 } will cause an instant overconstrained event - but that's what you asked for, so you should get it.

OK, but I didn't get what I wanted at all, and what would I do next? If I ask for frameRate: { min: 30 } again, won't the same thing happen? Or should the UA remember that that didn't work the last time and try other modes instead (even if the lighting conditions may have improved, which it cannot know)? Am I to experiment with lower widths and heights? This seems imprecise and unproductive.

The camera may have other lower-resolution modes that actually could do a minimum of 30 fps all the time, even in low light, but they may be overshadowed by this adaptive mode that's overselling its capabilities and therefore receives an overly optimistic fitness distance. If I used high ideal values for width and height, which is likely, then the lower resolutions won't stand a chance.

(If there are multiple cameras, I may also have bothered the user for the wrong one, and potentially have to bother them again, though I'm the first to admit that picking cameras based on frame rate is silly).

I think we have to try the opposite, to undersell capabilities. E.g. frameRate: { min: 30 } would only be satisfied by cameras that can sustain 30 fps all the time, regardless of lighting conditions, and not our OP camera.

To reach our OP camera I would have to use frameRate: { min: 10, ideal: 30, max: 30 }. Coupled with high ideal values for width and height, that should give me the right one.

This may seem extremely conservative, but it's more deterministic, and it doesn't have the problem of good modes being overshadowed by modes that under-deliver, or the problem of overconstrained firing in response to lighting changes.
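
As a sketch, the conservative request above might look like this; the 1920x1080 ideals merely stand in for "high ideal values for width and height", since the thread doesn't pin exact numbers:

```js
const stream = await navigator.mediaDevices.getUserMedia({
  video: {
    width:  { ideal: 1920 },  // illustrative values, not from the thread
    height: { ideal: 1080 },
    frameRate: { min: 10, ideal: 30, max: 30 } // only require what can be sustained
  }
});
```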

@jan-ivar (Member, Author)

TL;DR: I believe min and max must only consider frame rates that can be sustained, i.e. (constraint.min <= capability.min && capability.max <= constraint.max), or this won't work.

(where capability is a single considered camera mode, a.k.a. a settings dictionary in the spec.)
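
A minimal sketch of that rule, where mode stands for one candidate settings dictionary and the names are illustrative rather than spec text:

```js
// min/max only match a mode whose frameRate range lies entirely inside [min, max].
function satisfiesSustained(constraint, mode) {
  const min = constraint.min !== undefined ? constraint.min : -Infinity;
  const max = constraint.max !== undefined ? constraint.max : Infinity;
  return min <= mode.min && mode.max <= max;
}

// The OP camera's adaptive mode (10-30 fps depending on light):
satisfiesSustained({ min: 30 }, { min: 10, max: 30 });          // false
satisfiesSustained({ min: 10, max: 30 }, { min: 10, max: 30 }); // true
```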

@fluffy (Contributor) commented Aug 27, 2015

Most apps should probably not set a mandatory min frame rate. But if I am doing a machine vision application that requires over 100 fps and I set that in the constraints, I want the application to get an error / notification if the camera drops below 100 fps.
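
The shape of that request might be sketched as below; the handler name follows the 2015 draft, and stopProcessingAndWarnUser is a hypothetical app callback:

```js
navigator.mediaDevices.getUserMedia({ video: { frameRate: { min: 100 } } })
  .then(stream => {
    const [track] = stream.getVideoTracks();
    // Notify the app if the camera can no longer deliver 100 fps.
    track.onoverconstrained = () => stopProcessingAndWarnUser(); // hypothetical callback
  })
  .catch(err => console.log('no mode can promise 100 fps:', err.name));
```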

@jan-ivar (Member, Author)

@fluffy You get an error, and then what? Wait? I think you want a lower-res mode that can sustain 100 fps even in low light, one that won't fail on you.

What I'm trying to say is that it'll likely come down to a choice between a high-res mode that can do sunny-day 100 fps, and a lower-res one that can do 100 fps sustained even in low light. An important decision.

My problem with what Harald proposes is that it leaves no way to distinguish between these two modes (*), which means you'll always get the former and never the latter. You won't even know the second mode exists or is better fps-wise. That seems bad.

*) Unless one goes slumming for lower-resolution modes in hopes they'll fare better on fps, which would be a terrible API.

@alvestrand (Contributor)

We seem to have a real disagreement on what the right thing to do here is.
Taking it to the list.

@aboba (Contributor) commented Oct 22, 2015

#263 indicates that the overconstrained event can fire if the frameRate constraint cannot be satisfied. If that is undesirable, then the application developer should not set the constraint.

alvestrand assigned aboba and unassigned alvestrand on Nov 12, 2015
aboba added a commit that referenced this issue Nov 24, 2015
#193

In addition to adding text to illustrate frameRate constraints, this PR ensures correct spelling of "frameRate" throughout the document.
@Kamelia2000

Hello, I have two important questions. I would appreciate someone's help with this.
1. How is the input FPS calculated inside WebRTC?
2. How can I find the part of the source code related to the API calls for encoding?

Thanks,
Kamelia
