framebufferScaleFactor seems difficult to use correctly/consistently #349

Closed
klausw opened this issue Apr 25, 2018 · 11 comments
Comments

@klausw
Contributor

klausw commented Apr 25, 2018

In addition to being inconsistently documented (see issue #348), this setting also seems difficult for application developers to use correctly.

Currently, the application can choose to supply the value 0.0 to use the default scale factor, or it can supply a specific value to replace the default scale factor. However, as far as I can tell there's no way to tell what the default scale factor would have been.
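
For concreteness, here's a rough sketch of that choice, assuming the framebufferScaleFactor option is passed at XRWebGLLayer creation time:

// Sketch only: 0.0 asks for the implementation's default scale factor...
let defaultLayer = new XRWebGLLayer(session, gl, { framebufferScaleFactor: 0.0 });
// ...while any other value replaces the default, with no way to learn what it was.
let customLayer = new XRWebGLLayer(session, gl, { framebufferScaleFactor: 0.5 });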

Let's say the goal is for the browser to tune the default so that a moderately-complex application can hit the target framerate on a given device. The specific framebuffer scale needed for this will depend on the device characteristics and potentially also OS and browser version.

As an example, current Daydream headsets use a default framebufferScaleFactor of 0.5 on Android N, and 0.7 on Android O due to a more efficient render path there. Going forward, this may well be subdivided further, for example a Pixel 2 could use a default of 0.8 due to its fast GPU and comparatively low screen resolution, while a first-generation Pixel XL would use 0.6 due to a slower GPU and higher screen resolution.

By contrast, on a Windows headset, there's likely to be a system-provided recommended scale, where using a framebufferScaleFactor of 1.0 would mean to use this unchanged. An application with simple graphics may want to use a higher scale factor, but this doesn't seem to be supported by the current spec which assumes it's a value from 0 to 1.

How is a developer supposed to choose an appropriate setting for their application? In the current system, if the developer sets framebufferScaleFactor=0.6 on an Android device, this would be higher-than-default on some devices and lower-than-default on others, and that's unhelpful if the defaults were tuned to get approximately equivalent performance for typical content.

To work around this, the developer would either need to infer the default from experimentally created sessions, or maintain a database of device/OS/browser version to use for tuning. Both of these seem unpleasant.

I think it would be far more helpful and intuitive to supply a relative scale factor that defaults to 1.0 and that applies on top of the system-selected default framebuffer scale. That way, applications with simple graphics compared to a baseline "moderately-complex" one could use values greater than 1.0, or a smaller value for complex rendering.

A relative scale would also work well with lower-level tuning such as SteamVR's automatic resolution adjustment, where the default 1.0 value would be pre-tuned to match the current system's GPU performance.

The low-level framebufferScaleFactor could potentially still be exposed to applications if it's considered helpful for specific use cases, but I think a relative scale would work better.
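
To illustrate (with a hypothetical relativeScaleFactor option that is not part of the current spec), the proposal would read roughly like this:

// Hypothetical option name; 1.0 would mean "whatever the system chose as its default".
// Simple content could ask for more resolution than the default...
let simpleLayer = new XRWebGLLayer(session, gl, { relativeScaleFactor: 1.25 });
// ...and complex content could ask for less, without ever knowing the absolute value.
let complexLayer = new XRWebGLLayer(session, gl, { relativeScaleFactor: 0.75 });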

@klausw
Contributor Author

klausw commented Apr 25, 2018

Forgot to add, I don't think it's a good solution to just use a high framebufferScaleFactor and tune performance via requestViewportScaling. Unused pixels still cost memory, and depending on the render pipeline there is also a performance cost for them. That's especially an issue if the largest-supported framebuffer size is much larger than a typical size.

Also, viewport scaling doesn't solve the underlying issue of not knowing how a given app-selected framebuffer scale relates to the default framebuffer scale, so picking a big-enough framebuffer would still be guesswork.

@sjpt

sjpt commented Apr 25, 2018

I'm pretty sure I agree with klausw, but we have used a small range of devices (almost always Vive, Windows, and a 1070 or 1080), so I am not sure how it would play out in the wider world.

We have been using a relative scale similar to what Klaus is suggesting for some time (implemented in a tweak to three.js for us). It worked well until SteamVR added its automatic resolution adjustment, which broke our previously appropriate value. However, it seems that with a change we have made to the relative scale value, the SteamVR change should make it work over a wider range of devices. (Our value needs to be below 1 as we have a very complex scene.)

I am sure our relative value will need to change as time moves forward. In the past we have been able to increase it a couple of times: (a) when we improved our rendering efficiency, and (b) when improved asynchronous reprojection in SteamVR allowed us to get away with a lower frame rate. I guess that (a) would need a change to our relative scale, but that the implementers of SteamVR would change recommended values if they made improvements similar to (b).

@klausw
Contributor Author

klausw commented Apr 26, 2018

Based on Brandon's offline comments, it sounds as if the framebufferScaleFactor is supposed to be 1.0 for 1:1 pixel mapping, with both larger and smaller values being possible.

If there were a way to retrieve the default/recommended framebufferScaleFactor before starting presentation, I think that would make it possible for applications to apply a relative scale to that.

For consistency, we'd also need to make sure that, for example, OpenVR returns an appropriate value that matches the current supersampling setting if it defaults to a greater-than-1:1 pixel ratio. There's a SupersampleScale_Float mentioned in the API docs, but they don't really explain how it works.

@sjpt

sjpt commented Apr 27, 2018

Comment based on looking into this a bit more.
Looking in the WebXR spec I can't find any query capability on the device at all (e.g. even an equivalent to WebVR's device.getEyeParameters, but preferably with more information). Am I missing something here???? Poking in a sample application (https://immersive-web.github.io/webxr-samples/room-scale.html) didn't tell me much either, but it seemed to confirm there is no query option on the device.

Sorry if I am being silly here. If so, please tell me. If not I'll make this a new thread.

More direct response to Klaus's comment.

Even 1::1 pixel mapping is slightly unclear, because of the warping applied. Does this mean the ratio between the application framebuffer and the device framebuffer (e.g. 1.0 gives 1200x1080 for the old Vive), or does it relate to the framebuffer size needed to ensure that the most distorted pixel maps to a single pixel on the device (e.g. 1680x1512 for the old Vive)? I assume the latter, as that seems more in keeping with what we have seen in the SteamVR implementation to date, but this should be made explicit.

I don't think exactly what the application controls is important as long as (a) it is given or can deduce all possibly relevant information available to make its decision and (b) it is given sufficient control to inform the WebXR layer of that decision.  

This is at least some of the information an application may need to know (from the ?missing? query at the top of this post):
* name of the device (manufacturer, ...?)
* real resolution of the device
* resolution needed to ensure most distorted pixel maps 1::1
* device fps (ideal, maximum, minimum, ...???)
* recommended resolution
* recommended (default) scale factor


As Klaus has said, an application setting a scaling factor relative to 1::1 (whichever 1::1 we mean) is fine as long as the recommended (default) scale factor is available. I think ALL of the information above should consistently be made available through the interface, because there will be application-specific details that could override 'default' decisions. E.g. text requirements may mean the application is willing to accept more jitter for higher resolution; or, conversely, being informed of the resolution will help the application decide on the most appropriate font sizes.

@toji
Member

toji commented Apr 27, 2018

@sjpt: You're not wrong. We've been aggressively scaling back on how much information about the headset we expose. There are a few reasons for this (reducing fingerprinting, reducing API complexity, etc.), but the big motivation is to prevent applications from "misbehaving" by simply not exposing data that isn't necessary to use the device but would be easy to abuse.

We can expose the device name, but what will you do with that info? You can show it to the user, but they already know what device they have. You can use it for fingerprinting or stats gathering, but we explicitly DON'T want that. You can use it to selectively exclude devices from working with your app, but again, that's something we don't want. We'd rather you make a best effort to work everywhere, and when exclusion is necessary, it's done based on capabilities, not name strings. (This is something we saw happening in real-world apps with WebVR.)

You can make similar arguments for most of the items in your list: We'd prefer not to report resolutions up front because we're actively preventing you from allocating buffers directly. We saw it done wrong VERY frequently with WebVR. So now WebXR handles it behind the scenes, and gives users a harder-to-screw-up quality/performance knob in the form of a scale. (We do tell you the allocated buffer size after the fact, so you can infer that if you request a 1.0 scale and it gives you back a 3000x2000 buffer, that's the 'native' resolution.)
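
As a rough sketch of that inference (assuming the layer exposes framebufferWidth/framebufferHeight as in the current draft):

// Request a 1.0 scale, then read back what was actually allocated;
// whatever comes back can be treated as the 'native' resolution.
let probeLayer = new XRWebGLLayer(session, gl, { framebufferScaleFactor: 1.0 });
console.log(`Native-ish buffer: ${probeLayer.framebufferWidth}x${probeLayer.framebufferHeight}`);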

That said, I'm happy to consider any case where a missing piece of data prevents reasonable API use, like the difficulty using framebufferScaleFactor that Klaus reported in this issue. In that regard, I think that exposing the default framebuffer scale factor prior to allocating a layer sounds reasonable. If you have an app that you know is generally fillrate-bound and want to turn down the default buffer size a bit, that would be hard to do accurately without it. You could pick an arbitrary scale (0.8), but that would scale down on most desktop systems while actually scaling up a bit on Daydream as we have it configured now. let scaleFactor = defaultFramebufferScaleFactor * 0.8; would be much more productive.

In terms of how to expose it, sticking this property on the XRSession would be the easy route, but we'd want it to be defaultWebGLLayerFramebufferScaleFactor (ugh) at that point, because it may not apply to future layers. I'd like it to be attached to the XRWebGLLayer more directly so the data is better localized, but the current design is that the layer's scale factor is set at creation time, which is a property I'd like to preserve, and I'm not sure what web ergonomics norms say about "static functions" like XRWebGLLayer.getDefaultFramebufferScaleFactor(session). I'll look into that.

Also, for personal reference (or maybe Bill's): The property Klaus mentioned appears to be queried like so:

#include <openvr.h>
// Query SteamVR's global supersampling setting (exact GetFloat signature may vary by OpenVR version).
vr::IVRSettings* settings = vr::VRSettings();
float supersampleScale = settings->GetFloat(vr::k_pch_SteamVR_Section, vr::k_pch_SteamVR_SupersampleScale_Float, 1.0f /* default */);

I still don't see anything that describes what, exactly, that value means. It doesn't seem unreasonable that it represents the amount of supersampling the system will apply, though. Guessing that if we divide the buffer resolution that OpenVR gives back by that value we'll get the "native" resolution.

@toji
Member

toji commented Apr 30, 2018

Checked with our local API ergonomics guru, @bfgeek, and it looks like we can use static interface functions to achieve this, similar to Notification.requestPermission().

So my proposal for addressing this issue would be to extend the XRWebGLLayer like so:

partial interface XRWebGLLayer {
  static double getDefaultFramebufferScaleFactor(XRSession session);
};

Used like so:

function createSlightlySmallerLayer(session) {
  let scaleFactor = XRWebGLLayer.getDefaultFramebufferScaleFactor(session);
  return new XRWebGLLayer(session, gl, { framebufferScaleFactor: scaleFactor * 0.8 });
}

Thoughts?

@klausw
Contributor Author

klausw commented Apr 30, 2018

I think this sounds like a good solution. The fingerprinting issue seems minimal as long as the default scale is reported with reasonably coarse granularity. (If for example it were derived from benchmarking that depends on individual system characteristics, it could be rounded to an even multiple of 0.05.)
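
For example (illustrative only, where measuredScale stands in for whatever benchmark-derived value the implementation computed):

// Round a measured scale to the nearest 0.05 so it doesn't leak fine-grained system details.
let reportedScale = Math.round(measuredScale / 0.05) * 0.05;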

@toji
Member

toji commented May 1, 2018

I see fingerprinting as something of a non-issue here for a few reasons:

  • You'll need a valid session to query this value.
  • Magic Window sessions will generally use a default scale factor of 1.0 regardless of device.
  • For anything more interesting there's going to be at least a user gesture barrier.

@Artyom17

Artyom17 commented May 1, 2018

This solution sounds pretty good to me. Another issue we touched on during the WebXR call is clamping: should it be done by each implementation implicitly? Like, if I request scaleFactor = 10 and the browser knows it is way too high for this HW, should it just silently use the implicit max value (let's say 1.5) instead?

@toji
Member

toji commented May 15, 2018

Wanted to make sure I left some notes on here to cover what we discussed on the last call.

Microsoft raised the concern that having 1.0 == 1:1 native pixels may be untenable as displays get higher resolution, because it could end up that naively rendering at 1:1 resolution would require advanced techniques like foveated rendering to be at all performant. The concern is that if developers just slap a 1.0 in as the scale, it may work for their current hardware but fail down the road.

The alternative suggestion was to make 1.0 the "recommended" resolution in all cases, even when it's not 1:1, and have a way to query what the scale should be to get a 1:1 ratio. This does a couple of things: it makes it easy for developers to do minor tweaks up and down in quality (1.1 is slightly scaled up, 0.9 is slightly scaled down, no matter what), and it makes developers take an extra step if they want to blindly slap the full native res in there, which acts as a very light deterrent.

After giving this some thought since the previous call, I feel this is a good path forward: it's more predictable for users, easier to cleanly document, and offers more flexibility to implementations down the road. I'll put together a pull request to make the change and give us a chance to comment on what the actual API would look like.
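
A hypothetical sketch of what that model might look like (the actual naming is deferred to the pull request; getNativeFramebufferScaleFactor here is a placeholder):

// 1.0 always means the recommended resolution for this system...
let recommendedLayer = new XRWebGLLayer(session, gl, { framebufferScaleFactor: 1.0 });
// ...and a 1:1 pixel ratio has to be asked for explicitly via a query.
let nativeScale = XRWebGLLayer.getNativeFramebufferScaleFactor(session);  // placeholder name
let nativeResLayer = new XRWebGLLayer(session, gl, { framebufferScaleFactor: nativeScale });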

@toji
Member

toji commented May 18, 2018

Should be resolved by #353.

Also, since I failed to answer the question @Artyom17 asked previously: Yes, clamping should be done by each implementation implicitly, based on whatever metrics are appropriate for the device. Clamping to something like 1.5 sounds totally reasonable. I could also imagine some hardware may simply not allow you to change the framebuffer scale for whatever reason, so clamping to a min and max of 1.0 would be a valid thing to do as well.

@toji closed this as completed May 18, 2018