framebufferScaleFactor seems difficult to use correctly/consistently #349
Comments
Forgot to add, I don't think it's a good solution to just use a high framebufferScaleFactor and tune performance via requestViewportScaling. Unused pixels still cost memory, and depending on the render pipeline there is also a performance cost for them. That's especially an issue if the largest-supported framebuffer size is much larger than a typical size. Also, viewport scaling doesn't solve the underlying issue of not knowing how a given app-selected framebuffer scale relates to the default framebuffer scale, so picking a big-enough framebuffer would still be guesswork.
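For concreteness, a minimal sketch of the pattern being argued against here, assuming the requestViewportScaling proposal referenced above takes a single relative scale on the layer (the exact signature isn't pinned down in this thread):

```js
// Allocate a deliberately oversized framebuffer, then tune per-frame cost via
// viewport scaling. The unused pixels still cost memory (and, in some render
// pipelines, time), and the 2.0 below is pure guesswork because the app can't
// see how it relates to the device's default scale.
let layer = new XRWebGLLayer(session, gl, { framebufferScaleFactor: 2.0 });

// Later, when frames start dropping, render into only part of the buffer.
// (Method and signature assumed from the proposal mentioned in the comment.)
layer.requestViewportScaling(0.5);
```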
I'm pretty sure I agree with klausw; but we have used a small range of devices (almost always Vive, Windows, and a 1070 or 1080) so I am not sure how it would play out in the wider world. We have been using a relative scale similar to what Klaus is suggesting for some time (implemented in a tweak to three.js for us). It worked well until SteamVR added the automatic resolution adjustment; that broke the appropriate value. However, it seems that with a change we have made to our relative scale value, the SteamVR change should make it work over a wider range of devices. (Our value needs to be below 1 as we have a very complex scene.) I am sure our relative value will need to change as time moves forward. In the past we have been able to increase it a couple of times: (a) when we improved our rendering efficiency, and (b) when improved asynchronous reprojection in SteamVR allowed us to get away with a lower frame rate. I guess that (a) would still need a change to our relative scale, but that the implementers of SteamVR would change recommended values if they made improvements similar to (b).
Based on Brandon's offline comments, it sounds as if the framebufferScaleFactor is supposed to be 1.0 for 1:1 pixel mapping, with both larger and smaller values being possible. If there were a way to retrieve the default/recommended framebufferScaleFactor before starting presentation, I think that would make it possible for applications to apply a relative scale to that. For consistency, we'd also need to make sure that, for example, OpenVR returns an appropriate value that matches the current supersampling setting if it's defaulting to a greater-than-1:1 pixel ratio. There's a SupersampleScale_Float mentioned in the API docs, but they don't really explain how it works.
Comment based on looking into this a bit more. Sorry if I am being silly here. If so, please tell me. If not, I'll make this a new thread.
@sjpt: You're not wrong. We've been aggressively scaling back on how much information about the headset we expose. There are a few reasons for this (reducing fingerprinting, reducing API complexity, etc.) but the big motivation is to prevent applications from "misbehaving" by simply not exposing data that isn't necessary to use the device but would be easy to abuse. We can expose the device name, but what will you do with that info? You can show it to the user, but they already know what device they have. You can use it for fingerprinting or stats gathering, but we explicitly DON'T want that. You can use it to selectively exclude devices from working with your app, but again that's something we don't want. We'd rather you make a best effort to work everywhere, and when exclusion is necessary it's done based on capabilities, not name strings. (This is something we saw happening in real-world apps with WebVR.)

You can make similar arguments for most of the items in your list: we'd prefer not to report resolutions up front because we're actively preventing you from allocating buffers directly. We saw it done wrong VERY frequently with WebVR. So now WebXR handles it behind the scenes and gives users a harder-to-screw-up quality/performance knob in the form of a scale. (We do tell you the allocated buffer size after the fact, so you can infer that if you request a 1.0 scale and it gives you back a 3000x2000 buffer, that's the 'native' resolution.)

That said, I'm happy to consider any case where a missing piece of data prevents reasonable API use, like the difficulty using the framebufferScaleFactor described in this issue. In terms of how to expose it, sticking this property on the XRWebGLLayer seems reasonable.

Also, for personal reference (or maybe Bill's): the property Klaus mentioned appears to be queried like so:

```cpp
vr::IVRSettings* settings = vr::VRSettings();
float supersampleScale = settings->GetFloat(k_pch_SteamVR_Section, k_pch_SteamVR_SupersampleScale_Float, 1.0 /* Default */);
```

I still don't see anything that describes what, exactly, that value means. It doesn't seem unreasonable that it represents the amount of supersampling the system will apply, though. Guessing that if we divide the buffer resolution that OpenVR gives back by that value we'll get the "native" resolution.
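As a sketch of the "infer it after the fact" approach mentioned in the parenthetical above, assuming the layer reports its allocated size via framebufferWidth/framebufferHeight:

```js
// Under the "1.0 means 1:1" interpretation discussed above, requesting a 1.0
// scale and reading back the allocated size reveals the 'native' resolution.
let probeLayer = new XRWebGLLayer(session, gl, { framebufferScaleFactor: 1.0 });
console.log('Inferred native buffer size: ' +
            probeLayer.framebufferWidth + 'x' + probeLayer.framebufferHeight);
```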
Checked with our local API ergonomics guru, @bfgeek, and it looks like we can use static interface functions to achieve this. So my proposal for addressing this issue would be to extend XRWebGLLayer like so:

```webidl
partial interface XRWebGLLayer {
  static double getDefaultFramebufferScaleFactor(XRSession session);
};
```

Used like so:

```js
function createSlightlySmallerLayer(session) {
  let scaleFactor = XRWebGLLayer.getDefaultFramebufferScaleFactor(session);
  return new XRWebGLLayer(session, gl, { framebufferScaleFactor: scaleFactor * 0.8 });
}
```

Thoughts?
I think this sounds like a good solution. The fingerprinting issue seems minimal as long as the default scale is reported with reasonably coarse granularity. (If, for example, it were derived from benchmarking that depends on individual system characteristics, it could be rounded to a multiple of 0.05.)
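As a minimal sketch of that kind of coarsening (purely illustrative; neither the method nor the 0.05 step is mandated anywhere):

```js
// Round a benchmark-derived scale to the nearest 0.05 before exposing it,
// so the reported default carries little fingerprintable precision.
function coarsenScale(rawScale) {
  return Math.round(rawScale * 20) / 20;
}

coarsenScale(0.63); // -> 0.65
```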
I see fingerprinting as something of a non-issue here for a few reasons:
This solution sounds pretty good to me. Another issue we've touched on in the WebXR call is clamping: should it be done by each implementation implicitly? Like, if I request scaleFactor = 10 and the browser knows it is way too high for this hardware, should it just silently use the implicit max value (let's say 1.5) instead?
Wanted to make sure I left some notes on here to cover what we discussed on the last call. Microsoft raised the concern that having 1.0 == 1:1 native pixels may be untenable as displays get higher resolution, because naively rendering at 1:1 resolution could end up requiring advanced techniques like foveated rendering to be at all performant. The concern is that if developers just slap a 1.0 in as the scale, it may work for their current hardware but fail down the road.

The alternative suggestion was to make 1.0 the "recommended" resolution in all cases, even when it's not 1:1, and have a way to query what the scale should be to get a 1:1 ratio. This does a couple of things: it makes it easy for developers to do minor tweaks up and down in quality (1.1 is slightly scaled up, 0.9 is slightly scaled down, no matter what), and it makes developers take an extra step if they want to blindly slap the full native res in there, which acts as a very light deterrent.

After giving this some thought since the previous call, I feel like this is a good path forward: it's more predictable for users, easier to cleanly document, and offers more flexibility to implementations down the road. I'll put together a pull request to make the change and give us a chance to comment on what the actual API would look like.
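A rough sketch of how that alternative model might look to an application (the query's name here is hypothetical, chosen only to illustrate the idea of asking for the 1:1 scale):

```js
// 1.0 always means the system-recommended size for this device.
let recommendedLayer = new XRWebGLLayer(session, gl, { framebufferScaleFactor: 1.0 });

// Getting true 1:1 native pixels requires an explicit query first,
// which acts as the "extra step" deterrent described above.
let nativeScale = XRWebGLLayer.getNativeFramebufferScaleFactor(session);
let nativeLayer = new XRWebGLLayer(session, gl, { framebufferScaleFactor: nativeScale });
```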
Should be resolved by #353. Also, since I failed to answer the question @Artyom17 asked previously: yes, clamping should be done by each implementation implicitly, based on whatever metrics are appropriate for the device. Clamping to something like 1.5 sounds totally reasonable. I could also imagine some hardware may simply not allow you to change the framebuffer scale for whatever reason, so clamping to a min and max of 1.0 would be a valid thing to do as well.
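A minimal sketch of the implicit clamping described here, with the bounds as placeholders (a real implementation would derive them from the device and compositor, and might pin both to 1.0 on hardware that can't rescale at all):

```js
// Clamp an app-requested framebufferScaleFactor to device-appropriate bounds
// before allocating the buffer; the app's request is treated as a hint.
function clampFramebufferScale(requested, minScale = 0.2, maxScale = 1.5) {
  return Math.min(Math.max(requested, minScale), maxScale);
}

clampFramebufferScale(10);   // -> 1.5
clampFramebufferScale(0.05); // -> 0.2
```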
In addition to being inconsistently documented (see issue #348), I think this setting also seems difficult for application developers to use correctly.
Currently, the application can choose to supply the value 0.0 to use the default scale factor, or it can supply a specific value to replace the default scale factor. However, as far as I can tell there's no way to tell what the default scale factor would have been.
Let's say the goal is for the browser to tune the default so that a moderately-complex application can hit the target framerate on a given device. The specific framebuffer scale needed for this will depend on the device characteristics and potentially also OS and browser version.
As an example, current Daydream headsets use a default framebufferScaleFactor of 0.5 on Android N, and 0.7 on Android O due to a more efficient render path there. Going forward, this may well be subdivided further, for example a Pixel 2 could use a default of 0.8 due to its fast GPU and comparatively low screen resolution, while a first-generation Pixel XL would use 0.6 due to a slower GPU and higher screen resolution.
By contrast, on a Windows headset, there's likely to be a system-provided recommended scale, where using a framebufferScaleFactor of 1.0 would mean using it unchanged. An application with simple graphics may want to use a higher scale factor, but this doesn't seem to be supported by the current spec, which assumes the value is in the range 0 to 1.
How is a developer supposed to choose an appropriate setting for their application? In the current system, if the developer sets framebufferScaleFactor=0.6 on an Android device, this would be higher-than-default on some devices and lower-than-default on others, and that's unhelpful if the defaults were tuned to get approximately equivalent performance for typical content.
To work around this, the developer would either need to infer the default from experimentally created sessions, or maintain a database of device/OS/browser version to use for tuning. Both of these seem unpleasant.
I think it would be far more helpful and intuitive to supply a relative scale factor that defaults to 1.0 and applies on top of the system-selected default framebuffer scale. That way, applications whose graphics are simple compared to a baseline "moderately complex" one could use values greater than 1.0, while applications with more complex rendering could use smaller values.
A relative scale would also work well with lower-level tuning such as SteamVR's automatic resolution adjustment, where the default 1.0 value would be pre-tuned to match the current system's GPU performance.
The low-level framebufferScaleFactor could potentially still be exposed to applications if it's considered helpful for specific use cases, but I think a relative scale would work better.
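A minimal sketch of how the relative-scale idea above might look from an application's point of view (the option name is hypothetical and used only for illustration; it is not part of the spec being discussed):

```js
// 1.0 means "whatever the system picked as its default for typical content";
// a simple scene asks for a bit more resolution relative to that baseline.
let layer = new XRWebGLLayer(session, gl, { relativeFramebufferScale: 1.25 });
```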