
Regarding resolutions #39

Closed
KuukoShan opened this issue Sep 13, 2019 · 9 comments

KuukoShan commented Sep 13, 2019

I have a small question, and this may not be the place to ask it, but I have no idea where else to ask (I'm still new to GitHub).

How does ALVR handle resolution on both sides, PC and Quest?

I assume that if I set the resolution to 100%, the PC renders the games at the same resolution as the Quest, and that's fine.

When I run latency tests I have used 50% and even 25% resolution, and I noticed not just less latency but also an increase in the games' own FPS, so I assume the PC is also rendering at 50% or 25% resolution.

But I have also noticed that when I do this, the Quest's internal resolution seems to change too. The little white room in the Quest app (the waiting-to-connect room) also changes resolution, which makes me think the Quest's internal resolution is changing as well: not the screen resolution itself, but maybe the rendering one?

I am wondering about this in the FFR scenario.
Let's say a portion of the image is 1024px at 100% resolution; if I set the resolution to 50%, that same portion is now 512px (I am just using example numbers; it doesn't matter if they don't match :P). If the resolution changed on both the video and the Quest rendering it wouldn't matter, because they would match. But with FFR, if I play at 50%, that same portion would still be 1024px thanks to FFR; yet if the Quest still renders internally at 50% rather than 100% and stretches the image, we would again be rendering that 1024px portion at only 512px.

I suppose all this has already been thought through, but I'm asking out of curiosity and a wish to keep learning.

Also, how exactly does FFR work?
Where do the savings happen? In the encoding itself, saving encoding time but sending at the same resolution as if FFR were disabled, or does it actually record at a lower resolution, making it not just faster to encode but also requiring less bitrate (like setting it at 75% but looking similar to 100%, for example)?

If the video is rendered directly at a lower resolution, would we need to set a different resolution to take advantage of FFR, or does this happen automatically? For example, would we need to set it to 75% so we save performance while it still looks like 100%, or is this automatic, so that even at 100% the video is encoded at a lower resolution? I ask because, if so, it would be nice when enabling FFR to see the equivalent final video resolution; since there are settings to adjust there, it would be helpful to see how they affect the final video resolution (if they do at all).

Of course, if the video is not rendered at a different resolution and the savings are only in the encoding itself, then the last paragraph is moot :P

I couldn't try the latest build yet, so if any of this is obvious from the new settings there (if any), I'm sorry for the redundant questions. Regards ♥!

zarik5 commented Sep 13, 2019

> When I run latency tests I have used 50% and even 25% resolution, and I noticed not just less latency but also an increase in the games' own FPS, so I assume the PC is also rendering at 50% or 25% resolution.

You're almost right. In reality we can only tell SteamVR what resolution we want, but ultimately each game decides its own rendering resolution. We then rescale the image to the resolution we want before transmitting it to the client.
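As a rough sketch of that flow (the constants and names below are illustrative assumptions, not ALVR's actual code): the driver derives a recommended per-eye buffer from the resolution setting, the game may or may not honor it, and the streamer rescales whatever is submitted to this target before encoding.

```python
# Hypothetical sketch: map a resolution-slider percentage to the per-eye
# target a driver could advertise to SteamVR. Quest 1 panels are
# 1440x1600 per eye; the function name is illustrative, not ALVR's code.

QUEST_EYE_WIDTH = 1440
QUEST_EYE_HEIGHT = 1600

def recommended_eye_buffer(resolution_percent):
    """Scale the per-eye buffer by the user's resolution setting.

    SteamVR treats this as a recommendation; each game may still pick its
    own render resolution, and the streamer rescales the submitted frame
    to this size before encoding.
    """
    scale = resolution_percent / 100.0
    # Round down to even dimensions, which video encoders generally require.
    width = int(QUEST_EYE_WIDTH * scale) // 2 * 2
    height = int(QUEST_EYE_HEIGHT * scale) // 2 * 2
    return width, height

print(recommended_eye_buffer(100))  # (1440, 1600)
print(recommended_eye_buffer(50))   # (720, 800)
```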

> But I have also noticed that when I do this, the Quest's internal resolution seems to change too. The little white room in the Quest app (the waiting-to-connect room) also changes resolution, which makes me think the Quest's internal resolution is changing as well: not the screen resolution itself, but maybe the rendering one?

The "connect room" changing resolution is just unwanted behaviour, but there is no reason to fix it for now.

> I am wondering about this in the FFR scenario.
> Let's say a portion of the image is 1024px at 100% resolution; if I set the resolution to 50%, that same portion is now 512px (I am just using example numbers; it doesn't matter if they don't match :P). If the resolution changed on both the video and the Quest rendering it wouldn't matter, because they would match. But with FFR, if I play at 50%, that same portion would still be 1024px thanks to FFR; yet if the Quest still renders internally at 50% rather than 100% and stretches the image, we would again be rendering that 1024px portion at only 512px.

I implemented FFR so that I start from an image at the resolution specified in the user interface (let's say 1024px wide). FFR then distorts the image and scales it down (to, let's say, 800px) so that the center of the image remains at the same scale while the periphery is squished. The image is then encoded, transmitted, and decoded. Finally it is undistorted from 800px back to 1024px and displayed. So if you set the resolution to 75%, on the headset the center of the image looks like 75% resolution and the rest looks lower.
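That distort, transmit, undistort round trip can be sketched in one dimension. This is a toy illustration only: ALVR's real shader uses a smooth warp, while this version uses a piecewise-linear profile (a 1:1 central band, periphery squished by a constant factor) so the inverse is easy to verify.

```python
# Toy 1-D foveation warp (illustrative, not ALVR's shader): the central
# band keeps its pixel density, the periphery is compressed by a constant
# factor, and unwarp() restores the original coordinate for display.

def warp(x, width, center_width, periphery_scale):
    """Map a source coordinate x in [0, width] to the compressed image."""
    lo = (width - center_width) / 2   # start of the full-resolution band
    hi = lo + center_width            # end of the full-resolution band
    if x < lo:
        return x * periphery_scale
    if x < hi:
        return lo * periphery_scale + (x - lo)
    return lo * periphery_scale + center_width + (x - hi) * periphery_scale

def unwarp(y, width, center_width, periphery_scale):
    """Inverse of warp(): recover the source coordinate."""
    lo = (width - center_width) / 2
    lo_c = lo * periphery_scale       # compressed position of the band start
    hi_c = lo_c + center_width
    if y < lo_c:
        return y / periphery_scale
    if y < hi_c:
        return lo + (y - lo_c)
    return lo + center_width + (y - hi_c) / periphery_scale

# A 1024px frame with a 512px full-resolution band and the periphery
# squished to half density compresses to 768px, and the mapping inverts.
print(warp(1024, 1024, 512, 0.5))                          # 768.0
print(unwarp(warp(700, 1024, 512, 0.5), 1024, 512, 0.5))   # 700.0
```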

> Where do the savings happen?

FFR in ALVR is useful only for reducing encoding/decoding and transmission latency. It cannot be used to save resources when rendering on the PC, because rendering is done by the games; it's up to each game to implement FFR to reduce the load on the PC.

> If the video is rendered directly at a lower resolution, would we need to set a different resolution to take advantage of FFR, or does this happen automatically?

Yes. 100% resolution becomes ~80% for encoding, transmission and decoding, then goes back to 100% when rendered on the client.

KuukoShan commented:

Oh, thanks a lot for the explanation, it was very interesting.
One more thing, though.
It seems that every headset has a different distortion, right? So projecting an image directly from a Vive wouldn't quite match, because their lenses, screens and FOV aren't exactly the same. I assume this is similar to using phones as VR headsets and having to scan a QR code so the app applies the proper barrel distortion for each headset.

From this I assume that ALVR emulates a headset with the same kind of distortion as the Quest, so SteamVR renders frames properly?

With this in mind, and barrel distortion being somewhat similar to the distortion your FFR applies, couldn't we just "fake" a kind of distortion that would benefit quality directly in SteamVR?

What I mean is: ALVR creates a fake headset profile with an intense barrel distortion, similar to how FFR works, but applied not to the video but to the game itself. I know we can't add FFR to games, but we can (or might be able to) create a similar effect, just as if a headset required that kind of distortion. The headset would then revert the distortion to fit the Quest's required distortion. What we would get is basically a sort of FFR in all games. It would increase performance in the PC games directly, as well as final video quality. Instead of rendering a big image, distorting it, scaling it down, transmitting it and reverting it back, we could render directly at a lower resolution, keep quality, and skip a few steps. The Quest would still need to undo the distortion and fit the video. With the GPU relieved by rendering and encoding at a lower resolution, performance improves a lot.

I am just assuming that we can apply any image distortion to SteamVR, given the number of mixed reality headsets that may each use a different one, but I may be wrong and it may be completely impossible. This also wouldn't matter much to people with hardware powerful enough to play and stream without loss, but with any mid-range GPU there is a clear performance hit when it needs to stream while also keeping the game running smoothly.
Thoughts on this? I hope I am not bothering you, but this seems like an extremely interesting subject to me ☺.

zarik5 commented Sep 14, 2019

Your idea is great, but the OpenVR API (used by SteamVR) does not allow the driver to tell games which distortion to use; they must always submit undistorted frames, just like the ones you would watch on a flat monitor (obviously one for each eye). That is because, normally, before being distorted to compensate for the lens distortion, the images sent by the games are sort of rotated in 3D to compensate for head rotation using the latest sensor data (Oculus calls this Asynchronous Time Warp, ATW), so a resampling step is always needed. The same applies to the Oculus mobile API: it accepts only undistorted frames.
But as you said, letting the games render with a predetermined distortion could lower the rendering requirements; the compositor could then just undistort the image before doing its usual business, and if it is all done in one rendering pass there would be no extra resampling artifacts (this on the client side; on the server side we control the compositor, so we could just skip any image processing).

Searching for how the Oculus runtime distorts the submitted frames without introducing artifacts, I found this: https://en.wikipedia.org/wiki/Lanczos_resampling, which has apparently long been used in media players and image editors; I could use it to fix the current FFR implementation. Why aren't they teaching me this at university??
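For reference, the Lanczos kernel from that article is a windowed sinc, and a minimal 1-D resampler built on it looks roughly like this (a sketch following the textbook definition, not code from ALVR or the Oculus runtime):

```python
import math

# Lanczos kernel: sinc(x) * sinc(x / a) for |x| < a, zero outside,
# written out with sin() since sinc(x) = sin(pi*x) / (pi*x).
def lanczos_kernel(x, a=3):
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def lanczos_resample(samples, new_len, a=3):
    """Resample a 1-D signal to new_len points with a Lanczos filter."""
    scale = len(samples) / new_len
    out = []
    for i in range(new_len):
        src = (i + 0.5) * scale - 0.5          # source position of output i
        first = math.floor(src) - a + 1
        acc = weight_sum = 0.0
        for j in range(first, first + 2 * a):  # 2a taps around src
            if 0 <= j < len(samples):
                w = lanczos_kernel(src - j, a)
                acc += samples[j] * w
                weight_sum += w
        # Normalize so partial windows at the edges keep the right level.
        out.append(acc / weight_sum if weight_sum else 0.0)
    return out

# Downscaling a constant signal leaves it constant; on real images the
# kernel's negative lobes are what preserve apparent sharpness.
print(lanczos_resample([1.0] * 16, 8))
```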

KuukoShan commented:

So I just suggest things and you show me some complex math; you are really mean, haha.

Jokes aside, I am not sure whether my idea was possible or just gave you an idea for improving FFR.

I have used Lanczos in editing and as a filter in offline 3D apps, and it gives very blurry results. I am not sure it would help fix FFR artifacts, because I tried and I see no artifacts at all. Yes, cranking up the FOV strength you can see the edges at lower resolution (that's the point, after all), but it doesn't look bad. Are those the artifacts you are referring to? Blurring them out with Lanczos would remove the pixelated effect, but I'm not sure being blurry would actually be better; it might even take away more detail.

BTW, I tested different FFR configs and it's really an awesome addition. It not only decreases delay, it also greatly reduces the GPU cost of encoding, leaving more room for the games themselves. I went to a rather slow world in VRChat: from 45 fps at 75% resolution without FFR, it went to 73 fps (probably more, but that's the cap) at 100% resolution with FFR at 4. Yes, 4 is quite aggressive, but honestly not that noticeable compared with the performance gains. I went from 10 fps in crowded areas with mirrors to 30 fps thanks to FFR. Well worth it; it makes the game perfectly playable.

I should add that my GPU is not the newest, so others may get even bigger gains at smaller costs. I run a 780 Ti which, while still quite powerful, isn't really made for high resolutions. Funnily enough, it performs a bit better than a 1060 3GB playing VRChat natively on a CV1, which has even less resolution than the Quest and doesn't need to spend performance on streaming (both computers have the same hardware except the GPU). So thanks a lot, guys, for making all this possible :D

zarik5 commented Sep 14, 2019

> Jokes aside, I am not sure whether my idea was possible or just gave you an idea for improving FFR.

Sadly no, what I described is not possible (AFAIK) on any VR platform.

By fixing FFR I meant reducing the slight blur it causes in the center of the screen. It's a minor thing.
I'm happy that my addition helped someone!

KuukoShan commented:

Ah yes, I noticed some blur in the center of the screen, but I wasn't completely sure it wasn't just my perception after more than 50 tests changing stuff all around.

If that can be improved it would be killer. Thank you so much!

KuukoShan commented:

> By fixing FFR I meant reducing the slight blur it causes in the center of the screen. It's a minor thing.

I noticed that, looking at that shader example, the center of the image gets "bigger" while the area around it gets smaller, so you can save video resolution and later revert the distortion on the client side.

That's all fine, but isn't that the cause of the blurriness? You are resizing the center of the image up and later back down; some quality loss will always happen. Would it be possible to, for example, keep the center area of the image at 100% and reduce only the sides? Just like FFR already does, but without enlarging the center. That way the center would be kept untouched, neither resized nor filtered up and down, and hence "should" keep its quality. I understand this may give a slightly bigger video, but the unresized area could be configured with the ratio option, and how much the sides are reduced could be configured with the strength, just like before. Even if I set the video resolution to 150%, the blurriness is still there from all the resizing and filtering, so I guess the only option is to leave the center unaltered as I suggested. Just an idea, because increasing resolution also increases the GPU load, just like supersampling.
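One way to picture this suggestion (a toy sketch; the band split and the pair-averaging downsampler are assumptions for illustration, not ALVR's actual shader): copy the central band through untouched and shrink only the sides, so the center is never resampled at all.

```python
def slices_compress(row, center_len):
    """Compress a 1-D pixel row: the central band is copied verbatim,
    each side band is halved by averaging adjacent pixel pairs."""
    lo = (len(row) - center_len) // 2
    hi = lo + center_len

    def halve(band):
        return [(band[i] + band[i + 1]) / 2 for i in range(0, len(band) - 1, 2)]

    return halve(row[:lo]) + list(row[lo:hi]) + halve(row[hi:])

row = list(range(16))
out = slices_compress(row, 8)
# The central 8 pixels (values 4..11) survive bit-exact with no filtering;
# only the side bands lose detail, and the row shrinks from 16 to 12 pixels.
print(out)
```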

zarik5 commented Oct 6, 2019

My first FFR implementation (warp) was just an experiment; I hadn't considered that resampling would cause blurriness.
Then I wrote a second implementation (slices) that addresses the blurriness problem. It's still not in any release (@JackD83, can you please make a new one?). A drawback of this implementation is that you can see the edges where the image is rendered at lower resolution.
Last month Oculus unveiled Oculus Link, which uses another FFR algorithm that keeps the center of the image sharp (like my slices implementation) but should make the edges of the lower-resolution region less visible. Maybe I'll implement this in ALVR when I have time.

JackD83 commented Oct 8, 2019

@zarik5 done: ev7

@JackD83 JackD83 closed this as completed Feb 29, 2020
ShootingKing-AM pushed a commit that referenced this issue Apr 12, 2024
[pull] master from alvr-org:master