Changing Submit texture resolution at runtime with OpenGL #216

Open
Dmytry opened this issue Aug 4, 2016 · 7 comments

Dmytry commented Aug 4, 2016

I've tried to change the resolution of an OpenGL texture used with Submit, via a slider. I'm using a single texture for both eyes; the texture is also used as a render target. I submit the left eye first.

This causes an incorrect or absent picture on the HMD until I restart my game. It seems that the first-ever call to Submit caches something about the texture format, and that cache is not invalidated on subsequent calls.
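
A minimal sketch of the pattern being described, assuming one shared GL texture rendered side-by-side and submitted per eye. Function names are illustrative, and the enum is `TextureType_OpenGL` in current OpenVR headers (older headers called it `API_OpenGL`):

```cpp
#include <cstdint>
#include <GL/gl.h>
#include <openvr.h>

GLuint g_eyeTex = 0;  // single texture: left half = left eye, right half = right eye

// Reallocate storage for the SAME texture object at a new size (GL ID unchanged).
void resizeSharedEyeTexture(int width, int height) {
    glBindTexture(GL_TEXTURE_2D, g_eyeTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
}

void submitBothEyes() {
    vr::Texture_t tex = { (void*)(uintptr_t)g_eyeTex,
                          vr::TextureType_OpenGL, vr::ColorSpace_Gamma };
    vr::VRTextureBounds_t left  = { 0.0f, 0.0f, 0.5f, 1.0f };
    vr::VRTextureBounds_t right = { 0.5f, 0.0f, 1.0f, 1.0f };
    // After resizeSharedEyeTexture(), these submits produce a corrupt or
    // absent image: the compositor seems to keep whatever it recorded about
    // the texture on the first-ever Submit with this GL ID.
    vr::VRCompositor()->Submit(vr::Eye_Left,  &tex, &left);
    vr::VRCompositor()->Submit(vr::Eye_Right, &tex, &right);
}
```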

jrbudda commented Aug 5, 2016

It's a 'feature'. See #147.

Dmytry commented Aug 5, 2016

I found that it works if the OpenGL texture ID is a different number, but that seems to waste video memory. And if you get a texture ID that was ever used with a different resolution, it's screwed.

So basically the only workaround would be to keep a list of resolutions and "deallocate" unused textures by setting their size to 1x1 or something instead of calling glDeleteTextures, which would probably be terrible for video memory layout.
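
A sketch of that "tombstone" idea, assuming one texture per resolution whose GL ID is never reused at another size. Shrinking to 1x1 keeps the ID alive while releasing most of its storage:

```cpp
#include <GL/gl.h>

// Retire a texture without glDeleteTextures: the GL ID stays reserved, so
// glGenTextures can never recycle it for a texture of a different size.
void tombstoneTexture(GLuint tex) {
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1, 1, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
}
```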

Dmytry commented Aug 5, 2016

Or keep a larger texture and only use a part of it, but that also wastes video memory and might not work.
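
The "use part of a larger texture" variant maps onto the bounds argument of Submit. A hedged sketch, assuming the texture is allocated once at maxW x maxH and the renderer draws into the lower-left usedW x usedH region:

```cpp
#include <cstdint>
#include <GL/gl.h>
#include <openvr.h>

void submitSubRect(vr::EVREye eye, GLuint tex,
                   int usedW, int usedH, int maxW, int maxH) {
    vr::Texture_t vrTex = { (void*)(uintptr_t)tex,
                            vr::TextureType_OpenGL, vr::ColorSpace_Gamma };
    // Tell the compositor to sample only the region that was rendered to.
    vr::VRTextureBounds_t bounds = { 0.0f, 0.0f,
                                     (float)usedW / (float)maxW,
                                     (float)usedH / (float)maxH };
    vr::VRCompositor()->Submit(eye, &vrTex, &bounds);
}
```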

ekmett commented Sep 7, 2016

For right now I just blow my resolve buffer out to the largest size I might adapt quality to. Sadly this doesn't work for user-selected super-sampling levels, leaving me with the above-mentioned 'tombstone' method. You can do a bit better, I suppose: if you only have a finite number of sizes, you can 'deallocate' to a 1x1 texture, but then reuse it when you need a new buffer of the same size. This can actually happen a fair bit, e.g. if the user is indecisively scrubbing back and forth on the super-sampling slider in your UI, or while testing.
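
A hedged sketch of that reuse scheme, assuming one render texture per size: a cache keyed by dimensions, where tombstoned entries are revived at their original size instead of minting a fresh GL ID. All names here are illustrative:

```cpp
#include <map>
#include <utility>
#include <GL/gl.h>

struct CachedTex { GLuint id; bool tombstoned; };
static std::map<std::pair<int,int>, CachedTex> g_texCache;

GLuint acquireEyeTexture(int w, int h) {
    auto key = std::make_pair(w, h);
    auto it = g_texCache.find(key);
    if (it == g_texCache.end()) {
        GLuint id = 0;
        glGenTextures(1, &id);
        it = g_texCache.emplace(key, CachedTex{ id, true }).first;
    }
    if (it->second.tombstoned) {
        // (Re)allocate full storage; each ID keeps one size for its lifetime.
        glBindTexture(GL_TEXTURE_2D, it->second.id);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        it->second.tombstoned = false;
    }
    return it->second.id;
}

void releaseEyeTexture(int w, int h) {
    auto it = g_texCache.find(std::make_pair(w, h));
    if (it == g_texCache.end() || it->second.tombstoned) return;
    glBindTexture(GL_TEXTURE_2D, it->second.id);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1, 1, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);  // shrink, don't delete
    it->second.tombstoned = true;
}
```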

Dmytry commented Sep 7, 2016

You could also combine the methods: keep a set of resolutions at which you create textures ("tombstoning" them as needed; I like your terminology), and for in-between resolutions use a piece of the next bigger texture.
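
A sketch of this combined scheme, building on the cache sketched above. The ladder of canonical sizes is illustrative; real code would presumably derive it from the HMD's recommended render target size:

```cpp
#include <openvr.h>
#include <GL/gl.h>

struct EyeTarget { GLuint tex; vr::VRTextureBounds_t bounds; int texW, texH; };

// Uses acquireEyeTexture() from the previous sketch.
EyeTarget targetForResolution(int wantW, int wantH) {
    static const int ladder[][2] = { {1024, 1024}, {1536, 1536}, {2048, 2048} };
    for (auto& s : ladder) {
        if (wantW <= s[0] && wantH <= s[1]) {
            EyeTarget t;
            t.texW = s[0]; t.texH = s[1];
            t.tex  = acquireEyeTexture(t.texW, t.texH);
            // Sample only the wantW x wantH region actually rendered to.
            t.bounds = { 0.0f, 0.0f,
                         (float)wantW / t.texW, (float)wantH / t.texH };
            return t;
        }
    }
    // Requested size exceeds the ladder: clamp to the largest entry.
    return targetForResolution(2048, 2048);
}
```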

What I would be concerned about is that OpenVR might allocate a huge amount of video memory for each new size; I haven't tested for that.

I'm just not implementing it for now.

Codes4Fun added a commit to Codes4Fun/RBDOOM-3-BFG that referenced this issue Nov 15, 2016
Added console command vr_resolutionScale, but it doesn't work in realtime;
see: ValveSoftware/openvr#216

Could try to allocate a larger buffer and see if I can simply use less of it,
depending on scale. My understanding is that this is the preferred way of
doing it, because it allows a game to adapt to drops in performance by
lowering the resolution. But this is problematic in the current architecture
of this engine, since it is designed to reallocate a bunch of buffers when
the resolution changes.

Anyway, for now you have to restart after changing this.

DuncanHopkinsFoundry commented Mar 8, 2017

I have also been caught out by this issue.
Is there any chance of adding an API to tell the compositor to flush any assumptions it has made (i.e. cached data such as texture sizes)? In our case the resolution changes in response to user input rather than any automated system, so it happens rarely.

m-schuetz commented Sep 3, 2018

Also just encountered this problem with resizing. A workaround is to use a different texture ID, or perhaps to allocate a much larger texture in advance. Allocating new texture IDs isn't a sustainable solution, though, and both approaches seem unnecessarily cumbersome. What's strange is that changing the number of samples in a multisample texture works, however.

Any chance of getting a texture-cache reset option?

Edit: No, using different texture IDs actually works fine. I guess it's only the last one that gets cached? I'm now creating new textures before deleting the old ones, thereby getting new texture IDs on resize, so all is good.
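
A sketch of that ordering, with illustrative names: allocate the replacement first so GL cannot hand back the ID being retired. (A later glGenTextures could still recycle the freed ID, but per the guess above only the most recently submitted texture appears to be cached.)

```cpp
#include <GL/gl.h>

// On resize: create the new texture BEFORE deleting the old one, so the new
// GL ID is guaranteed to differ from the one the compositor has cached.
GLuint recreateEyeTexture(GLuint oldTex, int newW, int newH) {
    GLuint newTex = 0;
    glGenTextures(1, &newTex);          // oldTex is still alive: newTex != oldTex
    glBindTexture(GL_TEXTURE_2D, newTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, newW, newH, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glDeleteTextures(1, &oldTex);       // now safe to release the old ID
    return newTex;
}
```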
