
Depth buffering doesn't function when drawing to bitmaps in the OpenGL backend. #947

Open
RogerLevy opened this Issue Oct 22, 2018 · 23 comments

RogerLevy commented Oct 22, 2018

Unlike with the D3D backend, in the OpenGL backend you have to call al_set_new_bitmap_depth before creating your drawing surface, or else depth buffering will not work when you draw to it.

I suggest that the default should be for all created (not loaded) bitmaps to get depth buffers of the same bit depth as the display. To disable it, you would call al_set_new_bitmap_depth(0) before creating your bitmap.
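
For reference, a minimal sketch of the current workaround, assuming a display has already been created and ALLEGRO_UNSTABLE is defined (al_set_new_bitmap_depth is part of the unstable API); the 640x480 size and 16-bit depth below are arbitrary choices for illustration:

```c
#define ALLEGRO_UNSTABLE   /* al_set_new_bitmap_depth is an unstable API */
#include <allegro5/allegro.h>

/* Sketch: request a depth buffer before creating the bitmap, then
 * enable depth testing while drawing into it. Assumes a display has
 * already been created. */
void render_with_depth(void)
{
    al_set_new_bitmap_depth(16);                  /* request a 16-bit depth buffer */
    ALLEGRO_BITMAP *target = al_create_bitmap(640, 480);
    al_set_new_bitmap_depth(0);                   /* restore the default (no depth) */

    al_set_target_bitmap(target);
    al_set_render_state(ALLEGRO_DEPTH_TEST, 1);
    al_clear_to_color(al_map_rgb(0, 0, 0));
    al_clear_depth_buffer(1.0f);
    /* ... draw depth-tested primitives into `target` here ... */
    al_set_render_state(ALLEGRO_DEPTH_TEST, 0);
}
```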

RogerLevy changed the title from "Depth buffering breaks in the OpenGL backend if you are drawing to bitmaps." to "Depth buffering doesn't function when drawing to bitmaps in the OpenGL backend." Oct 22, 2018

fatcerberus commented Oct 30, 2018

This seems worth noting, from the OpenGL documentation:

If a texture has a depth or depth-stencil image format and has the depth comparison activated, it cannot be used with a normal sampler. Attempting to do so results in undefined behavior.

SiegeLord commented Oct 31, 2018

The expected behaviour is that you have to call al_set_new_bitmap_depth. The fact that this works without that call on D3D is a curious but unsupported feature. I have attempted to implement this properly on D3D, but it wasn't obvious how. It's not clear exactly what's happening on D3D right now (e.g. what depths you are getting, etc.).

RogerLevy commented Nov 1, 2018

My guess is that the display's depth buffer is still active.

RogerLevy commented Nov 1, 2018

For what it's worth, making a depth buffer the default on created bitmaps would save people a lot of head-scratching. Considering Allegro is meant to be an "easy" library, I think doing that would follow its design philosophy.

fatcerberus commented Nov 1, 2018

Right, but according to the quote above, textures with depth buffering enabled can't actually be used as normal textures for drawing (i.e. with a sampler2D), so that would make for even more head-scratching, since stuff like al_draw_bitmap works by drawing textured quads.

RogerLevy commented Nov 1, 2018

So how is one supposed to draw textures they've drawn to (with depth buffering)?

RogerLevy commented Nov 1, 2018

It sounds like Allegro just has to turn off the depth test for the bitmap being drawn (and then re-enable it if it was enabled) on D3D.
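
Something along these lines, sketched in user code rather than inside Allegro (the caller tracks the previous depth-test state itself, since Allegro does not appear to expose a render-state getter):

```c
#include <allegro5/allegro.h>

/* Sketch: draw a depth-enabled bitmap as a plain texture with the
 * depth test temporarily disabled, then restore the previous state.
 * `depth_test_was_on` is tracked by the caller. */
void draw_depth_bitmap(ALLEGRO_BITMAP *bmp, float x, float y,
                       bool depth_test_was_on)
{
    al_set_render_state(ALLEGRO_DEPTH_TEST, 0);
    al_draw_bitmap(bmp, x, y, 0);
    if (depth_test_was_on)
        al_set_render_state(ALLEGRO_DEPTH_TEST, 1);
}
```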

SiegeLord commented Nov 2, 2018

I appreciate the ease-of-use argument, but to counter it I want to point out a number of 'performance bugs' that are caused by Allegro's ease-of-use defaults. E.g. the ALLEGRO_NO_PRESERVE_TEXTURE flag is something we commonly have to tell people to use to get faster to-bitmap drawing when they don't need the bitmap contents preserved between frames. I'd rather this not become yet another thing that you have to turn off to get good performance in the common case.

That said, making this conditional on the display being created with a depth buffer might be a possibility. Either way, I'd rather things work consistently first, and then we can figure out a good default. My main concern is that I don't think D3D is actually working correctly right now.
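
For context, that flag is opted into before bitmap creation; a minimal sketch (the helper name create_scratch_target and the save/restore pattern are illustrative, not part of Allegro's API):

```c
#include <allegro5/allegro.h>

/* Sketch: opt out of the contents-preserving backup for a scratch
 * render target whose contents are regenerated every frame anyway. */
ALLEGRO_BITMAP *create_scratch_target(int w, int h)
{
    int old_flags = al_get_new_bitmap_flags();
    al_set_new_bitmap_flags(old_flags | ALLEGRO_NO_PRESERVE_TEXTURE);
    ALLEGRO_BITMAP *bmp = al_create_bitmap(w, h);
    al_set_new_bitmap_flags(old_flags);   /* restore previous creation flags */
    return bmp;
}
```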

fatcerberus commented Nov 2, 2018

Counterpoint for ALLEGRO_NO_PRESERVE_TEXTURE specifically: I actually don’t bother to turn this flag on because whether or not I lose the rendering context is out of my control and it’s nice not to have to check for that manually everywhere. In most cases I’ve found that even if I’m using a bitmap as an FBO I still want its contents to persist across frames.

RogerLevy commented Nov 2, 2018

I would argue that ease of use should always take priority over performance. If a person doesn't realize they can turn that flag on to get better performance, either the docs don't make it clear enough, or the API isn't designed optimally. For example, you could have a separate function that creates the "unstable" bitmap and immediately does something with it in a callback. That would guarantee proper usage and avoid confusion.

RogerLevy commented Nov 2, 2018

Getting back to it: I'm pretty sure depth compare is trivial or free nowadays, and the only thing you sacrifice is memory (and it's pretty easy to optimize afterwards, which is what you should always do).

fatcerberus commented Nov 2, 2018

You can always write a wrapper to make a difficult-to-use API easier to use. If the API makes tradeoffs in the name of ease of use at the expense of performance, there’s no way to fix that from the outside.

As an example: Allegro has associated with each bitmap a modelview and projection matrix (and lots of other state, but let’s just look at the matrices for now). Every time the shader is changed or a different bitmap is selected as the render target, new matrices have to be sent to the GPU. This is cheap, but not free, and there’s no way for the application to optimize away the extra state changes if they’re not needed (for example if it’s using custom matrix uniforms instead of the default one). This isn’t just theoretical either: it’s a real performance bottleneck I’ve had to debug in the past.
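
A hedged sketch of the custom-uniform approach mentioned above; the shader object and the uniform name "my_projview" are assumptions for illustration, not Allegro defaults, and the shader's vertex stage would have to declare and use that uniform instead of Allegro's built-in al_projview_matrix:

```c
#include <allegro5/allegro.h>

/* Sketch: upload one custom projection matrix to a user shader,
 * instead of relying on Allegro's per-target default matrices. */
void use_custom_projection(ALLEGRO_SHADER *shader, int w, int h)
{
    ALLEGRO_TRANSFORM proj;
    al_identity_transform(&proj);
    al_orthographic_transform(&proj, 0, 0, -1, w, h, 1);

    al_use_shader(shader);                       /* bind shader to the current target */
    al_set_shader_matrix("my_projview", &proj);  /* upload once, reuse across draws */
}
```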

RogerLevy commented Nov 2, 2018

It's not really an expense if the user can optimize it easily, though.

If you need the best performance and there isn't any satisfactory recourse, you can always drop down to OpenGL/D3D in places.

Don't overthink things. You can't make a library be everything to everyone.

RogerLevy commented Nov 2, 2018

(And I realize at this point I'm probably fighting a losing battle here but this is just my personal design philosophy.)

fatcerberus commented Nov 2, 2018

Personally, I'd rather not pay for things I don't use if it can be avoided. Yes, you can disable it afterwards if you don't need it... but unless you can make a case that the large majority of users will want depth buffering on all bitmaps created with al_create_bitmap(), I can't see that this is a useful default. Why force a depth buffer to be allocated if it's going to be immediately freed over half the time?

RogerLevy commented Nov 2, 2018

Because it doesn't really cost the user anything to just have it there. Not if you factor in how useful it would be not to have to think about it, and to have rendering with depth compare just work. Remember Allegro's roots as an easy-to-use library; from my view, preserving that tradition is what distinguishes it: we're essentially working around the complexity of modern APIs.

On the other hand, we could just add an al_create_render_bitmap with all the options as arguments and not drive ourselves insane over the question ;)
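
No such function exists in Allegro; a user-side sketch of what the suggested helper might look like (named create_render_bitmap here to stay out of the al_ namespace, and relying on the unstable al_get_new_bitmap_depth / al_set_new_bitmap_depth pair):

```c
#define ALLEGRO_UNSTABLE   /* al_get/set_new_bitmap_depth are unstable API */
#include <allegro5/allegro.h>

/* Hypothetical convenience wrapper, not part of Allegro's API.
 * Pass depth = 0 for no depth buffer. */
ALLEGRO_BITMAP *create_render_bitmap(int w, int h, int depth)
{
    int old_depth = al_get_new_bitmap_depth();
    al_set_new_bitmap_depth(depth);
    ALLEGRO_BITMAP *bmp = al_create_bitmap(w, h);
    al_set_new_bitmap_depth(old_depth);   /* restore previous creation state */
    return bmp;
}
```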

RogerLevy commented Nov 2, 2018

And yes, anyone can create a wrapper function, but that is not a real counterargument... The choice not to make a choice is a practice that's way too common; it puts the burden on the user and increases errors.

EdgarReynaldo commented Nov 2, 2018

The reasonable suggestion is to opt in to depth buffering. It doesn't make sense for all bitmaps to have depth; only the few that will be drawn to should have a depth buffer. Making every bitmap have a depth buffer sounds like it would cause problems with shaders. We don't want to break existing functionality in favor of new features. The problem with shaders should be noted in the docs, along with the explicit opt-in to depth buffering. It should be a new bitmap flag the user has to set, in my opinion.

dos1 commented Nov 5, 2018

Because it doesn't really cost the user anything to just have it there.

It does cost VRAM. One of my Allegro games uses almost 2 GB of texture memory just thanks to lots of hand-drawn animations :P It was already crashing on 32-bit platforms until I realized I can safely turn the ALLEGRO_NO_PRESERVE_TEXTURE flag on when using desktop OpenGL. Adding a depth buffer everywhere by default would likely be a literal breaking change there. Some drivers may optimize this out, but I wouldn't count on it.

fatcerberus commented Nov 5, 2018

Wait, is that true? ALLEGRO_NO_PRESERVE_TEXTURE can be safely enabled for OpenGL and I won’t randomly lose the contents? That’s good to know if so.

dos1 commented Nov 5, 2018

As far as I understand it from browsing the code, ALLEGRO_NO_PRESERVE_TEXTURE matters only on the DirectX, Android, and SDL backends. The documentation doesn't even agree with itself on this, though ;) (see #958)

(However, the backup still takes up memory on the OpenGL backend, even though I can't seem to find the code that would actually use it.)

kwellwood commented Nov 7, 2018

Design philosophy aside, there actually is a bug here that breaks depth buffers on the OpenGL backend and it has nothing to do with ALLEGRO_NO_PRESERVE_TEXTURE. I discovered it while writing a deferred shading renderer, where I draw light and geometry depths to a separate texture with a depth buffer and then draw that texture to the display's backbuffer.

@SiegeLord, take a look at line 207 of src/opengl/ogl_fbo.c (master branch), where the ALLEGRO_FBO_INFO is populated. There is a typo that sets the depth buffer's width to its height, and the height is not set at all.

I use the OpenGL backend on Windows and this bug didn't bite me until I tried running my code on Mac OS. I assume Linux would have the same issue.

Here's an Hg patchfile so you can see the fix: master_ogl_dbo.txt
