
incomplete rendered texture #3

Closed
andreasplesch opened this Issue Jan 20, 2018 · 13 comments

andreasplesch commented Jan 20, 2018

michaliskambi commented Jan 21, 2018

Looks like the maximum texture size (or cubemap face size?) of your GPU / OpenGL is smaller than the size requested in the file (1024). View3dscene (actually, Castle Game Engine) then automatically uses a smaller size (512). But it seems that something is wrong, and it still tries to use the larger size (1024) elsewhere.
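For illustration, a minimal sketch of that clamping, assuming FPC's GL / GLext units and a current GL context (the helper name is hypothetical, not the actual Castle Game Engine code):

uses GL, GLext;

function ClampCubeMapSize(const RequestedSize: Integer): Integer;
var
  MaxSize: GLint;
begin
  { Ask the driver for the largest cube map face it supports. }
  glGetIntegerv(GL_MAX_CUBE_MAP_TEXTURE_SIZE, @MaxSize);
  Result := RequestedSize;
  { E.g. a requested 1024 is reduced to 512 when MaxSize is 512. }
  if Result > MaxSize then
    Result := MaxSize;
end;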

I cannot reproduce it here; for me even GeneratedCubeMapTexture { update "ALWAYS" size 4096 } works (although updating it every frame is slow). The size 8192 does not work for me, but there's a fair warning "Unsupported framebuffer configuration, will fallback to glCopyTexSubImage2D approach", so I'm not sure whether this is the same problem you have.

  1. Does view3dscene show any warnings for you? (File -> View Warnings)

  2. Can you generate the debug log (it contains various information about your GPU) by running view3dscene --debug-log > log.txt on the command line, and attach log.txt here?

  3. The rendered_texture_with_background.x3dv also has a Box on the right with RenderedTexture, but you commented that out, right? You're testing only with Teapot covered by GeneratedCubeMapTexture?

Thanks!

Here's how the complete rendered_texture_with_background.x3dv should look, for reference:

(screenshot: rendered_texture_with_background_0)

andreasplesch commented Jan 21, 2018

michaliskambi added a commit to castle-engine/castle-engine that referenced this issue Jan 23, 2018

michaliskambi commented Jan 23, 2018

I see that GLVersion.BuggyFBOCubeMap is set in your case. See https://github.com/castle-engine/castle-engine/blob/master/src/base/opengl/castleglversion.pas#L132 . In this case, we fall back to rendering without an FBO: we render to the regular back buffer of your window, and then use glCopyTexSubImage2D to copy the buffer contents to your texture.

One of the drawbacks of this approach (besides the fact that it's slower) is that the size is limited by your current window size (which in turn is limited by Windows to your desktop size).
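A minimal sketch of that fallback path, assuming FPC's GL / GLext units (the procedure name is hypothetical; the real code lives in Castle Game Engine): the scene for one cube map face has just been rendered to the back buffer, and its contents are copied into the cube map texture.

uses GL, GLext;

procedure CopyBackBufferToCubeMapFace(const CubeMapTex: GLuint;
  const Face: GLenum; const Size: Integer);
begin
  glBindTexture(GL_TEXTURE_CUBE_MAP, CubeMapTex);
  glReadBuffer(GL_BACK);
  { Copy the lower-left Size x Size area of the back buffer into
    mipmap level 0 of the given face (e.g. GL_TEXTURE_CUBE_MAP_POSITIVE_X).
    Size cannot exceed the window size, hence the limitation above. }
  glCopyTexSubImage2D(Face, 0, 0, 0, 0, 0, Size, Size);
end;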

It would be simplest to make BuggyFBOCubeMap = false in your case. I tried it now, and the new view3dscene should be built on http://michalis.ii.uni.wroc.pl/view3dscene-snapshots/ . Can you download it, and

  1. check that "Buggy FBO rendering to cube map texture" is now False (besides being visible in --debug-log, this is also visible in the menu item Help -> OpenGL Information).

  2. and then check how the cubemap looks?

  3. Unrelated to this particular issue: if this fix works, I may ask you for some additional tests on your GPU :) We have some more things that are disabled on Intel GPUs, and looking at the code I think they may be disabled too widely. They can probably be enabled on newer driver versions.

michaliskambi commented Jan 23, 2018

The new view3dscene should now be built on http://michalis.ii.uni.wroc.pl/view3dscene-snapshots/ :)

andreasplesch commented Jan 23, 2018

OK, looks good with the snapshot.

(screenshots attached)

Intel HD4400 on win10pro (Dell Latitude 7240)

michaliskambi added a commit to castle-engine/castle-engine that referenced this issue Jan 23, 2018

michaliskambi commented Jan 23, 2018

OK, good that it works. Thanks for testing!

  1. Can you also test on your older Intel GPU? The GPU from your previous comment was older, as far as OpenGL is concerned:
Version string: 3.1.0 - Build 9.17.10.3040 
Renderer: Intel(R) HD Graphics 3000

The new one has:

Version string: 4.3.0 - Build 20...
Renderer: Intel(R) HD Graphics 4400

So that was an older graphics card model (3000 vs 4400), with an older driver (Build 9 vs Build 20) exposing an older OpenGL API (3.1 vs 4.3). In particular, the difference in driver version (Build 9 vs Build 20) can be significant in my experience; newer versions tend to be more stable.

So, if you have access to it, please also test on that older GPU. My "fix" keeps BuggyFBOCubeMap as "true" for Build versions <= 8, and "false" for Build versions >= 9 (a sketch of this heuristic follows after the list).

  2. Unrelated to this issue, if I can, I would like to use your access to Intel GPUs on Windows to test an additional fix :) I just committed an additional change to CGE that makes "glGenerateMipmap" no longer considered "buggy".

In effect, the warning "OpenGL implementation doesn't allow any glGenerateMipmap* version..." should disappear, and in Help -> OpenGL Information you should see GenerateMipmap available (and reliable): True, and in general this GPU will be treated like a normal non-buggy modern GPU.
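For the general idea only, a hypothetical sketch of such a driver-version heuristic (the real logic is in castleglversion.pas and differs in detail): the "Build" number from the Intel Windows version string, e.g. "3.1.0 - Build 9.17.10.3040", decides whether the workaround flags stay enabled.

function BuggyIntelFBOCubeMap(const VendorIsIntel: Boolean;
  const BuildMajor: Integer): Boolean;
begin
  { Build <= 8 drivers keep the workarounds (BuggyFBOCubeMap,
    buggy glGenerateMipmap); Build >= 9, as on both GPUs tested
    in this issue, is treated as non-buggy. }
  Result := VendorIsIntel and (BuildMajor <= 8);
end;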

I would very much appreciate it if you would test the newest version of view3dscene on both your Intel GPUs (4400 and 3000). Simply open rendered_texture_with_background.x3dv and check whether it looks fine. (You can also check other things in https://github.com/castle-engine/demo-models/tree/master/rendered_texture and https://github.com/castle-engine/demo-models/tree/master/cube_environment_mapping . But rendered_texture_with_background.x3dv is probably the most important one: it's trivial and it exercises both cubemaps and a regular 2D texture.)

http://michalis.ii.uni.wroc.pl/view3dscene-snapshots/ already contains the updated view3dscene version. Thank you!

andreasplesch commented Jan 23, 2018

Yes, here is the older HD3000 test with the latest snapshot:
(screenshots attached)
The mipmap warning disappeared as well.

michaliskambi commented Jan 23, 2018

Great! Thank you for testing -- so I'm closing this issue, as everything seems fixed :)

andreasplesch commented Jan 23, 2018

Just for future reference on HD3000, win7:

render_texture_tweak_size: after pressing s four times:

view3dscene: VRML/X3D warning: Framebuffer error, generated texture not possible: Maximum renderbuffer (within framebuffer) size is 4096 x 4096 in your OpenGL implementation, while we require 8192 x 8192

This seems ok, 4k textures.

shaders:

view3dscene: VRML/X3D warning: Cannot use GLSL shader for shape "Sphere": Geometry shaders not supported by your OpenGL version

two_textures.x3dv
view3dscene: VRML/X3D warning: Cannot use GLSL shader for shape "IndexedFaceSet": Fragment shader not compiled:
WARNING: 0:12: 'texture2D' : implicit conversion is allowed from GLSL 1.20
ERROR: 0:12: 'texture2D' : no matching overloaded function found (using implicit conversion)
WARNING: 0:13: 'texture2D' : implicit conversion is allowed from GLSL 1.20
ERROR: 0:13: 'texture2D' : no matching overloaded function found (using implicit conversion)
ERROR: 0:11: 'assign' : cannot convert from 'const float' to 'FragColor 4-component vector of float'

cellular_texturing.x3dv: slow 10fps
cellular_texturing_mirror_fun.x3d: 1fps

michaliskambi commented Jan 23, 2018

We only support geometry shaders since OpenGL 3.2 (see https://castle-engine.sourceforge.io/x3d_implementation_shaders.php#section_geometry_old ). So this is OK too when testing on the older GPU (your "Intel(R) HD Graphics 3000" only has OpenGL 3.1).

two_textures.x3dv warnings: should be fixed, but please test (I could not reproduce them; I'm on an NVidia GPU now and it's less strict about some GLSL constructs).

cellular_texturing slowness: that's also OK, I'm afraid, these demos do something really heavy in the shader :)

andreasplesch commented Jan 24, 2018

Just a few observations on the HD4400:
cellular_texturing.x3dv : >60fps (!)
cellular_texturing_mirror_fun.x3d: >60 fps(!) but looks a little strange (very red)

render_texture_tweak_size: after pressing s five times: 8k tex still ok.

Logger "L": received field "set_dimensions" (MFInt32). Time: 13.81. Sending node: "MyScript" (Script). Value: set_dimensions [
16384, 16384,
]
view3dscene: OpenGL warning: Check errors before checking FBO status
OpenGL error (1285): There is not enough memory left to execute the command.
view3dscene: VRML/X3D warning: Framebuffer error, generated texture not possible: Framebuffer check failed: INCOMPLETE_ATTACHMENT: Not all framebuffer attachment points are "framebuffer attachment complete" (FBO error number 36054)

geometry_shader.x3dv (with OpenGL 4.3)

view3dscene: VRML/X3D warning: Cannot use GLSL shader for shape "Sphere": Geometry shader not compiled:
ERROR: 0:30: '[' : layout must be declared before indexing unsized varying input array with a variable
ERROR: 0:31: '[' : layout must be declared before indexing unsized varying input array with a variable
ERROR: 0:33: '[' : layout must be declared before indexing unsized varying input array with a variable
ERROR: 0:52: '[' : layout must be declared before indexing unsized varying input array with a variable
ERROR: 0:53: '[' : layout must be declared before indexing unsized varying input array with a variable
ERROR: 0:55: '[' : layout must be declared before indexing unsized varying input array with a variable

two_textures.x3dv:
view3dscene: VRML/X3D warning: Cannot use GLSL shader for shape "IndexedFaceSet": Fragment shader not compiled:
WARNING: 0:4: '' : #version directive missing
ERROR: 0:12: 'texture2D' : no matching overloaded function found (using implicit conversion)
ERROR: 0:14: 'texture2D' : no matching overloaded function found (using implicit conversion)
ERROR: 0:12: 'assign' : cannot convert from 'const mediump float' to 'FragColor 4-component vector of mediump float'

michaliskambi commented Jan 24, 2018

cellular_texturing_mirror_fun.x3d: >60 fps(!) but looks a little strange (very red)

The red is correct -- it has a red background, and everything is a mirror, so everything gets red-tinted.

render_texture_tweak_size with 16384

At some point, the error "There is not enough memory left to execute the command" is unavoidable, I'm afraid. It's the one OpenGL error that can in theory occur at any time, and it is beyond the control of the application. We check things like Width > GLFeatures.MaxRenderbufferSize earlier (where MaxRenderbufferSize = glGetInteger(GL_MAX_RENDERBUFFER_SIZE)), but that does not guarantee we avoid the "out of memory" error.
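As a hedged sketch of that pre-check (assuming FPC's GL / GLext units; the helper names are hypothetical, not CGE's actual API):

uses GL, GLext;

function RenderbufferSizeAcceptable(const Width, Height: Integer): Boolean;
var
  MaxSize: GLint;
begin
  { GL_MAX_RENDERBUFFER_SIZE is only an upper bound; an allocation
    below it can still fail with GL_OUT_OF_MEMORY. }
  glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE, @MaxSize);
  Result := (Width <= MaxSize) and (Height <= MaxSize);
end;

procedure WarnIfOutOfMemory;
begin
  { Called after allocating the renderbuffer / texture. }
  if glGetError = GL_OUT_OF_MEMORY then
    WriteLn('Not enough GL memory, falling back to a smaller size');
end;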

geometry_shader.x3dv
two_textures.x3dv

Hm, I would have to experiment a bit to understand what the problem with the GLSL code is there. It works on my current GPU (NVidia on Linux). I'll keep an eye on them, and I'll also move them to a separate bug report, so they don't get forgotten.

andreasplesch commented Jan 25, 2018

Ok.
