Basic deferred rendering #2624

Open
MPanknin opened this Issue Nov 12, 2012 · 44 comments

@MPanknin

Hi everybody,

I just finished a first version of a deferred rendering example using your lovely library.

http://blog.mpanknin.de/?p=848

It currently supports point light sources as well as deferred shadow maps.

It doesn't yet support spot lights, point-light shadows, etc. There's still a lot to do; however, it can already handle a decent number of point lights. I was able to render > 1000 point lights on a GTX 560 this afternoon, at a frame rate of around 50 fps.

Unfortunately, the G-buffer has to be filled in two passes: one for depth and another for normals, since WebGL has no support for multiple render targets (MRT) yet.

If you are interested, I'm sure I can release the source sometime. Before doing that, however, I need to clean up a couple of things, as the code is a bit messy here and there.

What do you think?

@alteredq


alteredq Nov 12, 2012

Contributor

That's really cool.

Deferred rendering is something I've wanted to try; I was just postponing it until there would be multiple render targets, which finally seems about to happen:

https://www.khronos.org/webgl/public-mailing-list/archives/1210/msg00046.html

BTW on my notebook with Nvidia Quadro 2000M I get ~27 fps on your demo.

It would be nice if you could release your code, even if it's messy; if nothing else, at least to get a feel for the performance profile of the deferred rendering approach.


@mrdoob


mrdoob Nov 12, 2012

Owner

I get 5fps ;) - Nvidia GeForce 9400M


@tapio


tapio Nov 12, 2012

Contributor

Another demo result: 50-60 fps with the default viewport, 30 when zoomed out to view the whole scene. Chrome 24b, Linux, GTX460.

I also started my own deferred renderer a while ago, but did not come even close to getting anything to render. Glad someone else is trying it, as this is very interesting.


@WestLangley


WestLangley Nov 12, 2012

Collaborator

Hmm... My frame rate doubles when zooming out to the whole scene: from 20 fps with the default viewport to 40 fps zoomed out. OS X, Chrome 23.0, AMD Radeon HD 6750M.


@MPanknin


MPanknin Nov 13, 2012

Thanks for your comments and posting some test results. It's really interesting.
I just did another run on a GT540M and got around 15-25 fps depending on the zoom factor.

@alteredq OK, sure, why not. However, the code is currently somewhat entangled with our web deployment pipeline; I need to extract the relevant parts and put together a single demo file.

@tapio @WestLangley Yes, a frame rate that changes with the zoom factor is normal behaviour. The lighting calculations are executed inside the fragment shader for the light proxies, so if you move further away, fewer pixels are affected and less lighting calculation is performed. You also have to be careful how you place your lights in the scene: if, for example, you have many overlapping light sources, the lighting calculation may be performed multiple times per pixel, which is of course problematic for performance.

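The zoom effect can be made concrete: a spherical proxy's projected radius falls off linearly with camera distance, so the number of shaded fragments falls off quadratically. A minimal plain-JavaScript sketch (the function name and all numbers are hypothetical, not from the demo):

```javascript
// Approximate shaded-fragment count of a spherical light proxy
// under a simple pinhole projection. Illustrative numbers only.
function proxyPixelCost(lightRadius, distance, focalLengthPx) {
  const projectedRadius = (focalLengthPx * lightRadius) / distance;
  // Fragments shaded ~ area of the projected disc.
  return Math.PI * projectedRadius * projectedRadius;
}

const close = proxyPixelCost(2, 10, 800); // camera near the light
const far = proxyPixelCost(2, 20, 800);   // camera twice as far away
console.log(close / far); // → 4: doubling the distance quarters the cost
```

Overlapping lights multiply this cost per covered pixel, which is why placement matters.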

@MPanknin


MPanknin Nov 14, 2012

I reduced the number of lights in the demo, as 440 seemed a bit too heavy for some mobile cards.


@MPanknin


MPanknin Nov 14, 2012

OK, here is the source:

http://data.redplant.de/webgl/deferred/publicdemo.html

There is, however, one problem left: I'm using a shader minifier to convert my GLSL source files into JavaScript string arrays, and I'm not sure how to tell it not to rename local variables, so the shader code is not perfectly readable. But maybe this already helps to give an impression of how it's done.

I will update the code once I've figured out how to preserve all variable names.

Cheers


@MPanknin


MPanknin Nov 14, 2012

Shader code is updated


@mrdoob


mrdoob Nov 14, 2012

Owner

It took me a while to understand that the wireframe spheres were the light emitters.
Maybe using this demo as a base would showcase it better.


@alteredq


alteredq Nov 14, 2012

Contributor

@MPanknin Cool, thanks for sharing.

Would you like to contribute this as a three.js example? It could be a good starting point for an eventual full-blown deferred renderer later on.

If you just add the necessary files to the examples directory of your three.js clone, we can merge it and continue with development. Don't worry about having it perfect; what you already posted is a fine start.


@MPanknin


MPanknin Nov 16, 2012

@mrdoob You are right, the wireframe spheres really were a bit confusing. I replaced them with solid colored spheres as emitters. I also prepared a new demo based on the link you provided.
You can check the new result at: http://en.redplant.de/projects.html#project_webgl_deferred_demo

This example shows only the lighting part. I removed all the shadow mapping parts and cleaned up the code a bit. I might put together a second example only for the shadow mapping part.

@alteredq Of course. I would be glad to contribute this as an example.


alteredq added a commit to alteredq/three.js that referenced this issue Nov 20, 2012

Merged @MPanknin's deferred rendering example.
- made it work with r54dev
- cleaned up formatting
- fixed light pass not actually moving light proxies (there is less overdraw but lights now can get culled sometimes, need to fix this)
- changed high-def Walt to be UTF8 model
- made geometry passes work with hierarchies

See #2624
@alteredq


alteredq Nov 20, 2012

Contributor

@MPanknin Thanks, I merged the example from your branch.

I tried to play a bit with the example; some observations:

  • contrary to my expectations, the geometry passes are not that bad, which means multiple render targets are probably not going to help as much as hoped (at least not until we have larger scenes)
  • using smaller render targets (e.g. float RGB instead of RGBA) didn't really help either; in fact, mixing RGB and RGBA across render passes made things slower
  • the single biggest bottleneck seems to be the overdraw of light proxies in the lighting pass (the best use case is many lights each affecting little screen space; global lights are bad)
@mrdoob


mrdoob Nov 21, 2012

Owner

The example is now a little bit more awesomer ;D


@alteredq


alteredq Nov 21, 2012

Contributor

;)

Meanwhile I also added support for material colors and textures; as Walt doesn't have these, I switched to Ben.

alteredq@a47adcc

Still so many things to do. E.g. I'm failing miserably at combining more G-buffer passes by packing more data into the render targets, but it's fun.


@MPanknin


MPanknin Nov 22, 2012

Good to hear, that you are having fun with the demo. :)

@mrdoob Nice!

@alteredq Also nice!

I started implementing spotlights yesterday. I'm not sure, however, whether they should get a separate shader or whether the different light types should be #ifdef'ed in a single file. The former would result in some code duplication, the latter in a more complex shader file.

Any ideas?


@alteredq


alteredq Nov 22, 2012

Contributor

Maybe it would be better to start with separate shaders per light type?

We could then optimize / condense later when we learn more about how things work.

BTW, for the spotlight proxy, maybe the SpotLightHelper from the editor could be used; I made it to match the SpotLight cone exactly one-to-one.


@mrdoob


mrdoob Nov 22, 2012

Owner

I guess it would also be good to start moving all this into a WebGLDeferredRenderer or something.


@alteredq


alteredq Nov 22, 2012

Contributor

Eventually yes, that's the plan. For the moment I don't understand the problem space well enough yet; it's easier to tinker with the pieces at the application level.

Also, deferred rendering is not really like the other renderers: it's not self-standing, it needs WebGLRenderer.

Structurally, the closest thing to this is maybe our stereoscopic 3D "effects": they take an existing renderer, scene and camera and render them in a different way.

I believe it'll become clearer how to "package" this with more use; e.g. shadow maps and EffectComposer started like this.


@MPanknin


MPanknin Nov 23, 2012

Yep, you need WebGLRenderer: on the one hand for generating the G-buffer, but eventually also for handling transparency. One approach seems to be rendering all opaque objects first in a deferred fashion and then doing a second, forward pass for all transparent objects.


@MPanknin


MPanknin Nov 23, 2012

I forgot. SpotLightHelper is really helpful, indeed. :)


@alteredq


alteredq Nov 23, 2012

Contributor

Yup yup, handling transparency would likely need a full blown forward rendering pass.

Meanwhile it occurred to me we could also do it in "yo dawg" way and have WebGLRenderer inside WebGLDeferredRenderer.

BTW, @benaadams mentioned on Twitter that they used billboards for light proxies in their deferred rendering experiment:

http://www.illyriad.co.uk/blog/index.php/2011/11/webgl-experiments-illyriads-3d-town/

I wonder how well that would work. Performance-wise, I guess overdraw would still be as bad as it is now, but at least it should help with the missing-light artefacts when the camera is inside a light volume (the light proxy is then culled away by backface culling, and you can't just turn culling off because the proxies are additively blended transparent objects).


@benaadams


benaadams Nov 23, 2012

Contributor

For the linked demo (Nov 2011) the unminified script is here, which might make more sense (though it is a deadline-rushed mess): http://www.illyriad.co.uk/3dDemo/newGame_conf3.js

The lights are flip-sided and light rendering was:

_gl.disable(_gl.DEPTH_TEST);
_gl.enable(_gl.BLEND);
_gl.blendEquation(_gl.FUNC_ADD);
_gl.blendFunc(_gl.ONE, _gl.ONE_MINUS_SRC_ALPHA);

The shaders are a little overcomplicated, as we also decompress paletted/indexed DDS files in the shader.

For that demo we did use spheres for lights, though as a merged mesh with the light centers as vertex attributes. We ended up using floating-point textures to do colour, normal and position in a single shader, packing colour into x, normal into y, depth (z) into z, and x & y into w (with a loss of precision on y).

For the newer stuff with billboards, when inside the light we switch to screen quads and try to rebuild the object coordinates using the frustum corner coordinates, with mixed success (something more like this: http://mynameismjp.wordpress.com/2010/09/05/position-from-depth-3/)

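With FUNC_ADD, the blend state above computes out = src · 1 + dst · (1 − src.alpha) per channel. A tiny plain-JavaScript sanity check of that equation (illustrative only, not code from the demo):

```javascript
// Simulates gl.blendEquation(gl.FUNC_ADD) with
// gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA), per colour channel.
function blendOneOneMinusSrcAlpha(src, dst) {
  return {
    r: src.r + dst.r * (1 - src.a),
    g: src.g + dst.g * (1 - src.a),
    b: src.b + dst.b * (1 - src.a),
  };
}

// src.a === 0 leaves the destination untouched; src.a === 1 makes the
// source fully replace the destination, with further lights added on top.
const lit = blendOneOneMinusSrcAlpha(
  { r: 0.2, g: 0.1, b: 0.0, a: 1 },
  { r: 0.5, g: 0.5, b: 0.5 }
);
console.log(lit); // → { r: 0.2, g: 0.1, b: 0 }
```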

@benaadams


benaadams Nov 24, 2012

Contributor

We are now using billboards for lights, as there are fewer vertex attributes to update when moving lights, with all lights in a single BufferGeometry (and you don't need the extra triangles; it's just the circle equation in a box, based on the distance to the light center), so I think that's more of an implementation artifact than anything else.

However, I think turning off the depth test and doing flip-sided rendering with the blend function ONE, ONE_MINUS_SRC_ALPHA would help with the missing-light artefact, though you may have to be careful with light placement.


@alteredq


alteredq Nov 24, 2012

Contributor

@benaadams Lots of useful things, thanks ;)

After playing a bit with deferred rendering, I got the impression that the biggest performance challenge will be reducing overdraw.

Geometry costs seem relatively small compared to pixel costs (or more precisely, fragment shading and what I think is called ROP).

Rendering similar numbers of objects of similar complexity would be easy for forward rendering. What I think kills it in deferred rendering is that without the z-buffer's help, and with blending on, there are suddenly many more pixels to take care of.

I remember a game developer presentation about optimizing particle effects by fitting tighter geometry shapes to the sprite images instead of using the usual rectangular billboards: you get a higher triangle count but reduce overdraw. Which is kind of in the direction of what we get with full geometry proxies.

Anyways, this is cool, many new toys to play with.


@MPanknin


MPanknin Nov 28, 2012

Work in progress on the spotlight.

http://data.redplant.de/webgl/deferred/spot/

Proxy geometry is just a fullscreen quad for now, as I'm having some problems constructing the cone geometry properly. But I'm getting closer.

@benaadams thx for sharing.


@MPanknin


MPanknin Nov 30, 2012

Added basic shadow mapping. There are, however, still some artifacts left.


@alteredq


alteredq Nov 30, 2012

Contributor

Cool cool ;). Meanwhile I already got quite far towards a WebGLDeferredRenderer.


@MPanknin


MPanknin Dec 1, 2012

Awesome, looking forward to it. Can you already share any details on what you are doing?

I found a couple of interesting things that could be done to optimize the existing code. For example, view-space normals can be en/decoded using a spheremap transform, which would reduce the number of required channels in the G-buffer to two. Here is an article describing this technique (and a couple of others):

http://aras-p.info/texts/CompactNormalStorage.html#method04spheremap

Also, it could be beneficial to store normalized view-space depth instead of clip-space depth. The reconstruction can then be done using the frustum-corner technique, which reduces the number of instructions needed for the reconstruction:

http://mynameismjp.wordpress.com/2009/03/10/reconstructing-position-from-depth/

Implementing this technique is actually straightforward; I did it once for another demo in RenderMonkey.

Also, the way the emitters are currently rendered seems a bit adventurous to me; there must be a nicer way to do it.

I'm sure there is more.

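The spheremap transform mentioned above (method #4 in the Aras article) can be sketched in plain JavaScript; this is an illustrative port of the math, not code from the demo:

```javascript
// Encode a unit view-space normal into two [0,1] values and back.
// The transform only breaks down at z = -1 (a normal pointing exactly
// away from the viewer), which front-facing surfaces never produce.
function encodeNormal([x, y, z]) {
  const p = Math.sqrt(z * 8 + 8);
  return [x / p + 0.5, y / p + 0.5];
}

function decodeNormal([ex, ey]) {
  const fx = ex * 4 - 2;
  const fy = ey * 4 - 2;
  const f = fx * fx + fy * fy;
  const g = Math.sqrt(1 - f / 4);
  return [fx * g, fy * g, 1 - f / 2];
}

const back = decodeNormal(encodeNormal([0.6, 0.0, 0.8]));
// back ≈ [0.6, 0.0, 0.8]
```

In a shader the same math would run on vec2/vec3 values, with the encoded pair written to two G-buffer channels.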

@alteredq


alteredq Dec 1, 2012

Contributor

Awesome, looking forward to it. Can you already share any details on what you are doing?

It's basically this (I just need to rename it):

https://github.com/alteredq/three.js/blob/dev/examples/js/DeferredHelper.js

I haven't solved everything yet, but the basic structure should already be there.

The idea is that you will use it like the other renderers: from the API point of view all the magic happens behind the curtain, and you just pass in a scene and camera as usual:

https://github.com/alteredq/three.js/blob/dev/examples/webgl_lights_deferred_pointlights.html

On the inside we should then have a playground for trying things. Right now there are three geometry passes, but I was planning to merge them into two. A single geometry pass is also possible in theory, but it would make for too limited a material system.

So I'm aiming for 2x float RGBA in the geometry passes. Whatever we use will need to be crammed into these.

For example what we use now:

G-buffer color RGBA

  • 1 float encoded vec3 diffuse color
  • 1 float encoded vec3 specular color
  • 1 float encoded vec3 emissive color
  • 1 float shininess

G-buffer normal RGBA

  • 1 float normal x
  • 1 float normal y
  • 1 float normal z
  • 1 float unused

G-buffer depth RGBA

  • 1 float depth
  • 1 float unused
  • 1 float unused
  • 1 float unused

This can be packed into:

G-buffer color RGBA

  • 1 float encoded vec3 diffuse color
  • 1 float encoded vec3 specular color
  • 1 float encoded vec3 emissive color
  • 1 float shininess

G-buffer depth+normal+??? RGBA

  • 1 float encoded vec3 normal
  • 1 float depth
  • 1 float unused
  • 1 float unused

Also the way the emitter's are currently rendered seems a bit adventurous to me, there must be a nicer way to do it.

I changed it to having emitters as regular scene objects with a pure emissive color. The emissive color is then handled in the full-screen light pass, similar to directional lights.

Also @won3d did his own optimizations; I'm curious, and hope we'll be able to merge these.

The biggest performance suck, though, is the overdraw while rendering the light proxies. There are supposed to be some techniques using the stencil buffer to help, but it's kind of involved; if I understood correctly, it's something like stencil shadows.

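One way a "1 float encoded vec3" slot can work is to quantize each channel to 8 bits and sum them into a single float; 24 bits fit exactly in a float32 mantissa, so a float render target holds the value losslessly. A plain-JavaScript sketch of that common scheme (not necessarily the exact encoding the example uses):

```javascript
// Pack an RGB colour (channels in [0,1]) into one float, 8 bits per
// channel. Illustrative only; the demo's encoding may differ.
function packColor([r, g, b]) {
  return Math.floor(r * 255) +
         Math.floor(g * 255) * 256 +
         Math.floor(b * 255) * 65536;
}

function unpackColor(packed) {
  const b = Math.floor(packed / 65536);
  const g = Math.floor((packed - b * 65536) / 256);
  const r = packed - b * 65536 - g * 256;
  return [r / 255, g / 255, b / 255];
}

// Round-trip is exact up to the 8-bit quantization:
const [r, g, b] = unpackColor(packColor([1.0, 0.5, 0.25]));
// r === 1, g ≈ 0.498, b ≈ 0.247
```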

@won3d


won3d Dec 4, 2012

Sorry, @alteredq, I set up my GitHub and everything with the intent of getting things merged, but for some reason work expects me to... do work. In any case, I think your packing ideas are pretty much what I had done, at least for the deferred shading.

I like the idea of having a separate WebGLDeferredShadingRenderer. I'm not sure if you should pass in a WebGLRenderer, or whether it should be created within. Perhaps the latter; that way you can interpret { antialias: true } using FXAA, since it otherwise doesn't make sense to do MSAA for deferred shading. Now, if you had a WebGLDeferredLightingRenderer, it is a different story.

Re: G-buffer packing

I'm not sure if it is worthwhile to pack normal if you're just going to leave some channels unused. That being said, I can vouch for the stereographic normal projection (it's what I did, and mentioned in @MPanknin's Aras link). Also, if you're storing depth in a floating point, you want to map it so that the near plane is 1.0 and the far plane is 0.0. You might also want to make the far plane at infinity, since that removes another source of numerical imprecision. If you want to use an 8-bit fixed point representation for depth, you should use a log mapping:

http://tulrich.com/geekstuff/log_depth_buffer.txt
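
The log mapping from that note can be sketched in JavaScript like this (a hedged version; `C` is a tunable constant trading near precision against far precision, and `w` is the positive view-space depth):

```javascript
// Logarithmic depth mapping (after tulrich's note): spreads depth
// precision more evenly over the view range than the usual
// hyperbolic z/w mapping, which clumps precision near the camera.
function logDepth(w, far, C = 1.0) {
  return Math.log(C * w + 1) / Math.log(C * far + 1);
}
```

This maps w = 0 to 0 and w = far to 1; an 8-bit fixed-point channel then quantizes the mapped value instead of raw z/w.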

For color buffers, maybe try chroma subsampling: http://graphics.cs.williams.edu/jcgt/published/0001/01/02/

That is, if you're doing 2 passes, maybe one can be low-rez (have the 6 chroma channels).
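
Going back to the normal encoding mentioned above, here's a minimal JavaScript sketch of the stereographic projection (scale/bias for fixed-point G-buffer storage omitted; the mapping is singular only at n = (0, 0, -1), which a view-space normal never reaches for front-facing geometry):

```javascript
// Stereographic projection: store a unit normal as two numbers.
function encodeNormal(nx, ny, nz) {
  return [nx / (nz + 1), ny / (nz + 1)];
}

// Inverse projection: reconstruct the unit normal from the pair.
function decodeNormal(px, py) {
  const g = 2 / (px * px + py * py + 1);
  return [g * px, g * py, g - 1];
}
```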

Re: light proxy overdraw

I mentioned over e-mail that one way to solve this would be to share the depth buffer between the g-buffer pass and the light proxy rendering. That would also remove the explicit discard in the light proxy shader, which deals with the case when the light is behind the scene. To cull lights that would only light the background, you could do something really simple with stencil (set a bit to true for each non-background pixel).
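
The stencil idea could look something like this (a hedged WebGL 1 sketch with names of my own choosing, not code from any patch in this thread): the G-buffer pass stamps a 1 on every covered pixel, and the light pass shades only stamped pixels.

```javascript
// During the G-buffer pass: write stencil value 1 wherever scene
// geometry covers a pixel, so the background stays at 0.
function tagForegroundDuringGBuffer(gl) {
  gl.enable(gl.STENCIL_TEST);
  gl.stencilFunc(gl.ALWAYS, 1, 0xff);         // always pass the test...
  gl.stencilOp(gl.KEEP, gl.KEEP, gl.REPLACE); // ...and stamp 1 on depth-pass
}

// During light-proxy rendering: shade only tagged pixels, so lights
// that would only touch the background are culled per pixel.
function cullBackgroundDuringLighting(gl) {
  gl.stencilFunc(gl.EQUAL, 1, 0xff);
  gl.stencilOp(gl.KEEP, gl.KEEP, gl.KEEP);    // leave the tag untouched
}
```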


alteredq Dec 6, 2012

Contributor

@won3d Thanks for the ideas, a lot of food for thought. Don't worry about merging, you help how you can ;)

Sidenote: it's awesome to see a paper with a live WebGL example, hope this will become a thing.


alteredq added a commit to alteredq/three.js that referenced this issue Dec 11, 2012

alteredq added a commit to alteredq/three.js that referenced this issue Dec 11, 2012

alteredq added a commit to alteredq/three.js that referenced this issue Dec 12, 2012

WebGLDeferredRenderer: flipped light proxies' face orientation and depth test.

It makes lights work when the camera is inside the light volume and somehow it's also 10% faster.

You just need to make sure the camera's far plane is far enough to encompass the whole light proxy, otherwise unlit slices of the proxy sphere appear.

(I don't really understand why flipping the depth test works, originally depth test was supposed to be just disabled)

See #2624
MPanknin Dec 17, 2012

Another work in progress. Deferred Arealights.

http://data.redplant.de/webgl/deferred/spot/deferred_arealight.html

It is based on this post by Arkano22 over at gamedev. This, however, is the deferred version, as you might have guessed.

Currently it's only diffuse lighting, no specular yet and it also does not support shadows. In the gamedev thread it is suggested to use a very blurred PCF shadowmap, but I haven't tried that. Proxy geometry is rendered as fullscreen quads (I was lazy), so there is definitely room for improvement.

@alteredq I didn't have time to look at WebGLDeferredRenderer yet, so rendering in this example is still done similarly to the first version. I'll try to merge with your code, once I find the time to read through it, as there seem to be quite a few updates and changes to the first version. ;)


bhouston Dec 17, 2012

Contributor

@MPanknin Beautiful


WestLangley Dec 17, 2012

Collaborator

@MPanknin Yes. Beautiful. Thank you for sharing your work.


alteredq Dec 17, 2012

Contributor

@MPanknin Cool cool, don't worry too much about merging; if you don't get to it, I should eventually (same as for the spotlights and shadows).

For area lights, I was already thinking about them: I guess we should have an AreaLight object, which at least for now would just be ignored by the forward renderer.


mrdoob Dec 17, 2012

Owner

@MPanknin Very sexy! ^^


alteredq added a commit to alteredq/three.js that referenced this issue Dec 19, 2012

WebGLDeferredRenderer: merged with @MPanknin's spotlights.
Todo:

- physically based specular
- wrap around lighting
- light cone proxy instead of fullscreen quad
- light distance attenuation
- move spot angle cos out of shader
- move light direction out of shader
- shadow maps

See #2624

alteredq added a commit to alteredq/three.js that referenced this issue Dec 22, 2012

WebGLDeferredRenderer: merged @MPanknin's area lights.
To be continued ...

todo:
- optimize vectors that don't need to be computed in shaders
- use material albedo
- add specular term
- wrapAround lighting (if possible)
- make attenuation parameters uniforms or defines instead of hardcoding them
- this is not using surface normal anywhere, this can't be right?
- maybe some box proxy instead of full-screen quad

See #2624
Norstep Jan 28, 2013

So I have been working with the Deferred Renderer and I can't seem to remove a deferred point light from rendering. I remove the light and any associated meshes from the scene, yet the actual light still renders! Any ideas on how to pull it out completely?


deadForce Jun 18, 2014

Any updates on this?


deadForce Jun 23, 2014

Does anyone also know this project: https://github.com/YuqinShao/Tile_Based_WebGL_DeferredShader? It seems to be implemented using three.js. I'll be experimenting with deferred rendering in the near future and will probably use this project as a base.


mrdoob Jun 23, 2014

Owner

@deadForce Interesting project! Seems like it only uses three.js for the OBJLoader though.


mflux Nov 19, 2015

2015 calling here. Is anyone actively working on a new WebGLDeferredRenderer at this point? I want to know before I dive into this rabbit hole, or, worst case, abandon THREE altogether (please no), as unfortunately my project's visual design absolutely requires deferred rendering.

Are there any known significant roadblocks from the way THREE was re-engineered from R71 to R72 that block the deferred rendering path?


bhouston Nov 19, 2015

Contributor

I think the issue is that WebGL 1.0 doesn't really support multiple render targets except through the poorly supported WEBGL_draw_buffers extension (<50% of browsers, almost no mobile devices), which makes deferred rendering hard to implement efficiently. I think everything changes with WebGL 2.0, if and when it arrives.
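
The situation described here can be feature-tested at runtime. A minimal sketch (assuming a WebGL 1 rendering context `gl`; the helper name is mine):

```javascript
// Feature-test MRT support on a WebGL 1 context: returns how many color
// attachments a single pass can write, or 1 when WEBGL_draw_buffers is
// missing (forcing the multi-pass G-buffer fill discussed in this thread).
function maxDrawBuffers(gl) {
  const ext = gl.getExtension('WEBGL_draw_buffers');
  return ext ? gl.getParameter(ext.MAX_DRAW_BUFFERS_WEBGL) : 1;
}
```

A renderer could branch on this value to pick between a single-pass and a multi-pass G-buffer fill.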


mflux Nov 20, 2015

That's good to know, thanks!

As WebGLDeferredRenderer was always part of Extras, is the plan currently to fold it into the main WebGLRenderer when better support is available? We can use WEBGL_draw_buffers as-is for our projects since our target is not mobile, and even the previous version of the renderer offered decent performance.

Anyway, if anyone has plans to do a deferred rendering pipeline down the road with THREE.js in WebGL 2.0 then I can just keep working as if nothing's changed on R71.


won3d Nov 20, 2015

If you are willing to do some jiggery-pokery to pack multiple values into a single floating-point render target, then you can do deferred rendering without a huge hit. If you have a copy of WebGL Insights, then you can check out my friend's (Nick Brancaccio of Floored, Inc., where I am also a technical advisor) chapter on their production deferred renderer, Luma.

That chapter is the second best thing Nick wrote for that book -- the best being his author blurb.


