
Ogre2: Implement Global Illumination using VCT #435

Open · 3 of 4 tasks
darksylinc opened this issue Sep 26, 2021 · 5 comments
Labels: enhancement (New feature or request)

darksylinc (Contributor) commented Sep 26, 2021

Note: this ticket is for tracking my work; I'm the one implementing it.

The work can currently be found in the matias-global-illumination branch.

Desired behavior

Obtain realtime Global Illumination when using the Ogre2 engine

Ogre2 provides various GI methods, out of which VCT (Voxel Cone Tracing) is the most reliable and accurate one for simulations.

The class hierarchy is the following:

  • GlobalIlluminationBase → Contains options shared with most GI solutions
    • GlobalIlluminationVct → Interface to handle VCT specific parameters
      • BaseGlobalIlluminationVct → ign-rendering abstraction / implementation detail
        • Ogre2GlobalIlluminationVct → Engine implementation
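
A minimal C++ sketch of that hierarchy, with illustrative names and signatures (this is not the final ign-rendering API):

```cpp
#include <cstdint>

// Options shared by most GI solutions.
class GlobalIlluminationBase
{
public:
  virtual ~GlobalIlluminationBase() = default;

  // Number of light bounces; simple enough to live in the base class.
  virtual void SetBounceCount(uint32_t _bounces) = 0;
  virtual uint32_t BounceCount() const = 0;
};

// VCT-specific parameters, e.g. the voxel resolution.
class GlobalIlluminationVct : public GlobalIlluminationBase
{
public:
  virtual void SetResolution(const uint32_t _resolution[3]) = 0;
};

// BaseGlobalIlluminationVct (shared implementation detail) and
// Ogre2GlobalIlluminationVct (the engine implementation) would
// derive from GlobalIlluminationVct.
```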

The reason for having GlobalIlluminationBase is that multiple solutions may be implemented in the future, including raytracing.

What about render engines that have natural GI (e.g. OptiX)?

It is unclear. Technically speaking, GlobalIlluminationBase is an object where users specify GI parameters. Users can create more than one if they wish to use different parameters, but only one can be active at a time.

Right now GlobalIlluminationBase only contains simple properties such as BounceCount. Render engines with natural GI, such as ray and path tracers, could probably move their bounce-count settings to this class.

Since a raytracing implementation taking advantage of VK_KHR_ray_tracing_pipeline is likely in the future, it would be wise to centralize raytracing-specific options in GlobalIlluminationBase (or in derived implementations if they are too specific); that includes raytracer/pathtracer engines like OptiX.

This would give users a familiar interface that works across multiple engines, and avoid the situation where certain settings live in a completely different place when changing engines.
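
A hedged usage sketch of that "one active solution" idea, assuming the interface above; Scene, CreateGlobalIlluminationVct() and SetActiveGlobalIllumination() are hypothetical stand-ins, not real ign-rendering calls:

```cpp
// Hypothetical usage only; names mirror the design described in this
// issue, not a final API.
void EnableVctGi(Scene &_scene)
{
  auto gi = _scene.CreateGlobalIlluminationVct();

  gi->SetBounceCount(2u);  // option shared via GlobalIlluminationBase

  // Several GI objects with different parameters may coexist,
  // but only one can be the scene's active solution at a time.
  _scene.SetActiveGlobalIllumination(gi);
}
```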

Remaining tasks

  • Write GI/VCT code
  • Light changes should notify GI to recalculate lighting
  • Make sample
  • Make unit test
darksylinc (Contributor, Author) commented:

Quack quack

[screenshots: a voxelized version of the ogre2_demo example, which is the first step to get VCT lighting right]

darksylinc (Contributor, Author) commented Oct 3, 2021

OK, it's definitely working; you can see the reflections (without VCT all you can see is the skybox; not even the floor appears in the reflections):

[three screenshots of the reflections]

And the GI contribution (see the green tint on the yellow duck):
[screenshot]

Three things I'm noticing:

  • GI reflections break a bit when the voxel resolution is non-square. Looks like an Ogre bug
    • I'm now focusing on this, since it can waste a lot of VRAM
    • Update: fixed in upstream
  • Voxelization is broken when the octant auto division isn't { 1, 1, 1 }. This is definitely an Ogre bug
  • The example has 1 directional light and 12 additional lights. Because HDR is not being used, each spot/point light is just as powerful as the sun; thus their GI contributions blow out of proportion (*)

(*) This reminded me that VCT was designed mainly for "sun light" GI contribution, but it supports all light types. To support HDR it would have to use RGBA16_FLOAT targets for the voxel lighting, which is twice as expensive (in VRAM) as the default RGBA8_UNORM target.

We don't even support switching to RGBA16_FLOAT because that's a lot of memory and there has been no need (it's easy to support; just a few more lines of code). But if you care about simulation accuracy and have a monster GPU with 16-24GB of VRAM, I guess you have the luxury of not caring.

iche033 (Contributor) commented Oct 5, 2021

But if you care about simulation accuracy and you have a monster GPU with 16-24GB of VRAM, I guess you have the luxury of not caring.

hmm that sounds a little too memory intensive. I would hold off on supporting HDR for now until there is a need for it.

The 12 lights were added to the ogre2 demo for testing a while back. Now that I think about it, I wonder if we should just have a simple Cornell box environment to demo this feature :)

darksylinc (Contributor, Author) commented:

hmm that sounds a little too memory intensive. I would hold off on supporting HDR for now until there is a need for it.

OK, without an explanation that sounds a bit overblown. It's all about user settings and what the user considers good enough.

We keep 4 voxel textures around (though we can reduce that to 1 if lights are never updated):

  1. Albedo, RGBA8_UNORM, 4 bytes
  2. Normals, RGBA8_SNORM, 4 bytes
  3. Emissive, RGBA8_UNORM, 4 bytes (this one can probably go away if we don't allow emissive materials to emit light in GI)
  4. Light calculation, RGBA8_UNORM, 4 bytes

So that's 16 bytes per voxel (or 12 if emissive goes away).

If we go to HDR, we'd need:

  1. Albedo, RGBA8_UNORM, 4 bytes
  2. Normals, RGBA8_SNORM, 4 bytes
  3. Emissive, RGBA16_HALF, 8 bytes (this one can probably go away, or be kept at 4 bytes with a hacked multiplier)
  4. Light calculation, RGBA16_HALF, 8 bytes

So that's between 16 & 24 bytes per voxel.

For a (medium-sized?) scene, the user may want to use 1024x1024x32. Maybe less would be enough, maybe more; it depends on what the simulation expects.

So: 1024x1024x32 x (16|24 bytes) = between 512MB & 768MB of VRAM, plus mipmaps (and more if we turn on anisotropic; I can't remember how that was calculated). Mipmaps add a ~1.15x overhead, so you actually need 588-883MB.

If the user thinks this is not enough and needs 4096x4096x64, the cost grows dramatically: between 16GB and 24GB (on a 24GB GPU, congrats, you've run out of memory; it can't be done).

But a compromise of 2048x2048x64: between 4GB and 6GB (+1.15x for mipmaps).
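
For reference, a minimal sketch that reproduces the numbers above (the 1.15x mip overhead is the figure quoted in this comment; per-voxel byte costs follow the texture lists):

```cpp
#include <cstdint>
#include <cstdio>

// Per-voxel costs from the lists above: LDR is 16 B (12 B without
// emissive); HDR ranges from 16 B (emissive dropped) to 24 B
// (emissive kept as RGBA16_HALF).
static double VctVramMiB(uint64_t _w, uint64_t _h, uint64_t _d,
                         uint64_t _bytesPerVoxel)
{
  const double kMipOverhead = 1.15;  // approximate cost of the mip chain
  return static_cast<double>(_w * _h * _d * _bytesPerVoxel) *
         kMipOverhead / (1024.0 * 1024.0);
}

int main()
{
  // Prints 589 and 883 MiB, matching the 588-883MB range above.
  printf("1024x1024x32 @ 16 B/voxel: %.0f MiB\n", VctVramMiB(1024, 1024, 32, 16));
  printf("1024x1024x32 @ 24 B/voxel: %.0f MiB\n", VctVramMiB(1024, 1024, 32, 24));
  printf("2048x2048x64 @ 24 B/voxel: %.0f MiB\n", VctVramMiB(2048, 2048, 64, 24));
  printf("4096x4096x64 @ 24 B/voxel: %.0f MiB\n", VctVramMiB(4096, 4096, 64, 24));
  return 0;
}
```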

So how much memory you'll need depends on how much the user thinks they need for their scene. If they're already thankful to have GI at all (i.e. vs. having nothing) and consider 128x128x32 enough, then of course no monster GPU is needed.

Note that it is perfectly valid to use one setting (e.g. 128x128x32) for real-time preview on your laptop, and then go overboard with a different setting when you need accurate results from the simulation on a powerful workstation.

iche033 (Contributor) commented Oct 6, 2021

I see, thanks for the explanation. So it sounds like we should add APIs (and SDF params) to let users specify voxel size and HDR. As for the default values, we probably should not go overboard, so that it works on less powerful machines (128x128x32?), and have HDR off.
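
A tiny sketch of what such conservative defaults might look like (names are hypothetical, not an agreed-upon API):

```cpp
#include <cstdint>

// Hypothetical defaults matching the suggestion above; conservative
// enough for less powerful machines.
constexpr uint32_t kDefaultVctResolution[3] = {128u, 128u, 32u};
constexpr bool kDefaultVctHdr = false;  // RGBA8_UNORM voxel lighting
```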
