Implement a more efficient voxel renderer #14674

Open
pchote opened this Issue Jan 6, 2018 · 3 comments

Member

pchote commented Jan 6, 2018

Key points:

  • Render directly into the screen buffer, without using an intermediate FBO. Once #14415 is done, the rendering can be split into opaque and translucent passes to help reduce context switches (especially when the artwork is batched).
  • Abandon the idea of a generic ModelRenderer that works for both voxels and triangle-based models - they need to be treated differently if we want good performance.
  • Load voxel data into 3D textures instead of creating geometry. Maintain separate textures for colour vs normal indices. The first implementation could use one texture pair per voxel to defer the packing problem, but the rest of the plumbing should happily support batched artwork.
  • VoxelRenderer writes the front faces of the voxel's bounding box into a VBO. The 6 metadata floats on the vertex store the (x, y, z) texture coordinate for each vertex and the (dx, dy, dz) line-of-sight vector through the texture volume (calculated by inverting the rotations that are applied to the bounding box). TODO: Need to work out a way to constrain the back-face position so that the fragment shader knows when to stop scanning - encoding in the length of the line-of-sight vector would be best, but will need to split the front faces by the edges of the back faces so that the GPU can interpolate both sets of coordinates correctly (doable but fiddly).
  • The voxel vertex shader passes the metadata as two vec3s to the fragment shader.
  • The fragment shader has immediate access to the line of sight through the texture volume. Follow http://www.cse.yorku.ca/~amana/research/grid.pdf to find the front-most voxel (a sketch is given after this list). The normal data is fetched from the same position in the normal texture, and the depth offset is simply the length of the scanned line-of-sight vector.
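
For illustration, a minimal GLSL sketch of the fragment-shader grid walk following the Amanatides & Woo paper, assuming one 3D colour-index texture per voxel as proposed above. All sampler, uniform, and varying names are placeholders (not from the OpenRA codebase), and the normal fetch and lighting are omitted for brevity:

```glsl
#version 120

// Illustrative names only; index 0 is assumed to mean an empty voxel.
uniform sampler3D ColorIndices;  // palette index per voxel
uniform sampler2D Palette;
uniform vec3 VolumeSize;         // model dimensions in voxels
uniform float PaletteRow;        // normalized palette row for this instance
uniform float DepthScale;        // scales the walked distance into depth units

varying vec3 TexCoord;           // (x, y, z) entry point on the front face, in [0, 1]
varying vec3 LineOfSight;        // (dx, dy, dz) through the volume; its length bounds the scan

void main()
{
    // Work in voxel units so each grid step crosses exactly one voxel boundary
    vec3 p = TexCoord * VolumeSize;
    vec3 los = LineOfSight * VolumeSize;
    float maxT = length(los);
    vec3 d = los / maxT;

    // Amanatides & Woo setup: current voxel, step direction, ray length to the
    // next boundary on each axis (tMax) and per voxel crossing (tDelta)
    vec3 v = floor(p);
    vec3 s = sign(d);
    vec3 tDelta = abs(vec3(1.0) / d);  // inf on axes where d is 0: never stepped
    vec3 tMax = (step(vec3(0.0), d) + s * (v - p)) * tDelta;

    float t = 0.0;
    for (int i = 0; i < 256; i++)  // hard cap; the real bound is the volume diagonal
    {
        float c = texture3D(ColorIndices, (v + 0.5) / VolumeSize).r;
        if (c > 0.0)
        {
            // Hit: c doubles as the palette u coordinate in this sketch,
            // and the depth offset is the distance walked so far
            gl_FragColor = texture2D(Palette, vec2(c, PaletteRow));
            gl_FragDepth = gl_FragCoord.z + t * DepthScale;
            return;
        }

        // Step into the voxel with the nearest boundary crossing
        if (tMax.x < tMax.y && tMax.x < tMax.z) { v.x += s.x; t = tMax.x; tMax.x += tDelta.x; }
        else if (tMax.y < tMax.z)               { v.y += s.y; t = tMax.y; tMax.y += tDelta.y; }
        else                                    { v.z += s.z; t = tMax.z; tMax.z += tDelta.z; }

        if (t > maxT)
            break;  // left through the back face without hitting anything
    }

    discard;
}
```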

Member

pchote commented Jan 6, 2018

Implementing this should fix #10534 and #11083.

Member

pchote commented Mar 4, 2018

> but will need to split the front faces by the edges of the back faces so that the GPU can interpolate both sets of coordinates correctly (doable but fiddly).

https://github.com/mono/sysdrawing-coregraphics/blob/master/Utilities/ClipperLib/clipper.cs implements polygon intersection, which would make this much simpler.

Member

pchote commented Apr 30, 2018

There are two problems with the proposal above:

  1. The current vertex format stores 9 floats, which isn't enough (I forgot about palettes).
  2. After discussions with @chrisforbes, it seems clear that 3D textures are a no-go due to driver issues.

I've come up with a slightly convoluted way to split the required data across our normal vertex, sprite, and palette locations. This means we could in principle render sprites and voxels using the same shader if we are happy to deal with wildly different branches in the fragment shader.

Add three new floats to the vertex format, then define the instance-specific data as follows (a vertex shader sketch is given after the list):

X, Y, Z: position in the world (as normal).
S, T, U: fragment coordinate on the front face of the voxel bounds.
V, P, C: rotation of the voxel in the world (this defines both the line-of-sight vector through the volume and the angle to the light source).
J, K (new): metadata palette row, column.
L (new): color palette row.
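
A minimal GLSL sketch of how this could look on the vertex side; the attribute, uniform, and varying names are assumptions for the example:

```glsl
#version 120

// Illustrative names only; the 12 floats map onto four vec3 attributes
attribute vec3 aPosition;  // X, Y, Z: position in the world
attribute vec3 aFrontFace; // S, T, U: fragment coordinate on the front face
attribute vec3 aRotation;  // V, P, C: rotation of the voxel in the world
attribute vec3 aPalettes;  // J, K: metadata palette row/column; L: color palette row

uniform mat4 ViewProjection;

varying vec3 FrontFace;
varying vec3 Rotation;
varying vec3 Palettes;

void main()
{
    gl_Position = ViewProjection * vec4(aPosition, 1.0);

    // Forward the instance data untouched; the fragment shader does the work
    FrontFace = aFrontFace;
    Rotation = aRotation;
    Palettes = aPalettes;
}
```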

The voxel data is flattened into a 1D buffer that is concatenated with the others in a sheet layer. The first three bytes are reserved to hold the width, length, and height of the model, followed by the color data and then the normal data.

The offset to this data (i.e. the u,v,channel coord of the first pixel) is stored together with the normals palette row in the "metadata palette" referenced by the J,K coords in the vertex data. The buffer is read by scanning across the sheet, stepping to the next row when the edge of the texture is reached, for a length of 2 * W * L * H + 3 per voxel section.
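
To make the scanning rule concrete, here is a hypothetical GLSL helper that resolves a linear offset within a section's buffer into a sheet texel and channel, assuming four byte-sized channels per texel; the names and the voxel ordering are assumptions:

```glsl
// Hypothetical names; VoxelSheet holds the concatenated voxel buffers
uniform sampler2D VoxelSheet;
uniform vec2 SheetSize;  // sheet dimensions in texels

// origin: (u, v) texel of the buffer's first pixel, in texels
// channel: channel index (0-3) of the buffer's first value
// offset: linear offset into the section's 2 * W * L * H + 3 values
float FetchVoxelValue(vec2 origin, float channel, float offset)
{
    // Total channel index relative to the origin texel, split into whole
    // texels to advance and the channel within the target texel
    float index = channel + offset;
    float texels = floor(index / 4.0);
    float c = index - 4.0 * texels;

    // Scan across the sheet, wrapping to the next row at the texture edge
    float x = origin.x + texels;
    vec2 uv = vec2(mod(x, SheetSize.x), origin.y + floor(x / SheetSize.x));

    vec4 p = texture2D(VoxelSheet, (uv + 0.5) / SheetSize);
    if (c < 0.5) return p.r;
    if (c < 1.5) return p.g;
    if (c < 2.5) return p.b;
    return p.a;
}

// With an assumed x, then y, then z ordering, the color index of voxel
// (x, y, z) would sit at offset 3 + x + W * (y + L * z), and its normal
// index at the same offset plus W * L * H.
```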

When rendering a fragment we use J,K to look up the palette and then use that to look up the voxel data header. The header is used to define the mapping from voxel x,y,z coordinates to the color and normals index in the sprite sheet. We then step along the vector defined by S,T,U,V,P,C to find the first non-empty voxel, and then use the mappings to look up the color and normal values from the palette. The normal vector is then rotated by the inverse of V,P,C to get it into screen space before dotting it with the global light vector stored in a uniform. The lighting-adjusted color is written into FragColor, and Z + <length walked along the vector> is written into FragDepth.
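
A sketch of that final shading step, assuming V, P, C are rotation angles in radians applied in Z-Y-X order; the function and uniform names are illustrative, not a definitive implementation:

```glsl
uniform sampler2D Palette;
uniform vec3 LightDirection;  // global light vector in screen space
uniform float DepthScale;     // scales the walked distance into depth units

mat3 RotX(float a) { float c = cos(a), s = sin(a); return mat3(1.0, 0.0, 0.0,  0.0, c, s,  0.0, -s, c); }
mat3 RotY(float a) { float c = cos(a), s = sin(a); return mat3(c, 0.0, -s,  0.0, 1.0, 0.0,  s, 0.0, c); }
mat3 RotZ(float a) { float c = cos(a), s = sin(a); return mat3(c, s, 0.0,  -s, c, 0.0,  0.0, 0.0, 1.0); }

// color: palette u coordinate of the hit voxel; normal: its decoded normal
// rotation: the V, P, C angles; colorRow: the L palette row; t: distance walked
void Shade(float color, vec3 normal, vec3 rotation, float colorRow, float t)
{
    // The inverse of a rotation matrix is its transpose: rotate the stored
    // normal back into screen space before lighting
    mat3 world = RotZ(rotation.x) * RotY(rotation.y) * RotX(rotation.z);
    vec3 n = transpose(world) * normalize(normal);
    float light = max(dot(n, LightDirection), 0.0);

    vec4 c = texture2D(Palette, vec2(color, colorRow));
    gl_FragColor = vec4(c.rgb * light, c.a);
    gl_FragDepth = gl_FragCoord.z + t * DepthScale;  // Z + length walked
}
```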

@chrisforbes do you see any issues with this?
