No way to use TextureVolume on transformed texture? #418

Closed
paulmelis opened this issue May 18, 2020 · 5 comments


paulmelis commented May 18, 2020

Since using a TextureVolume is the new way of slicing, I was wondering whether the 2.1 API currently supports slicing only in a fairly limited way. TextureVolume takes a VolumetricModel reference, meaning it can only access the untransformed original volume extent, so any geometry to be colored by a TextureVolume must be located within that same untransformed extent. The docs say "[t]he volume texture type implements texture lookups based on 3D world coordinates of the surface hit point on the associated geometry." I read this as: the sample position on the transformed geometry (i.e. the Instance) is used, not the untransformed geometry (i.e. the Geometry). But that implies placement of the slicing geometry is limited by the untransformed volume extent, and a geometry cannot be moved without influencing the volume-based coloring.

The use case I was thinking of is having multiple copies of the same volume data side-by-side in camera view, with each volume instance showing a different slicing geometry within its extent (alternative use case: the same slicing geometry with different volume datasets side-by-side). But the world-space placement of the slicing geometries cannot be matched to the untransformed volume extent, so this use case is currently impossible to realize.

Just curious if my conclusion is correct here?

@johguenther (Contributor)

We are about to add 3D texture transformations for the volume texture (similar to the existing 2D texture-coordinate transforms for texture2d), for the reasons and use cases you described. During implementation we also realized that it is probably better to base the TextureVolume lookups on local object coordinates (of the geometry it is applied to) instead of world coordinates. Opinions?

@paulmelis (Author)

Ah, 3D texture transforms could indeed solve this, albeit in a somewhat convoluted way when the slicing geometry has a complex transform. The current Texture2D transform support does allow a general 2x2 matrix, but that leaves out the translation part, so setting the inverted transform of the geometry (in 3D) is generally not possible if translation cannot be specified as part of the matrix (yes, it can be given as separate parameters, but that forces one to decompose the matrix into those parts).
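To illustrate the decomposition this forces (plain numpy, not OSPRay API; `split_affine_2d` is a hypothetical helper): a general 2D affine transform in homogeneous form combines a linear 2x2 part with a translation, and an API accepting only the 2x2 matrix requires splitting the two apart.

```python
import numpy as np

def split_affine_2d(m3):
    """Split a 3x3 homogeneous 2D affine into (2x2 linear part, translation)."""
    return m3[:2, :2], m3[:2, 2]

# Example: rotate by 90 degrees, then translate by (5, -2).
affine = np.array([[0.0, -1.0,  5.0],
                   [1.0,  0.0, -2.0],
                   [0.0,  0.0,  1.0]])

linear, translation = split_affine_2d(affine)

# Applying the linear part plus the translation reproduces the full affine.
p = np.array([1.0, 2.0])
full = (affine @ np.array([*p, 1.0]))[:2]
split = linear @ p + translation
assert np.allclose(full, split)   # both give (3, -1)
```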

Regarding TextureVolume lookups in object space versus world space: neither is optimal, I would say. If the lookup is done in object space, you are forced to specify the underlying geometry directly in the right location with respect to the volume extent, since transformations are not available. In world space at least some transformation of the slice geometry can be done (e.g. orienting a slice plane within the volume), but it still locks the geometry placement to the untransformed volume.

The best of both worlds would be to specify both the slicing geometry and the volume in world space, allowing maximum freedom. But I'm sure that has both performance and design downsides. I guess being able to set a 3D texture transform on the TextureVolume, to transform into volume space, with the slicing geometry freely placeable in world space, is good enough.

@johguenther (Contributor)

The plan is to have both: lookups in object space, plus 3D transformations (as a 3x4 affine matrix, including translation). This should work nicely as long as manipulation (like orientation) of the slice geometry is done via vertex-position updates (or by updating the plane equation) and not via an ospInstance transformation (which would then need to be countered by setting the inverse as the texture-volume transform).
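A small math sketch of such a 3x4 affine, including its translation part, and the inverse one would use to counter an instance transform (plain numpy only; this illustrates the matrix form, not the actual OSPRay parameter names):

```python
import numpy as np

def to_4x4(m3x4):
    """Promote a 3x4 affine matrix to 4x4 homogeneous form."""
    return np.vstack([m3x4, [0.0, 0.0, 0.0, 1.0]])

def affine_inverse(m3x4):
    """Inverse of a 3x4 affine transform, returned as 3x4."""
    return np.linalg.inv(to_4x4(m3x4))[:3, :]

# Instance transform moving the slice geometry by (10, 0, 0);
# the translation column is exactly what a 2x2/3x3 linear matrix would lose.
M = np.array([[1.0, 0.0, 0.0, 10.0],
              [0.0, 1.0, 0.0,  0.0],
              [0.0, 0.0, 1.0,  0.0]])

# A hit point on the moved geometry, in homogeneous coordinates...
world_hit = np.array([10.5, 0.25, 0.25, 1.0])
# ...maps back into the original volume extent via the inverse.
volume_coord = affine_inverse(M) @ world_hit
assert np.allclose(volume_coord, [0.5, 0.25, 0.25])
```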

@paulmelis (Author)

How about a matrix parameter on TextureVolume that specifies how the geometry used for sampling is to be mapped into the volumetric domain? That would also need to be updated on each movement of the sampling geometry (or of the volume, when moved in world space), but if S is the object-to-world transform of the sampling geometry and V the object-to-world transform of the volume, you would only need to keep S*(V^-1) set on the texture volume (if my quick math is correct). Having only to update a matrix on TextureVolume is less of a hassle than having to update geometry at the vertex level on every interaction, as that means updating the vertex buffer, updating the normals, updating the Geometry object, etc. Doesn't the latter also involve rebuilding the BVH and such? That might get expensive for more complex sampling geometry.
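In column-vector convention the same composition reads V^-1 · S. A quick numpy check (illustrative only, not OSPRay API; the translations are made-up values) that the single composed matrix maps a point from the sampling geometry's object space into the volume's object space:

```python
import numpy as np

def translate(t):
    """4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = t
    return m

S = translate([10.0, 0.0, 0.0])  # geometry object-to-world (assumed)
V = translate([8.0, 0.0, 0.0])   # volume object-to-world (assumed)

# The single matrix to keep set on the TextureVolume.
tex_transform = np.linalg.inv(V) @ S

p_obj = np.array([0.25, 0.5, 0.5, 1.0])    # point on the sampling geometry
p_world = S @ p_obj                        # its world-space position
p_vol_direct = np.linalg.inv(V) @ p_world  # into volume space, step by step
p_vol_composed = tex_transform @ p_obj     # via the composed matrix
assert np.allclose(p_vol_direct, p_vol_composed)
```

Only `tex_transform` needs to be recomputed when either S or V changes, which is the point of the suggestion: no vertex-level geometry updates.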

@johguenther (Contributor)

Volume texture look-ups are now in local object space, and materials have 3D texture transformations.
