Proposal to deprecate some generated texture coordinates in IfcTextureCoordinateGenerator #157
Comments
Before I write a more substantial proposal for this, I've reached out to Michalis from X3D about different projection methods for coordinates (box, flat, etc.): https://community.osarch.org/discussion/comment/10512/#Comment_10512
IFC has two primary use cases for generated coordinates:
Out of the box, the default texture coordinates generated for arbitrary shapes (IndexedFaceSets) in X3D try to be "clever" and have rather bespoke rules for choosing a texture orientation and how it is clipped. The rules are documented here. However, for specific shapes that aren't arbitrary, like rectangles, spheres, etc., X3D defines special texture mapping rules, similar to IFC.

For the first IFC use case of an image on a plane, the X3D equivalent is the texture mapping shown in their Rectangle2D geometry definition here. In modern tools like Blender, this corresponds to this setup (note how "Generated" is used and the texture projection type is set to "Flat"):

However, when we get to the general problem of how to get an arbitrary image to "naturally" wrap around an arbitrary shape, there is no known general solution in the X3D / glTF world. Instead, X3D specifies texture coordinate rules for specific shapes (e.g. spheres, extrusions). Note that IFC has come to a similar conclusion: it has rules for spheres, blocks, and extrusions. glTF does not understand specific shapes, only generic meshes, and so does not have this solution.

End-user artistic applications attempt to add extra cleverness when faced with arbitrary shapes. For example, Blender has "Box", "Sphere", and "Tube" projection settings. There are many variations of this cleverness and they are not generally interoperable between applications. X3D does not attempt to specify this clever behaviour due to its inconsistency across end-user apps; it's very difficult for implementers to replicate. I suspect IFC would come to the same conclusion: should we make this complexity the responsibility of the implementers, or the responsibility of the model author in their platform, who then generates UVs? Another advantage of not specifying this and just relying on UVs is that the UVs remain locked in case the model is ever animated in the future.
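To make the "Flat" projection idea above concrete, here is a minimal sketch of what a planar texture coordinate generator might do; this is an illustration, not any spec's algorithm, and the function name, the choice of the XY plane, and the stretch-to-bounding-box behaviour are all assumptions:

```python
# Hypothetical sketch of "flat" (planar) generated texture coordinates,
# similar in spirit to Blender's "Flat" projection: project each vertex
# onto the XY plane and normalise against the shape's bounding box, so
# the texture stretches across the full extent of the geometry.

def flat_uvs(vertices):
    """Map 3D vertices to UVs by dropping Z and normalising XY to [0, 1]."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    min_x, min_y = min(xs), min(ys)
    span_x = (max(xs) - min_x) or 1.0  # guard against degenerate (zero-width) shapes
    span_y = (max(ys) - min_y) or 1.0
    return [((v[0] - min_x) / span_x, (v[1] - min_y) / span_y) for v in vertices]

# A 2x1 quad maps to the full UV square (note the aspect ratio is not preserved):
quad = [(0, 0, 0), (2, 0, 0), (2, 1, 0), (0, 1, 0)]
print(flat_uvs(quad))  # [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```

Note that a non-square shape still fills the whole [0, 1] UV range, which is exactly the stretching-versus-cropping question discussed below.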
If it is generated, what happens when the object moves? Right now, IFC doesn't specify animated objects. In the future, especially with transport domains and when 4D gets smarter, who knows. Therefore, I would propose that IFC follow X3D's direction and:
To do this, change this paragraph in IfcSurfaceStyleWithTextures:
First add this sentence:
Then, there are two options to add:
Option 1: The texture stretches to fit the input geometry. Easy :D (note: I prefer this - also I reckon an architect would prefer this)
Option 2: The texture stretches to the longest dimension of the input geometry and is cropped on the other axis when the aspect ratios of the texture and geometry do not match. More complex, but prevents warping of the texture.
(Note: prose by Michalis from X3D)
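To make the difference between the two options concrete, here is a hedged sketch of the UV mapping each one implies; all function names are hypothetical, and Option 2 is interpreted as a uniform scale along the longest geometry dimension with a centred crop on the other axis:

```python
def stretch_uvs(x, y, w, h):
    """Option 1: texture stretches to fit a w x h shape; aspect may distort."""
    return (x / w, y / h)

def crop_uvs(x, y, w, h, tex_aspect=1.0):
    """Option 2: texture scales uniformly along the longest geometry
    dimension; on the other axis only a centred band of the texture
    is used, i.e. the texture is cropped rather than warped."""
    geo_aspect = w / h
    if geo_aspect >= tex_aspect:
        # Geometry is wider than the texture: U spans [0, 1], V is cropped.
        u = x / w
        v = 0.5 + (y / h - 0.5) * (tex_aspect / geo_aspect)
    else:
        # Geometry is taller than the texture: V spans [0, 1], U is cropped.
        v = y / h
        u = 0.5 + (x / w - 0.5) * (geo_aspect / tex_aspect)
    return (u, v)

# Square texture on a 2x1 face:
print(stretch_uvs(2, 1, 2, 1))  # (1.0, 1.0) - full texture, stretched 2:1
print(crop_uvs(0, 0, 2, 1))     # (0.0, 0.25)
print(crop_uvs(2, 1, 2, 1))     # (1.0, 0.75) - only the middle V band is shown
```

With Option 2, the square texture on the 2x1 face uses only V in [0.25, 0.75], i.e. the middle half of the image, which is the cropping behaviour described above.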
Can we get some votes in for Option 1 or Option 2 so this can move to decided? My personal preference is Option 1: many users would likely get the exact aspect ratio wrong, so stretching to fit is a good assumption that makes things "look right" out of the box, and if they actually cared about it, they would properly model their geometry or edit their image anyway.
I agree, stretch to fit.
+1 for option 1 |
Cheers, upgrading to decided so that in the next few days I can help implement this. |
Currently we mark enum items as deprecated using solely the Markdown documentation. I propose to keep it like that. We can't model it in EXPRESS, and changelogs for enums are currently calculated based on the EXPRESS, so there isn't any real upside currently in expressing this in UML. So I'm not attaching the
If we get the time I will put in some images and examples. |
Current situation
Proposal
The ambiguity of the two camera coordinates should probably be resolved to choose one or the other.
In addition, the other coordinates are outdated or kinda crazy :) Maybe the enum should just be culled down to "COORD" plus one more value for the camera coordinates?