
To-do list for upgrade command #5

Open
lilleyse opened this issue May 3, 2018 · 4 comments

@lilleyse
Contributor

lilleyse commented May 3, 2018

Tentative changes for 1.0 that should be handled in the upgrade command

Progress is in https://github.com/AnalyticalGraphicsInc/3d-tiles-tools/tree/2.0-tools

@javagl javagl transferred this issue from CesiumGS/3d-tiles-validator Oct 3, 2022
javagl added a commit that referenced this issue Apr 14, 2023
Tileset processing with pipelines
@javagl
Contributor

javagl commented Apr 17, 2023

This issue originally referred to the upgrade from "pre-1.0" to "1.0" tilesets. Some of the bullet points have therefore been addressed already, are tracked in dedicated issues, or have become obsolete (the latter also insofar as gltf-pipeline does the actual upgrade of the glTF data).

The question of the scope of the upgrade command is still relevant, though, and maybe now more than before, because it may now refer to upgrading 3D Tiles 1.0 to 1.1.

There is a (somewhat preliminary) set of options in the TilesetUpgrader class that shows what the upgrade currently does, and this includes some of the bullet points from above.

Beyond that, one could consider extending the upgrade functionality to cover more of what is described in the 3D Tiles 1.0 to 1.1 migration guide for tile formats.

@javagl
Contributor

javagl commented Apr 30, 2023

The migration guide gives a few hints about how to "emulate" several features of the previous tile formats in glTF. I'll try to start sorting out which of these steps could be part of an automated process.

On the highest level, the entry point would be to add this functionality to the TilesetUpgrader. It should be possible to enable/disable the upgrade based on the input type, so there would probably be upgradeB3dmToGlb, upgradeI3dmToGlb, and upgradePntsToGlb flags in the UpgradeOptions in the TilesetUpgrader.

(The case of CMPT is special in many ways, and has to be discussed separately)

The actual upgrade functionality would then be implemented in the TilesetUpgrader. This upgrade will include modifications to the tileset JSON. These modifications will at least be things like changing the content.uri from .b3dm to .glb. But it may be even more, for example, when converting a (single-content) .cmpt into a (multiple-contents) list of .glb contents.

(The latter has some constraints, though. It may not be done for implicit tile content, for example....)

A large part of the infrastructure for this kind of modification already exists. For example, the b3dmToGlb functionality that is currently part of the pipeline content stages could also be applied to do more than just extracting the GLB from the B3DM. So on this level, many migration steps can be boiled down to the seemingly trivial core:

"The migration function receives a B3DM/PNTS/I3DM buffer. It creates an equivalent GLB buffer"

And what "equivalent" means is sorted out in the following sections.


Feature detection and error handling

A large part of the features from the legacy tile formats can be migrated automatically. In the first versions, some things will not be migrated yet, but will be in future versions. But there are also cases that can "never" be migrated automatically. One overarching question is how to handle that. A high-level, straightforward approach could be to examine the input data and its Feature Table and Batch Table, and if it contains anything that cannot be migrated, just print a warning and leave the content unmodified.
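A minimal sketch of that approach, assuming PNTS input and a plain Feature Table JSON object. The set of "supported" semantics here is only illustrative, and canMigrate is a hypothetical helper, not actual TilesetUpgrader API:

```typescript
// Illustrative set of PNTS Feature Table semantics that a first version
// of the migration might handle (not a complete or authoritative list).
const supportedPntsSemantics = new Set([
  "POSITION", "NORMAL", "RGB", "RGBA", "CONSTANT_RGBA",
  "POINTS_LENGTH", "RTC_CENTER", "BATCH_ID", "BATCH_LENGTH",
]);

// Check the Feature Table JSON: if any semantic is not supported,
// warn and signal that the content should be left unmodified.
function canMigrate(featureTableJson: object): boolean {
  const unsupported = Object.keys(featureTableJson).filter(
    (key) => !supportedPntsSemantics.has(key)
  );
  for (const key of unsupported) {
    console.warn(`Cannot migrate semantic ${key}, leaving content unmodified`);
  }
  return unsupported.length === 0;
}
```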


The RTC_CENTER

For all tile formats, the migration guide says

The RTC_CENTER can be added to the translation component of the root node of the glTF asset.

There already is a function for replacing the CESIUM_RTC extension in a glTF with such a node transform (in GltfUtilities). Extending that into a function like applyRootTransform(gltf, center) would be trivial, and then it could be made part of the upgrade command. Whatever the input tile format is: we'd parse the Feature Table JSON, extract the RTC_CENTER, and pass it to that function.
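A minimal sketch of what such an applyRootTransform(gltf, center) function could look like, assuming the glTF is given as a plain JSON object (this is not the actual GltfUtilities API). It inserts a new root node carrying the RTC_CENTER as its translation, with the previous scene roots as its children:

```typescript
// Minimal shape of the glTF JSON parts that this sketch touches.
type Gltf = {
  nodes?: { children?: number[]; translation?: number[] }[];
  scenes?: { nodes?: number[] }[];
  scene?: number;
};

// Hypothetical helper: make `center` the translation of a new root node.
function applyRootTransform(gltf: Gltf, center: [number, number, number]): void {
  const nodes = (gltf.nodes = gltf.nodes ?? []);
  const scene = (gltf.scenes ?? [])[gltf.scene ?? 0];
  if (!scene || !scene.nodes) {
    return;
  }
  // Create a new node whose children are the previous scene roots,
  // and make it the only root of the scene.
  const rootIndex = nodes.length;
  nodes.push({ children: [...scene.nodes], translation: [...center] });
  scene.nodes = [rootIndex];
}
```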

The BATCH_ID and Batch Table

This applies to all tile formats as well. The actual ID can be translated into the EXT_mesh_features extension. This should not be much more than translating the _BATCHID into a _FEATURE_ID_0 attribute 🤞
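A sketch of that translation for a single mesh primitive, assuming plain JSON objects. batchIdToFeatureId is a hypothetical helper, and the featureCount would have to be derived from the actual data (e.g. the Batch Table's BATCH_LENGTH):

```typescript
// Minimal shape of a glTF mesh primitive for this sketch.
type Primitive = {
  attributes: { [name: string]: number };
  extensions?: { [name: string]: unknown };
};

// Rename the legacy _BATCHID attribute to _FEATURE_ID_0, and declare it
// as a feature ID via EXT_mesh_features ("attribute: 0" refers to the
// attribute set index of _FEATURE_ID_0).
function batchIdToFeatureId(primitive: Primitive, featureCount: number): void {
  const accessor = primitive.attributes["_BATCHID"];
  if (accessor === undefined) {
    return;
  }
  delete primitive.attributes["_BATCHID"];
  primitive.attributes["_FEATURE_ID_0"] = accessor;
  primitive.extensions = primitive.extensions ?? {};
  primitive.extensions["EXT_mesh_features"] = {
    featureIds: [{ attribute: 0, featureCount: featureCount }],
  };
}
```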

Translating the contents of the Batch Table into glTF may be a bit more tricky. Broadly speaking, it would involve translating the current Batch Table JSON information - specifically, the "binary body references" - into a 3D Metadata representation, and the Batch Table binary data into property attributes, so that they can be represented with the EXT_structural_metadata extension.

It should be possible to implement that somewhat generically, juggling only with JSON and buffers, and I don't foresee any "large" technical hurdles here - roughly speaking: EXT_structural_metadata is more powerful/expressive than the Batch Table, so it should be possible to convert everything without losing information, with two caveats:

  • The batch table may contain data in plain JSON form, and it is not necessarily possible to determine the data type in the strictest sense. Some things can be detected, e.g. whether something is an array of strings or numbers. But for numbers, the decision of whether something is modeled as an INT8 or a FLOAT64 has to be based on "guesses" from the actual values. This may involve pseudocode functions like allAreIntegers(data), allAreIntegersBetween(data, UINT8.min, UINT8.max), and so on.
  • There is the 3DTILES_batch_table_hierarchy extension. I don't know how widely it is used, and therefore how important it is to be able to migrate it, and whether it is possible to translate it without losses (and without too obscure quirks)
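The guessing functions from the first point could be sketched as follows. The names follow the pseudocode above, and the fallback policy (widening to INT32, then FLOAT64) is only one possible choice:

```typescript
// True if every value is a mathematical integer.
function allAreIntegers(data: number[]): boolean {
  return data.every((v) => Number.isInteger(v));
}

// True if every value is an integer within [min, max].
function allAreIntegersBetween(data: number[], min: number, max: number): boolean {
  return allAreIntegers(data) && data.every((v) => v >= min && v <= max);
}

// Guess the narrowest 3D Metadata component type that fits the values.
function guessComponentType(data: number[]): string {
  if (allAreIntegersBetween(data, 0, 255)) return "UINT8";
  if (allAreIntegersBetween(data, -128, 127)) return "INT8";
  if (allAreIntegersBetween(data, 0, 65535)) return "UINT16";
  if (allAreIntegersBetween(data, -32768, 32767)) return "INT16";
  if (allAreIntegersBetween(data, -2147483648, 2147483647)) return "INT32";
  return "FLOAT64";
}
```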

There actually already is code for translating batch tables to glTF metadata in cesium-native, at https://github.com/CesiumGS/cesium-native/blob/26f54b617984ec3c5c9015aa2927c7fe2688120e/Cesium3DTilesSelection/src/BatchTableToGltfFeatureMetadata.cpp . I haven't looked at the details, in terms of how complete it is, or even which glTF extension it is targeting - it sounds like it's not necessarily the latest version of the proposed glTF extensions - but it also includes code that mentions Batch Table Hierarchies, so it might be a good start for getting an idea about how this can be translated at all.

PNTS to GLB

This could be easy for the case of plain point cloud data. For quantized/compressed data, there are some possible (incremental) stages that could be supported.

It's hard to estimate how many PNTS files in the wild actually use quantized positions, normals, or colors, or certain forms of compression. But assuming that a considerable number of PNTS files do not use quantization or compression, this conversion could be a good candidate for the first upgrade functionality that is offered.

Plain (unquantized/uncompressed) PNTS

Looking at the point semantics:

  • POSITION, NORMAL: Standard glTF attributes
  • RGBA, RGB: Standard glTF attributes with VEC4 or VEC3
  • CONSTANT_RGBA (global): Could be emulated with a standard glTF material

So these could be handled relatively easily.
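The per-point part of that mapping could be as simple as the following sketch (COLOR_0 being the standard glTF vertex color attribute; this table is illustrative, not the actual implementation):

```typescript
// Mapping from plain PNTS point semantics to the glTF attribute and
// accessor type they would become (CONSTANT_RGBA is global and would
// instead be emulated with a glTF material).
const pntsToGltfAttribute: {
  [semantic: string]: { attribute: string; type: string };
} = {
  POSITION: { attribute: "POSITION", type: "VEC3" },
  NORMAL: { attribute: "NORMAL", type: "VEC3" },
  RGB: { attribute: "COLOR_0", type: "VEC3" },
  RGBA: { attribute: "COLOR_0", type: "VEC4" },
};
```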

Quantized data in PNTS

The POSITION_QUANTIZED, RGB565, and NORMAL_OCT16P point semantics cannot directly be represented in standard glTF. A possible "roadmap" for supporting them could be:

  • (Stage 1 - trivial): Print a warning and leave the content unmodified
  • (Stage 2 - easy): Decode the data and store it as standard glTF data (may increase the file size)
  • (Stage 3 - efforts TBD): Convert positions and (maybe) normals to use the KHR_mesh_quantization extension. For normals, one could also consider the EXT_meshopt_compression octahedral filter

(Different test cases for point clouds that contain these features can be found via https://github.com/CesiumGS/cesium/blob/db2669aae149e965a3578fd0343384a33f83543c/Specs/Scene/PointCloud3DTileContentSpec.js#L35-L70)
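The "Stage 2" decoding can be sketched with the formulas from the PNTS specification: positions are dequantized with the quantized volume offset/scale, and RGB565 packs red/green/blue into 5/6/5 bits of a 16-bit value (the function names are only illustrative):

```typescript
// Dequantize a POSITION_QUANTIZED value (uint16 components) using
// QUANTIZED_VOLUME_OFFSET and QUANTIZED_VOLUME_SCALE:
// POSITION = POSITION_QUANTIZED * scale / 65535 + offset
function dequantizePosition(
  quantized: [number, number, number],
  offset: [number, number, number],
  scale: [number, number, number]
): [number, number, number] {
  return [
    (quantized[0] * scale[0]) / 65535 + offset[0],
    (quantized[1] * scale[1]) / 65535 + offset[1],
    (quantized[2] * scale[2]) / 65535 + offset[2],
  ];
}

// Unpack a 16-bit RGB565 color (5 bits red, 6 bits green, 5 bits blue)
// into normalized floats in [0, 1].
function decodeRgb565(value: number): [number, number, number] {
  const r = (value >> 11) & 0x1f;
  const g = (value >> 5) & 0x3f;
  const b = value & 0x1f;
  return [r / 31, g / 63, b / 31];
}
```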

Compressed data in PNTS

The PNTS format supports special forms of compression - including Draco compression via 3DTILES_draco_point_compression. This is currently not supported in glTF, due to KhronosGroup/glTF#1809. Similar to the quantized case, there are possible stages of support:

  • (Stage 1 - trivial): Print a warning and leave the content unmodified
  • (Stage 2 - easy): Decode the data and store it as standard glTF data (may increase the file size)
  • (Stage 3 - efforts TBD): Decode the (Draco-compressed) PNTS data and store it using the EXT_meshopt_compression extension in glTF

For both the compressed and quantized cases, one could consider building some infrastructure that may be useful, even outside of the context of ~"trying to upgrade some particular data". Having generic functions like

```typescript
const points: IterableIterator<Cartesian3> = readPoints(source);
writePoints(target, points);
```

that hide the question of whether the "source" and "target" are quantized or compressed could be useful in other areas as well. The degree of generalization (or how much effort to put into that) would have to be decided.
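For the plain (unquantized) case, such a readPoints function could be as simple as a generator over a flat position array. This is only a sketch; Cartesian3 here is a stand-in type, not an actual dependency, and a quantized or compressed source would get its own implementation with the same signature:

```typescript
// Stand-in for whatever 3D vector type the tools would actually use.
type Cartesian3 = { x: number; y: number; z: number };

// Yield one point per three consecutive floats of a flat position array.
function* readPoints(positions: Float32Array): IterableIterator<Cartesian3> {
  for (let i = 0; i + 2 < positions.length; i += 3) {
    yield { x: positions[i], y: positions[i + 1], z: positions[i + 2] };
  }
}
```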

B3DM to GLB

Beyond the batch IDs and Batch Tables (mentioned above), there is not much that has to be converted here.

I3DM to GLB

The main chunk of work for this upgrade is covered with the Batch Table conversion (mentioned above).

The actual instancing can be translated into EXT_mesh_gpu_instancing.
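The target structure could roughly look like this sketch, where the accessor indices are placeholders for accessors that would be built from the I3DM Feature Table data:

```typescript
// A glTF node carrying per-instance transforms via EXT_mesh_gpu_instancing.
// The accessor indices (0, 1, 2) are placeholders.
const instancedNode = {
  mesh: 0,
  extensions: {
    EXT_mesh_gpu_instancing: {
      attributes: {
        TRANSLATION: 0, // VEC3 float, one per instance (from POSITION)
        ROTATION: 1, // VEC4 quaternion (from NORMAL_UP/NORMAL_RIGHT)
        SCALE: 2, // VEC3 (from SCALE or SCALE_NON_UNIFORM)
      },
    },
  },
};
```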

Similar to the "Quantized data in PNTS" section: there are some properties that are quantized, and there are the same three possible "stages" for upgrading them:

  • (Stage 1 - trivial): Print a warning and leave the content unmodified
  • (Stage 2 - easy): Decode the data and store it as glTF with EXT_mesh_gpu_instancing (may increase the file size)
  • (Stage 3 - efforts TBD): Convert to a different compression method...

The last one refers to EXT_meshopt_compression: I'll have to read the spec here to see whether this can actually be applied to the data that is used for the transforms in EXT_mesh_gpu_instancing (i.e. whether it's possible to combine these extensions in that manner)

CMPT to GLB

Whatever is done for CMPT, it has to be applied "recursively": For example, the CMPT may contain one CMPT (with an I3DM and a B3DM), and another B3DM. The I3DM and B3DMs would have to be migrated first, converting them into GLBs. The result would always be a list of GLBs.

Then, broadly, there are two possible ways of handling this list of GLBs:

  • Store the resulting GLBs as multiple contents
  • Try to create a single GLB from these GLBs

The first one would probably be pretty easy: The functions for extracting all GLBs from a CMPT are already there (as part of the cmptToGlb command). The functions for converting a single (explicit) tile content into multiple contents would also be easy to add into the tilesetProcessing classes.
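The recursive extraction itself can be sketched directly from the CMPT binary layout (a 16-byte header with the number of inner tiles at byte offset 12; each inner tile stores its own byteLength at byte offset 8 of its header). extractInnerTiles is a hypothetical helper, not the actual cmptToGlb implementation:

```typescript
// Recursively collect the non-CMPT inner tile buffers (b3dm/i3dm/pnts)
// of a CMPT, flattening nested CMPTs along the way.
function extractInnerTiles(cmpt: Buffer, result: Buffer[] = []): Buffer[] {
  const tilesLength = cmpt.readUInt32LE(12);
  let offset = 16; // skip the 16-byte CMPT header
  for (let i = 0; i < tilesLength; i++) {
    const magic = cmpt.toString("utf8", offset, offset + 4);
    const byteLength = cmpt.readUInt32LE(offset + 8);
    const tile = cmpt.subarray(offset, offset + byteLength);
    if (magic === "cmpt") {
      extractInnerTiles(tile, result); // recurse into nested CMPTs
    } else {
      result.push(tile); // b3dm, i3dm, or pnts: migrate to GLB afterwards
    }
    offset += byteLength;
  }
  return result;
}
```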

But the second option - creating a merged GLB - would be preferable, for two reasons:

  • If someone has created a CMPT with 100 B3DMs, then the result of the upgrade should not be a tile that uses 100 contents (causing 100 web requests when loading). A single merged GLB would be closer to what has been modeled with the CMPT
  • The resulting (single) GLB would allow applying this approach even for implicit tilesets

(The problem with converting CMPT in implicit tiling into multiple GLB contents is that the CMPTs that are referred to by a single template URI may eventually contain different numbers of GLBs. It is not clear how many GLB template URIs there should be)

However, it's not entirely trivial to "merge arbitrary GLBs". It may be trivial in most cases that are relevant for Cesium/3D Tiles: there's usually a bunch of mesh primitives, materials, and textures, and they can probably just be shoved into a single asset without hassle. But as soon as there is something like animations, morph targets, or multiple scenes, it is probably not possible to do this in an automated way (at least not without many assumptions or some form of input about what the resulting structure should be).

If this was supposed to be tackled by merging the GLBs into one:

  • Should that merging functionality be in the tools, or in gltf-pipeline? The latter would make more sense - but when in doubt, internal (possibly incomplete) implementations of that functionality could first be part of the tools, and be moved to gltf-pipeline when they reach a reasonably complete/mature state
  • Similar to the "stages" for other upgrade operations, the "merging GLBs" functionality may have different levels of completeness. Specifically: in a first version, it may only work for geometry+textures. When a GLB contains animations or multiple scenes, this could cause a warning to be printed, and the data would be left unmodified.

@javagl
Contributor

javagl commented Jun 12, 2023

Just to have a back-link here: There are some points about a possible upgrade from 3D Tiles 1.0 to 1.1 summarized in CesiumGS/3d-tiles#592 (comment) that may have to be taken into account here.

@javagl
Contributor

javagl commented Aug 29, 2023

A short update: Most of the functionality for the generalized upgrade has been implemented in #41 and #52 .

The remaining (open) task is that of upgrading CMPT files. There are some caveats (as described in detail in the previous comment), so this may not be tackled immediately. But the approach of "merging the (resulting) GLBs" could be worth a try, because for the kind of data that usually appears as tile content, this could be implementable with glTF-Transform with reasonable effort.
