USD from glTF

Library, command-line tool, and import plugin for converting glTF models to USD (https://graphics.pixar.com/usd/docs/index.html) formatted assets for display in AR Quick Look.

Please note that this is not an officially supported Google product.

This is a native C++ library that serves as an alternative to existing scripted solutions. Its main benefits are improved compatibility with iOS and faster conversion (see Compatibility and Performance). It treats USDZ as a transmission format rather than an interchange format. For more information about transmission and interchange file formats, see here.

TLDR: Install it, then convert with: usd_from_gltf <source.gltf> <destination.usdz>

Background

glTF is a transmission format for 3D assets that is well suited to the web and mobile devices because it omits data that is not needed for efficient display. USD is an interchange format that can be used for editing in Digital Content Creation tools (e.g., Maya).

However, iOS Quick Look supports only a subset of the USD specification when displaying USDZ files. This tool converts glTF files to USDZ for display in Quick Look, emulating as much of glTF's functionality as possible within the Quick Look runtime.

The emulation process is lossy. For example, to support double-sided glTF materials, the geometry is duplicated. This allows the converted glTF to display correctly on iOS, but importing back into a DCC application will not recover the same data as the original source file.

This tool specifically addresses the use case of converting a file from glTF->USDZ->QuickLook. Going DCC->glTF optimizes the asset for runtime viewing, and information such as subdivision surfaces may be lost if the converted USDZ is imported back into the DCC tool.

When converting glTF->USD->DCC, Apple's USDPython tools will better preserve the data in the glTF file at the cost of not having the same compatibility with existing versions of iOS Quick Look.

Installation Steps

  • Download and build USD. See the associated README for prerequisites and build steps. The USD installation directory is referred to below as {USD}.

  • Install NASM.

    • (Linux) sudo apt-get install nasm
    • (OSX) brew install nasm (requires Homebrew)
    • (Windows) Use the installer for the latest stable release.
  • Install PIL.

    • pip install Pillow
  • Download usd_from_gltf source to {UFG_SRC}.

  • Install to {UFG_BUILD} (with optional test data):

    python {UFG_SRC}/tools/ufginstall/ufginstall.py {UFG_BUILD} {USD} --testdata
    
  • (Linux/OSX) Set LD_LIBRARY_PATH to the USD and usd_from_gltf lib directories. See ufginstall script output for the paths.

  • (Optional) Add executable to PATH. See ufginstall script output for the exe path.

  • (Optional) Build test data. See ufginstall script output for the ufgtest.py command.

  • (Optional) Set PXR_PLUGINPATH_NAME so the glTF import plugin is available in Usdview. See ufginstall script output for the path.

Using the Command-Line Tool

The command-line tool is called usd_from_gltf and is in the {UFG_BUILD}/bin directory. Run it with (use --help for full documentation):

usd_from_gltf <source.gltf> <destination.usdz>
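If you are driving the command-line tool from a script rather than linking the library, a thin wrapper is enough. This is a minimal sketch (the helper names are our own; only the `usd_from_gltf <source> <destination>` invocation comes from the documentation above):

```python
import subprocess

def build_convert_cmd(src, dst, exe="usd_from_gltf", extra_flags=()):
    """Construct the usd_from_gltf command line for one conversion."""
    return [exe, src, dst, *extra_flags]

def convert(src, dst, exe="usd_from_gltf"):
    """Run the converter; raises CalledProcessError if conversion fails."""
    subprocess.run(build_convert_cmd(src, dst, exe=exe), check=True)
```

For example, `convert("duck.gltf", "duck.usdz")` converts a single model, assuming the executable is on PATH.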

Batch Converting and Testing

The library contains {UFG_SRC}/tools/ufgbatch/ufgbatch.py to facilitate batch conversion. Run it with (use --help for full documentation):

python {UFG_SRC}/tools/ufgbatch/ufgbatch.py my_tests.csv --exe "{UFG_BUILD}/bin/usd_from_gltf"

Each input CSV contains a list of conversion tasks of the form:

name,path/to/input.gltf,dir/to/output/usd[, optional usd_from_gltf flags]
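For example, a two-task CSV following that format might look like this (the names and paths are hypothetical, and the optional flags column is simply omitted):

```csv
duck,models/duck.gltf,out/duck
lantern,models/lantern.gltf,out/lantern
```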

The library also contains {UFG_SRC}/tools/ufgbatch/ufgtest.py to facilitate testing, and preview deployment. Run it with (use --help for full documentation):

python {UFG_SRC}/tools/ufgbatch/ufgtest.py my_tests.csv --exe "{UFG_BUILD}/bin/usd_from_gltf"

For development and testing, ufgtest.py has a couple of additional features:

  • Golden file diffs. After a build completes, the tool compares built files against files in a known-good 'golden' directory. This is useful for determining if changes to the library affect generated data. This can be disabled with --nodiff.
  • Preview web site deployment. This copies changed USDZ files (different from golden) to a directory and generates an index.html to view the listing in a browser, compatible with QuickLook on iOS. This can be disabled with --nodeploy.

Using the Library

The converter can be linked with other applications using the libraries in {UFG_BUILD}/lib/ufg. Call ufg::ConvertGltfToUsd to convert a glTF file to USD.

Using the Import Plugin

The plugin isn't necessary for conversion, but it's useful for previewing glTF files in UsdView.

To use it, set the PXR_PLUGINPATH_NAME environment variable to the directory containing plugInfo.json. See ufginstall script output for the path.

Compatibility

While USD is a general-purpose format, this library focuses on compatibility with AR Quick Look. The AR Quick Look renderer supports only a subset of the glTF 2.0 specification, so there are several limitations. Where reasonable, missing features are emulated in an effort to reproduce glTF files as faithfully as possible on iOS. The emulation can be a lossy process, and the output is not well suited as an interchange format.

Key Features

Emulated Functionality for AR Quick Look

Several rendering features of glTF are not currently supported in AR Quick Look, but they are emulated where reasonable. The emulated features are:

  • Texture channel references. USD supports this, but currently AR Quick Look requires distinct textures for the roughness, metallic, and occlusion channels. The converter splits channels into separate textures and recompresses them as necessary.
  • Texture color scale and offset. These are emulated by baking the scale and offset into the texture. This potentially increases the output size if a texture is referenced multiple times with different settings.
  • Texture UV transforms. These are emulated by baking transforms into vertex UVs. Because they are baked into the model's single UV set, different textures can't use different transforms on the same mesh.
  • Specular workflow. AR Quick Look does not currently support this. The converter will generate metallic+roughness textures as an approximation.
  • Arbitrary asset sizes. AR Quick Look has some limit (empirically, around 200MB) to the decompressed size, and will fail to load models larger than this. The converter works around this by globally resizing textures to fit within the configured limit.
  • Unlit materials. The converter emulates these with a pure emissive material. This mostly works, but there are some differences due to a rim-light factor in the AR Quick Look renderer.
  • sRGB emissive texture. AR Quick Look incorrectly treats the emissive texture as linear rather than sRGB, so the converter works around this by converting to linear.
  • Alpha cutoff. The converter works around this by baking alpha values to 0 or 1 for alpha cutoff materials. This will increase the output size if the texture is referenced by materials with different cutoff state. Also, due to the lack of transparency sorting, alpha cutoff materials may exhibit sorting errors.
  • Double-sided geometry. The converter works around this by duplicating geometry.
  • Normal-map normalization. AR Quick Look does not normalize normal-map normals, causing incorrect lighting for some textures. The converter explicitly renormalizes normal-map textures to work around this.
  • Inverted transforms. AR Quick Look will incorrectly face-cull for inverted geometry, so the converter works around this by baking the reversed polygon winding into the mesh where necessary.
  • Quaternion-based rigid animation. This is not supported in iOS 12. The converter works around this by converting to Euler, which may suffer from Gimbal lock issues. To reduce error, the converter bakes Euler keys at a higher frequency, which can increase animation size.
  • Spherical linear (slerp) interpolation for rotations. All interpolation is linear, so blends between matrix or quaternion keys are incorrect and can induce scale changes. The converter works around this by converting to Euler for rigid animation, and by baking quaternion keys at higher frequency for skinned animation.
  • Per-joint animation channels. Skinning does not make use of independent animation channels, so the converter expands source channels to a grid of (joints * keys) elements. Animations will be significantly larger than their glTF source for complex skeletons.
  • Multiple skeletons. AR Quick Look only supports a single skeleton, so the converter emulates this by merging multiple skeletons into one (at some cost to animation size).
  • Step and cubic animation interpolation modes. The converter emulates these by baking them to linear (again, at a cost of animation size).
  • Vertex quantization/compression. All vertex components are converted to full float precision, and Draco-compressed meshes are decompressed.
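Several of these workarounds are per-pixel transforms. As an illustration of the sRGB-emissive fix (a sketch of the standard IEC 61966-2-1 transfer function, not the converter's actual code):

```python
def srgb_to_linear(value):
    """Convert one 8-bit sRGB channel value to a linear value in [0, 1]."""
    c = value / 255.0
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb_byte(lin):
    """Inverse transform: linear [0, 1] back to an 8-bit channel value."""
    if lin <= 0.0031308:
        c = lin * 12.92
    else:
        c = 1.055 * lin ** (1 / 2.4) - 0.055
    return round(c * 255.0)
```

Applying `srgb_to_linear` to every channel of an emissive texture compensates for a renderer that samples the texture as if it were linear.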

Features Unsupported by AR Quick Look

These features are not supported in AR Quick Look, and cannot be reasonably supported by the converter.

  • Vertex colors.
  • Morph targets and vertex animation.
  • Texture filter modes. All textures are sampled linearly, with mipmapping.
  • Clamp and mirror texture wrap modes. All textures use repeat mode.
  • Cameras.
  • Shadow animation. Shadows are generated from the first frame.
  • Transparent shadows. This is particularly noticeable for transparent geometry near the ground, which will appear very dark due to shadow falloff.
  • Multiple UV sets. The converter works around this by disabling textures that use secondary UV sets, which works best for the most common use case: AO.
  • Multiple animations. The converter just exports a single animation.
  • Multiple scenes. The converter just exports a single scene.
  • Transparency sorting. Overlapping transparent surfaces are likely to look incorrect.
  • Skinned animation for vertex normals. Lighting will look incorrect for skinned models, which is especially noticeable for highly reflective surfaces (it will have a painted-on appearance). The converter attempts to mitigate this by baking normals to the first frame of animation, but it will still look incorrect.

Rendering Differences Between glTF and AR Quick Look

The AR Quick Look renderer does not precisely match the rendering model described by the glTF spec, but it is reasonably close. There are a few exceptions:

  • Occlusion (AO) is applied to the output color rather than ambient, so shadowed areas look a lot darker than in glTF. In some cases, this can cause the model to show up completely black, so the converter works around this by disabling all-black occlusion.
  • Transparent areas appear dark and washed-out, seemingly due to misapplication of premultiplied alpha.
  • Z-fighting with shadow geometry at zero height.

Potential solutions to unsupported issues

  • Cameras, vertex animation, and vertex colors are supported by the USD spec but not currently by AR Quick Look. These should be added in the interest of completeness and future-proofing.
  • Emulate texture mirror wrap mode by mirroring the texture. This is simple, but can increase texture size up to 4x.
  • Emulate texture clamp wrap mode by clipping UVs. This involves relatively complicated clipping, but should not have a significant impact on model size.
  • Emulate vertex colors by baking them into the texture. This is difficult to do generally because it may involve re-uv-mapping the model - something better left to content authors. It can be simplified for certain special cases, though (e.g. untextured models with only vertex colors can use a simple color atlas).
  • Combine multiple UV sets. This is difficult to do because it requires re-uv-mapping the model - something better left to content authors.
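To sketch the mirror-wrap idea above (illustrative only, not part of the converter): tiling an image into a 2x2 mirrored block makes plain repeat-mode sampling reproduce mirror mode, which is why the texture can grow up to 4x.

```python
def mirror_tile(image):
    """Expand a 2D image (list of pixel rows) into a 2x2 mirrored tile.

    Sampling the result with repeat wrapping is equivalent to sampling
    the original with mirror wrapping, at 4x the pixel count.
    """
    mirrored_rows = [row + row[::-1] for row in image]  # mirror horizontally
    return mirrored_rows + mirrored_rows[::-1]          # mirror vertically
```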

Performance

usd_from_gltf is roughly 10-15x faster than current alternatives.

The bulk of the conversion time is spent in image processing and recompression, necessary for emulating otherwise unsupported functionality in AR Quick Look.

Primary Optimizations

  • Implemented in native C++.
  • Supports multi-process model conversion through the ufgbatch.py script.
  • Can generate both USDA and USDZ files in a single pass.

Benchmarks

Each benchmark was run 3 times on a Xeon E5-1650 @ 3.50GHz with 6 cores, 2x hyperthreaded for 12 hardware threads.

Converting 55 glTF sample models to USDZ:

  • usd_from_gltf, 1 process: 20.1, 19.9, 19.9 (average: 20 seconds)
  • usd_from_gltf, 12 processes: 6.9, 6.9, 6.7 (average: 6.8 seconds)

Converting 28 complex skinned and animated glTF models to USDZ:

  • usd_from_gltf, 1 process: 22.6, 22.5, 22.6 (average: 22.6 seconds)
  • usd_from_gltf, 12 processes: 5.0, 5.3, 5.1 (average: 5.2 seconds)