Add SPZ v4 (NGSP / ZSTD multi-stream) read and write support #332
SPZ v4 files use a 32-byte NGSP header with per-attribute ZSTD-compressed
streams instead of the single gzip-wrapped payload of v1-v3. This adds
@bokuweb/zstd-wasm for ZSTD codec support (works in all browsers via
WASM, with no dependency on native CompressionStream("zstd")) and updates
both the reader and writer to handle v4.
Read path:
- getSplatFileType: detect v4 files that start with NGSP magic directly
(not gzip-wrapped) and return SplatFileType.SPZ
- SpzReader: detect v4 in constructor via magic bytes; parse the 32-byte
header and decompress all attribute streams upfront in parseHeader()
- SpzReader: unified read() abstraction in parseSplats() so v3 and v4
share identical decode logic
- SpzReader: extend smallest-three quaternion path to version >= 3
(was === 3), since v4 uses the same encoding as v3
- SpzReader: legacy v1-v3 gzip path is unchanged
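The read-path detection above can be sketched as follows. This is a minimal illustration, not spark's actual API: it assumes the NGSP magic is the ASCII bytes "NGSP" at offset 0, and the function name `isSpzV4` is hypothetical.

```typescript
// Detect a v4 SPZ file by its leading magic bytes.
// Assumption: the NGSP magic is ASCII "NGSP" at offset 0 (the PR only
// says "files that start with NGSP magic directly, not gzip-wrapped").
const NGSP_MAGIC = new Uint8Array([0x4e, 0x47, 0x53, 0x50]); // "NGSP"

function isSpzV4(bytes: Uint8Array): boolean {
  // v1-v3 files are gzip-wrapped (magic 0x1f 0x8b); v4 starts with NGSP.
  if (bytes.length < 4) return false;
  return (
    bytes[0] === NGSP_MAGIC[0] &&
    bytes[1] === NGSP_MAGIC[1] &&
    bytes[2] === NGSP_MAGIC[2] &&
    bytes[3] === NGSP_MAGIC[3]
  );
}
```

In the real reader, this check happens in the `SpzReader` constructor (and in `getSplatFileType`), with legacy files falling through to the gzip path.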
Write path:
- SPZ_VERSION bumped from 3 to 4
- SpzWriter rewritten to keep per-attribute Uint8Array buffers and emit
the v4 file layout: [32-byte header][TOC][concatenated ZSTD streams]
- Setter API (setCenter, setAlpha, setRgb, setScale, setQuat, setSh) is
unchanged, so transcodeSpz and other callers don't need updates
- finalize() ZSTD-compresses each attribute stream independently and
assembles the output, mirroring the C++ saveSpz() reference encoder
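The layout assembly in finalize() can be sketched as below. Only the TOC entry shape ([u64 compressedSize LE][u64 uncompressedSize LE], per the PR text) and the overall [header][TOC][streams] order come from this PR; the function name `assembleV4` and the contents of the 32-byte header are placeholders, and ZSTD compression of each stream is assumed to have happened already.

```typescript
// Assemble a v4 file: [32-byte header][TOC][concatenated ZSTD streams].
// Each TOC entry is [u64 compressedSize LE][u64 uncompressedSize LE].
function assembleV4(
  header: Uint8Array, // 32-byte NGSP header, assumed already filled in
  streams: { compressed: Uint8Array; uncompressedSize: number }[],
): Uint8Array {
  const toc = new Uint8Array(streams.length * 16);
  const view = new DataView(toc.buffer);
  for (let i = 0; i < streams.length; i++) {
    view.setBigUint64(i * 16, BigInt(streams[i].compressed.length), true);
    view.setBigUint64(i * 16 + 8, BigInt(streams[i].uncompressedSize), true);
  }
  const total =
    header.length +
    toc.length +
    streams.reduce((n, s) => n + s.compressed.length, 0);
  const out = new Uint8Array(total);
  out.set(header, 0);
  out.set(toc, header.length);
  let offset = header.length + toc.length;
  for (const s of streams) {
    out.set(s.compressed, offset);
    offset += s.compressed.length;
  }
  return out;
}
```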
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
The viewer's load path goes through the spark-worker-rs WASM module, which
uses spark-lib's Rust SpzDecoder. Without this change, v4 files fail with
"Invalid gzip header" even though the TS SpzReader handles them, because
the worker never invokes the TS path. This adds a parallel v4 path to
SpzDecoder that mirrors the C++ reference implementation:
- New SpzFormat enum (Unknown / Gzip / Ngsp). The decoder detects the
format from the first 4 bytes of input: NGSP magic = v4, gzip magic = legacy.
- For v4: accumulate raw bytes, parse the 32-byte NgspFileHeader, walk the
TOC (numStreams × [u64 compressedSize][u64 uncompressedSize]),
ZSTD-decompress each attribute stream with ruzstd, concatenate the
decompressed bytes in stream order, then run the existing per-stage state
machine (Centers/Alphas/Rgb/Scales/Quats/Sh).
- For v1-v3: the gzip path is unchanged.
- The smallest-three quaternion branch now triggers on version >= 3, since
v4 uses the same encoding as v3.
- decoder.rs: MultiDecoder routes files starting with NGSP magic directly
to SpzDecoder (in addition to the existing gzip-wrapped detection).
Dependencies:
- ruzstd 0.7 (pure-Rust ZSTD decoder; works in WASM with no C bindings)
The Rust SpzEncoder is intentionally untouched — only the build-lod CLI
uses it. SPZ writing from spark.js goes through the TypeScript SpzWriter,
which already produces v4 files.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
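The TOC walk described above (numStreams × [u64 compressedSize][u64 uncompressedSize], with the streams concatenated immediately after the TOC) can be sketched as follows. The Rust decoder does this with ruzstd; here it is shown in TypeScript for illustration. `decompressZstd` is an assumed helper, the 32-byte header parse is elided, and the assumption that the streams begin directly after the TOC is mine, not stated explicitly in the PR.

```typescript
// Walk the v4 TOC and decompress each attribute stream in order.
// Assumes: TOC entries are [u64 compressedSize LE][u64 uncompressedSize LE]
// and the compressed streams are laid out back-to-back right after the TOC.
function decodeStreams(
  file: Uint8Array,
  numStreams: number,
  tocByteOffset: number, // from the 32-byte header (parse elided here)
  decompressZstd: (data: Uint8Array) => Uint8Array, // assumed helper
): Uint8Array[] {
  const view = new DataView(file.buffer, file.byteOffset, file.byteLength);
  let streamOffset = tocByteOffset + numStreams * 16; // streams follow TOC
  const out: Uint8Array[] = [];
  for (let i = 0; i < numStreams; i++) {
    const entry = tocByteOffset + i * 16;
    const compressedSize = Number(view.getBigUint64(entry, true));
    const uncompressedSize = Number(view.getBigUint64(entry + 8, true));
    const decompressed = decompressZstd(
      file.subarray(streamOffset, streamOffset + compressedSize),
    );
    if (decompressed.length !== uncompressedSize) {
      throw new Error(`stream ${i}: decompressed size mismatch`);
    }
    out.push(decompressed);
    streamOffset += compressedSize;
  }
  return out;
}
```

The decompressed buffers would then be concatenated in stream order and fed to the existing per-stage state machine.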
Update: the Rust WASM decoder now also supports SPZ v4. After initial
testing, we found that the viewer's load path goes through the Rust WASM
decoder rather than the TypeScript reader, so a second commit adds full
v4 support to the Rust decoder (new dependency: ruzstd 0.7).
Summary

Adds support for SPZ v4 to spark — both reading and writing — bringing
parity with the latest nianticlabs/spz reference encoder. SPZ v4 replaces
the single gzip-wrapped payload (v1–v3) with a 32-byte NGSP header
followed by per-attribute ZSTD-compressed streams. The wire format is
identical to upstream's saveSpz(), so files round-trip cleanly between
the C++ encoder and spark.

Changes

Dependencies
- @bokuweb/zstd-wasm (~50 KB WASM blob, lazy-loaded). Used for both ZSTD
compression and decompression. Native CompressionStream("zstd") was
considered but dropped because it isn't yet supported in Firefox or Safari.

src/SplatLoader.ts
- getSplatFileType recognizes the NGSP magic at offset 0 (v4) in addition
to the existing gzip-wrapped detection (v1–v3).

src/spz.ts — read path
- SpzReader detects v4 in the constructor by inspecting the first 4 bytes;
legacy files continue to flow through GunzipReader unchanged.
- parseHeader() parses the 32-byte NGSP header, awaits ZSTD WASM init, and
decompresses every attribute stream up front into v4Streams: Uint8Array[].
- parseSplats() uses a small read() abstraction so v3 (gzip stream) and v4
(pre-decompressed buffers) share the same decode logic.
- The smallest-three quaternion path now triggers on version >= 3 (was
=== 3) since v4 uses the same encoding as v3.

src/spz.ts — write path
- SPZ_VERSION bumped to 4.
- SpzWriter now stores each attribute in its own Uint8Array (positions,
alphas, colors, scales, rotations, sh) and assembles
[32-byte header][TOC][ZSTD streams] in finalize().
- The setter API (setCenter / setAlpha / setRgb / setScale / setQuat /
setSh) is unchanged, so transcodeSpz and other callers don't need updates.
- TOC entries are [u64 compressedSize LE][u64 uncompressedSize LE],
matching the reference encoder.

Backward compatibility
- Reading legacy v1–v3 files is unchanged; they continue to flow through
the gzip path.
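The smallest-three quaternion path mentioned above can be illustrated with the standard decode step: the encoder drops the largest-magnitude component and stores the other three, and the decoder reconstructs the dropped component from the unit-norm constraint. This sketch shows only that reconstruction; spark's actual bit packing and sign conventions are not described in this PR, so the signature and range assumptions here are illustrative.

```typescript
// Smallest-three quaternion decode (standard technique, not spark's exact
// layout). The three stored components lie in [-1/sqrt(2), 1/sqrt(2)];
// the dropped (largest) component is recovered as sqrt(1 - a^2 - b^2 - c^2),
// taken non-negative by the usual sign convention.
function decodeSmallestThree(
  largestIndex: number, // index (0..3) of the component that was dropped
  a: number,
  b: number,
  c: number,
): [number, number, number, number] {
  const missing = Math.sqrt(Math.max(0, 1 - a * a - b * b - c * c));
  const stored = [a, b, c];
  const q: number[] = [];
  for (let i = 0; i < 4; i++) {
    q.push(i === largestIndex ? missing : stored.shift()!);
  }
  return [q[0], q[1], q[2], q[3]];
}
```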
Known gaps (out of scope)
- Extensions (FlagHasExtensions = 0x2): not read or written. The reader
correctly skips over them via the tocByteOffset field, so files containing
extensions still load.
- SH degree 4 (SH_MAX_DEGREE = 4): pre-existing spark limitation;
SH_DEGREE_TO_VECS only goes up to 3. Files with shDegree == 4 already
failed to load before this PR; behavior is unchanged.