Lightweight glue scripts that support the workflow described in my “Sora 2 For LookDev” blog post. The goal is to turn Sora character plates into playable `.glb` meshes with Stable Fast 3D and preview the results instantly in a browser via Google Model Viewer.
- `sf3d_run.py` – convenience wrapper around `stable-fast-3d/run.py`. Handles presets, output naming, metadata capture, and mesh stats.
- `serve_sf3d.py` – tiny HTTP server that renders an auto-generated gallery of exported meshes using `<model-viewer>`.
- `output/` – created on demand. Each Stable Fast 3D run lands in a timestamped folder with `mesh.glb`, `input.png`, and `metadata.json` when available.
- Python 3.10+ on macOS or Linux.
- uv (used to invoke Stable Fast 3D exactly as the upstream repo documents).
- Stable Fast 3D checkout inside this directory: `git clone https://github.com/Stability-AI/stable-fast-3d.git`.
- Follow the official setup guide for your hardware:
  - Project docs: https://stable-fast-3d.github.io
  - Upstream README (MPS for Apple Silicon, CUDA, CPU fallbacks): https://github.com/Stability-AI/stable-fast-3d#readme
  - Direct link to the macOS MPS notes: https://github.com/Stability-AI/stable-fast-3d?tab=readme-ov-file#support-for-mps-for-mac-silicon-experimental

Once the upstream repo is installed, `uv run python run.py --help` should work when executed inside `stable-fast-3d/`.
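If you want to script that check, here is a minimal sketch (my own illustration, not part of the shipped scripts) that assumes the layout above, with `stable-fast-3d/` cloned next to the helpers:

```python
# check_sf3d.py - illustrative sanity check, not part of the shipped scripts
import shutil
import subprocess
from pathlib import Path

SF3D_DIR = Path(__file__).parent / "stable-fast-3d"

def check_setup() -> None:
    if not SF3D_DIR.is_dir():
        raise SystemExit("stable-fast-3d repository not found - clone it next to these scripts")
    if shutil.which("uv") is None:
        raise SystemExit("uv not found on PATH - see https://docs.astral.sh/uv/getting-started/installation/")
    # Mirrors the manual check: `uv run python run.py --help` inside stable-fast-3d/
    subprocess.run(["uv", "run", "python", "run.py", "--help"], cwd=SF3D_DIR, check=True)

if __name__ == "__main__":
    check_setup()
    print("Stable Fast 3D looks ready.")
```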
1. Drop your Sora-generated character plate (PNG/JPG) anywhere relative to this repo.

2. Run the wrapper to build meshes. Example using the high-res triangle preset:

   ```
   python sf3d_run.py ./renders/innkeeper_plate.png --preset triangle --texture-resolution 2048
   ```
   What happens:

   - Ensures `output/<timestamp>/` exists.
   - Calls `uv run python run.py ...` with your flags.
   - Records `metadata.json` with the command, inputs, mesh stats (faces/verts via `trimesh`), and environment flags such as `PYTORCH_ENABLE_MPS_FALLBACK=1` (see the sketch after the flags list below).
   Useful flags:

   - `--preset {default,fast,highres,triangle,quad,quality}` to start from curated settings (explicit flags override).
   - `--texture-resolution`, `--remesh-option {none,triangle,quad}`, `--target-vertex-count`, `--batch-size` for fine tuning.
   - `--device` to force `cuda`, `mps`, or `cpu`.
   - `--use-cpu` sets `SF3D_USE_CPU=1` (skips GPU entirely).
   - `--no-mps-fallback` avoids adding `PYTORCH_ENABLE_MPS_FALLBACK=1` for Apple Silicon.
   - `--dry-run` prints the composed command without launching Stable Fast 3D.
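   For orientation, here is a rough sketch of how the mesh stats could be gathered with `trimesh` and written to `metadata.json`. The schema is assumed for illustration (only `timestamp_label` and the fields listed above are documented); check the real `metadata.json` that `sf3d_run.py` produces before relying on exact keys:

   ```python
   # metadata_sketch.py - illustrative only; the real sf3d_run.py schema may differ
   import json
   from pathlib import Path

   import trimesh  # available once the upstream optional dependencies are installed

   def write_metadata(run_dir: Path, command: list[str], env_flags: dict[str, str]) -> None:
       mesh = trimesh.load(run_dir / "mesh.glb", force="mesh")
       metadata = {
           "timestamp_label": run_dir.name,   # serve_sf3d.py uses this to name gallery cards
           "command": command,                # the composed `uv run python run.py ...` call
           "input_image": "input.png",
           "environment": env_flags,          # e.g. {"PYTORCH_ENABLE_MPS_FALLBACK": "1"}
           "mesh_stats": {
               "vertices": int(mesh.vertices.shape[0]),
               "faces": int(mesh.faces.shape[0]),
           },
       }
       (run_dir / "metadata.json").write_text(json.dumps(metadata, indent=2))
   ```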
3. Preview the meshes locally:

   ```
   python serve_sf3d.py --port 8000
   ```

   Then open http://127.0.0.1:8000. Every `mesh.glb` beneath `output/` is rendered in a grid. If `metadata.json` references `input.png`, the gallery exposes an “input image” button that opens it in a modal.
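If you want to extend or customise the gallery, the core idea is simple enough to sketch: walk `output/`, keep only folders that contain `mesh.glb`, and emit one `<model-viewer>` card per mesh. The following is a simplified stand-in for `serve_sf3d.py`, not its actual code; the CDN URL and styling are my assumptions:

```python
# gallery_sketch.py - simplified stand-in for serve_sf3d.py, not the shipped code
import http.server
from pathlib import Path

OUTPUT_DIR = Path("output")
MODEL_VIEWER_CDN = "https://ajax.googleapis.com/ajax/libs/model-viewer/3.5.0/model-viewer.min.js"

def gallery_html() -> str:
    """Build one <model-viewer> card per output/<run>/mesh.glb."""
    cards = []
    for mesh in sorted(OUTPUT_DIR.glob("*/mesh.glb")):
        label = mesh.parent.name  # the real server prefers timestamp_label from metadata.json
        cards.append(
            f'<figure><model-viewer src="/{mesh.as_posix()}" camera-controls auto-rotate '
            f'style="width:320px;height:320px"></model-viewer>'
            f"<figcaption>{label}</figcaption></figure>"
        )
    return (
        f'<!doctype html><html><head><script type="module" src="{MODEL_VIEWER_CDN}"></script>'
        "</head><body>" + "\n".join(cards) + "</body></html>"
    )

class GalleryHandler(http.server.SimpleHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/":
            body = gallery_html().encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            super().do_GET()  # serve mesh.glb, input.png, etc. straight from disk

if __name__ == "__main__":
    http.server.ThreadingHTTPServer(("127.0.0.1", 8000), GalleryHandler).serve_forever()
```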
- The scripts assume Stable Fast 3D outputs include `mesh.glb` (default upstream behaviour) and, when present, `input.png` created by the pipeline or copied manually.
- Mesh stats rely on `trimesh` being available in the Stable Fast 3D environment. Installing the optional dependencies listed in the upstream README covers this.
- Timestamped folders (e.g. `output/2025_03_02_7_45_pm/`) map cleanly onto the iterations described in the blog post—ideal for comparing prompt tweaks.
- Labeling: `sf3d_run.py` writes `timestamp_label` into `metadata.json`; `serve_sf3d.py` uses that label to name cards in the gallery.
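If you script against the run folders yourself, the label lookup is easy to reproduce. A minimal sketch (the `timestamp_label` field is documented above; the fallback behaviour is my assumption, not necessarily what `serve_sf3d.py` does):

```python
# label_sketch.py - illustrative helper, not part of serve_sf3d.py
import json
from pathlib import Path

def card_label(run_dir: Path) -> str:
    """Prefer timestamp_label from metadata.json; fall back to the folder name."""
    meta_path = run_dir / "metadata.json"
    if meta_path.exists():
        try:
            return json.loads(meta_path.read_text()).get("timestamp_label", run_dir.name)
        except json.JSONDecodeError:
            pass
    return run_dir.name
```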
- `stable-fast-3d repository not found` – ensure the upstream repo is cloned to `./stable-fast-3d/` relative to these helper scripts.
- `uv not found on PATH` – install uv and restart your shell. Installation instructions: https://docs.astral.sh/uv/getting-started/installation/.
- Meshes missing in the gallery – confirm that runs output `mesh.glb` under `output/` (the server skips directories without that file).
- Apple Silicon hangs – revisit the official MPS guidance linked above and verify `PYTORCH_ENABLE_MPS_FALLBACK=1` is present (the wrapper sets it unless you disable it).
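A quick way to spot runs the gallery will skip (a throwaway check, assuming the default `output/` layout):

```python
# find_incomplete_runs.py - throwaway diagnostic, not part of the helper scripts
from pathlib import Path

output_dir = Path("output")
if not output_dir.is_dir():
    raise SystemExit("no output/ directory yet - run sf3d_run.py first")
for run_dir in sorted(p for p in output_dir.iterdir() if p.is_dir()):
    if not (run_dir / "mesh.glb").exists():
        print(f"{run_dir} has no mesh.glb - the gallery will skip it")
```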
- Blog context: “Sora 2 For LookDev” (draft excerpt in the repository request). The article covers prompt engineering, happy accidents (character sheet plates), and how these scripts streamline pre-production lookdev.
- Stable Fast 3D demo on Hugging Face: https://huggingface.co/spaces/stabilityai/stable-fast-3d
- Google Model Viewer docs (for extending the gallery): https://modelviewer.dev/
If you build on this pipeline—automated rigging, engine exports, or batch comparisons—send a note or PR!