TRELLIS-UnityPlugin

A custom Unity plugin for Microsoft's TRELLIS. Use it to run a TRELLIS server (Docker recommended) and generate 3D assets (.glb) from text or images.


Quickstart (Docker)

The shortest path to try the project locally:

  1. Build the Docker image from the repository root:

     docker build -f trellis.dockerfile -t trellis:latest .

  2. Run the server (example, interactive):

     docker run --rm -it --gpus all -p 8000:8000 -v "${PWD}/outputs:/app/outputs" trellis:latest

     Note for Windows: prefer running the container from WSL2 or Git Bash. PowerShell path mounts can be tricky on Docker Desktop; if you run into mount errors, use WSL2 or provide absolute paths.

  3. Smoke test from the host:

     python trellis_test.py --server http://127.0.0.1:8000

Add --images img1.png img2.png to exercise the multi-image endpoint. Successful runs return JSON containing GLB download URLs under /outputs/....
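As a sketch of what to do with those URLs, the result paths can be joined to the server base and fetched with the Python standard library. The helper names below are hypothetical (not part of this repo); only the path shape `outputs/<job>/<slug>.glb` comes from this README.

```python
from urllib.request import urlretrieve


def glb_url(server: str, glb_path: str) -> str:
    """Join the server base URL with a result path like 'outputs/<job>/<slug>.glb'."""
    return server.rstrip("/") + "/" + glb_path.lstrip("/")


def download_glb(server: str, glb_path: str, dest: str) -> None:
    """Fetch a finished GLB to a local file (requires a running server)."""
    urlretrieve(glb_url(server, glb_path), dest)


if __name__ == "__main__":
    # Example (network call commented out; needs a live server):
    # download_glb("http://127.0.0.1:8000", "outputs/job123/chair.glb", "chair.glb")
    print(glb_url("http://127.0.0.1:8000", "outputs/job123/chair.glb"))
```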

Requirements

  • NVIDIA GPU with >= 12GB VRAM (recommended) for TRELLIS inference and GLB export, with a driver compatible with CUDA 11.8.
  • Hugging Face access (see Authentication below) for model checkpoints.
  • Two server setup options: Docker (recommended) or Native (local Python env).

Native (Developer) notes

  • Python 3.10+.
  • CUDA Toolkit 11.8 with nvcc and matching drivers.
  • GCC/G++ 11 (used to compile native extensions such as diff_gaussian_rasterization).
  • PyTorch 2.4.0 + cu118 wheels and the ability to build PyTorch C++/CUDA extensions.

If you plan to run natively, follow the installation/conda environment in the upstream TRELLIS repo (this project relies on the same set of heavy dependencies).

Hugging Face authentication / checkpoints

TRELLIS downloads model checkpoints from Hugging Face. See the Hugging Face quick start guide and authenticate with hf auth login (or huggingface-cli login) before the first run.

Docker Workflow (expanded)

Build and run examples are shown in Quickstart. The trellis.dockerfile installs PyTorch 2.4.0, xFormers, Kaolin, spconv, NVlabs nvdiffrast, diff_gaussian_rasterization and other TRELLIS dependencies. The first build is slow; subsequent builds use cache.

PowerShell mount tip: if ${PWD} doesn't map correctly in Docker Desktop, run Docker from WSL2, where the mount examples above work as written.

To run detached and give a name:

docker run -d --name trellis-server --gpus all -p 8000:8000 -v "${PWD}/outputs:/app/outputs" trellis:latest
docker logs -f trellis-server

Stop / restart:

docker stop trellis-server
docker start trellis-server

The first API request will trigger model loading to GPU (watch logs for [INFO] Loading Trellis …).

API

  • POST /submit/text → JSON body { "prompt": "..." } → { "job_id": "..." }
  • POST /submit/images → multipart field files (one or more PNGs) → { "job_id": "..." }
  • GET /status/{job_id} → job payload (status, type, optional error, optional result).
  • GET /result/{job_id} (when status == done) →
    • Text job: { "glb": "outputs/<job>/<slug>.glb" }
    • Multi-image: { "glbs": ["outputs/<job>/<slug0>.glb", ...] }
  • Finished assets are served as static files under /outputs/....

Jobs run in per-request threads, and the in-memory job registry is cleared when the process restarts. Currently only the text and multi-image APIs are implemented.
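The submit/status/result flow above can be sketched as a minimal text-to-3D client. This is a sketch against the endpoints and payload fields listed in this README (`job_id`, `status`, `error`); the helper names and the polling interval are assumptions.

```python
import json
import time
from urllib.request import Request, urlopen

SERVER = "http://127.0.0.1:8000"  # default server URL from this README


def submit_text(prompt: str) -> str:
    """POST a prompt to /submit/text and return the job id."""
    req = Request(
        f"{SERVER}/submit/text",
        data=json.dumps({"prompt": prompt}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.load(resp)["job_id"]


def is_done(status_payload: dict) -> bool:
    """True once /status/{job_id} reports the job finished."""
    return status_payload.get("status") == "done"


def wait_for_result(job_id: str, poll_secs: float = 2.0) -> dict:
    """Poll /status/{job_id}, then fetch /result/{job_id} when done."""
    while True:
        with urlopen(f"{SERVER}/status/{job_id}") as resp:
            payload = json.load(resp)
        if payload.get("status") == "error":
            raise RuntimeError(payload.get("error", "job failed"))
        if is_done(payload):
            break
        time.sleep(poll_secs)
    with urlopen(f"{SERVER}/result/{job_id}") as resp:
        return json.load(resp)  # e.g. {"glb": "outputs/<job>/<slug>.glb"}


if __name__ == "__main__":
    # Requires a running server:
    # job = submit_text("a wooden chair")
    # print(wait_for_result(job))
    pass
```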

Unity Side


  1. Import TRELLIS4Unity.unitypackage into your Unity project (Assets → Import Package → Custom Package).
  2. Open the Trellis Generator Window (menu path provided in the package).
  3. Enter server URL (default http://127.0.0.1:8000), set any generation parameters, and click Generate.
  4. Generated assets are saved under Assets/TrellisResults as .glb files by default (can be customized in server or client code).

Server <-> Client Connection

If you run Docker on a remote server, the last step before generating is connecting your local Unity client to that machine:

ssh -L 8000:localhost:8000 remote-machine-id

The server listens on localhost port 8000 on the remote machine, and the Unity client expects the same port locally; this command forwards local port 8000 to the remote one, so the default server URL works unchanged.

Troubleshooting & tips

  • Container fails to start / GPU not visible: ensure NVIDIA drivers + NVIDIA Container Toolkit are installed and Docker Desktop is configured to use WSL2.
  • Model download failures: verify HUGGINGFACE_HUB_TOKEN or huggingface-cli login and network connectivity.
  • OOM/CUDA errors: reduce batch sizes or run on a GPU with more VRAM.
  • Native extension compile errors: check nvcc, CUDA include paths, GCC version, and Python headers.
  • Logs: use docker logs -f <container> or watch the console where uvicorn runs.

Limitations

  • Jobs run in per-request threads; the in-memory job registry is cleared on process restart. For production, consider a persistent job queue/store.
  • This repository currently implements the multi-image and text->3D APIs only.

License & Contributing

MIT Licensed. Please open issues or PRs for bugs, improvements, or questions.
