A custom Unity plugin for using Microsoft's TRELLIS. Use it to run a TRELLIS server (Docker recommended) and generate 3D assets (`.glb`) from text or images.
## Quickstart

The shortest path to try the project locally:
- Build the Docker image from the repository root:

  ```bash
  docker build -f trellis.dockerfile -t trellis:latest .
  ```

- Run the server (example, interactive):

  ```bash
  docker run --rm -it --gpus all -p 8000:8000 -v "${PWD}/outputs:/app/outputs" trellis:latest
  ```

  Notes for Windows: prefer running the container from WSL2 or Git Bash. PowerShell path mounts can be tricky on Docker Desktop; if you run into mount errors, use WSL2 or provide absolute paths.
- Smoke test from the host:

  ```bash
  python trellis_test.py --server http://127.0.0.1:8000
  ```

  Add `--images img1.png img2.png` to exercise the multi-image endpoint. Successful runs return JSON containing GLB download URLs under `/outputs/...`.
## Requirements

- NVIDIA GPU with >= 12 GB VRAM for TRELLIS inference and GLB export (recommended). CUDA driver compatible with CUDA 11.8.
- Hugging Face access (see Authentication below) for model checkpoints.
- Two server setup options: Docker (recommended) or Native (local Python env).

Docker option:

- Docker Engine 24+ (Linux/WSL2 recommended).
- NVIDIA Container Toolkit (enables `--gpus all`). See https://docs.nvidia.com/datacenter/cloud-native/.

Native option:

- Python 3.10+.
- CUDA Toolkit 11.8 with `nvcc` and matching drivers.
- GCC/G++ 11 (used to compile native extensions such as `diff_gaussian_rasterization`).
- PyTorch 2.4.0 + cu118 wheels and the ability to build PyTorch C++/CUDA extensions.
If you plan to run natively, follow the installation/conda environment instructions in the upstream TRELLIS repo (this project relies on the same set of heavy dependencies).
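Before building the heavy extensions, a quick sanity check of the Python/CUDA stack can save time. A minimal sketch, assuming PyTorch is already installed; the expected values mirror the Requirements list above:

```python
# Quick environment sanity check for a native setup.
# Expected values follow the Requirements list: PyTorch 2.4.0 + cu118,
# >= 12 GB VRAM on the inference GPU.
import torch

print("PyTorch:", torch.__version__)       # expect 2.4.0+cu118
print("CUDA build:", torch.version.cuda)   # expect 11.8
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / 1024**3
    print(f"GPU: {props.name}, VRAM: {vram_gb:.1f} GB")  # want >= 12 GB
```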
## Authentication

TRELLIS downloads model checkpoints from Hugging Face. Check the Hugging Face quick start guide and run `hf auth login` (or `huggingface-cli login`) to authenticate.
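If you prefer authenticating from code (e.g. inside the container or CI) instead of the CLI, a minimal sketch using `huggingface_hub`; exporting `HUGGINGFACE_HUB_TOKEN` yourself beforehand is assumed:

```python
# Programmatic alternative to `hf auth login`.
# Assumes you have exported HUGGINGFACE_HUB_TOKEN with your own token.
import os
from huggingface_hub import login

login(token=os.environ["HUGGINGFACE_HUB_TOKEN"])  # stores credentials for later downloads
```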
## Docker

Build and run examples are shown in Quickstart. The `trellis.dockerfile` installs PyTorch 2.4.0, xFormers, Kaolin, spconv, NVlabs nvdiffrast, diff_gaussian_rasterization, and other TRELLIS dependencies. The first build is slow; subsequent builds use the cache.

PowerShell mount tip: if `${PWD}` doesn't map correctly for Docker Desktop, run Docker from WSL2, where `$(pwd)` works as in the examples above.
To run detached and give the container a name:

```bash
docker run -d --name trellis-server --gpus all -p 8000:8000 -v "${PWD}/outputs:/app/outputs" trellis:latest
docker logs -f trellis-server
```

Stop / restart:

```bash
docker stop trellis-server
docker start trellis-server
```

The first API request will trigger model loading to the GPU (watch the logs for `[INFO] Loading Trellis …`).
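Since the container can take a while to start serving, a client may want to wait for the port before submitting jobs. A minimal readiness-wait sketch (the 60 × 2 s budget is an assumption); any HTTP response, even an error status, proves the server process is up:

```python
# Wait until the TRELLIS server is reachable on the mapped port.
# An HTTPError still means the server answered; a URLError means the
# container is still starting or the port is not mapped.
import time
import urllib.error
import urllib.request

SERVER = "http://127.0.0.1:8000"

for _ in range(60):
    try:
        urllib.request.urlopen(SERVER, timeout=2)
        break                 # got a response body: server is up
    except urllib.error.HTTPError:
        break                 # got an HTTP status: server is up
    except urllib.error.URLError:
        time.sleep(2)         # not accepting connections yet
else:
    raise SystemExit("server did not come up in time")
print("server reachable")
```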
## API

- `POST /submit/text` → `{"prompt": "..."}` → `{ "job_id": "..." }`
- `POST /submit/images` → multipart field `files` (one or more PNGs) → `{ "job_id": "..." }`
- `GET /status/{job_id}` → job payload (`status`, `type`, optional `error`, optional `result`).
- `GET /result/{job_id}` (when `status` == `done`) →
  - Text job: `{ "glb": "outputs/<job>/<slug>.glb" }`
  - Multi-image: `{ "glbs": ["outputs/<job>/<slug0>.glb", ...] }`
- Finished assets are served as static files under `/outputs/...`.
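Any HTTP client can drive this flow end to end. A minimal sketch using `requests` (the prompt, the 2 s polling interval, and the failure handling are assumptions; only the `done` status is documented above):

```python
# Minimal end-to-end client for the endpoints listed above.
# Assumes `requests` is installed and the server runs on the default port.
import time
import requests

SERVER = "http://127.0.0.1:8000"

# Text-to-3D: submit a prompt, receive a job id.
job_id = requests.post(f"{SERVER}/submit/text",
                       json={"prompt": "a small wooden chair"}).json()["job_id"]

# For multi-image jobs, post PNGs as the multipart field `files` instead:
# files = [("files", open(p, "rb")) for p in ["img1.png", "img2.png"]]
# job_id = requests.post(f"{SERVER}/submit/images", files=files).json()["job_id"]

# Poll until the job finishes.
while True:
    status = requests.get(f"{SERVER}/status/{job_id}").json()
    if status["status"] == "done":
        break
    if status.get("error"):
        raise RuntimeError(status["error"])
    time.sleep(2)

# Fetch the result and download the GLB via the static /outputs/ route.
glb_path = requests.get(f"{SERVER}/result/{job_id}").json()["glb"]
filename = glb_path.split("/")[-1]
with open(filename, "wb") as f:
    f.write(requests.get(f"{SERVER}/{glb_path}").content)
print("saved", filename)
```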
## Unity usage

- Import `TRELLIS4Unity.unitypackage` into your Unity project (Assets → Import Package → Custom Package).
- Open the Trellis Generator Window (menu path provided in the package).
- Enter the server URL (default `http://127.0.0.1:8000`), set any generation parameters, and click Generate.
- Generated assets are saved under `Assets/TrellisResults` as `.glb` files by default (can be customized in server or client code).
If you run Docker on a remote server, the last step before generation is connecting your local Unity client to that machine, e.g. via an SSH tunnel:

```bash
ssh remote-machine-id -L 8000:localhost:8000
```

The server on the remote machine listens on localhost port 8000, and the local client expects the same; this command forwards local port 8000 to the remote one, so the Unity window and `trellis_test.py` can keep using `http://127.0.0.1:8000`.
## Troubleshooting

- Container fails to start / GPU not visible: ensure NVIDIA drivers + NVIDIA Container Toolkit are installed and Docker Desktop is configured to use WSL2.
- Model download failures: verify `HUGGINGFACE_HUB_TOKEN` or `huggingface-cli login` and network connectivity.
- OOM/CUDA errors: try a larger GPU, reduce batch sizes, or run on a machine with more VRAM.
- Native extension compile errors: check `nvcc`, CUDA include paths, GCC version, and Python headers.
- Logs: use `docker logs -f <container>` or watch the console where `uvicorn` runs.
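For the native-extension bullet, a small diagnostic sketch that surfaces the usual culprits; it only mirrors the checks named above and makes no assumptions beyond a standard CPython install:

```python
# Surface common causes of native-extension build failures:
# missing nvcc, unset CUDA_HOME, wrong GCC major version, missing Python headers.
import os
import shutil
import subprocess
import sysconfig

print("nvcc:", shutil.which("nvcc") or "NOT FOUND on PATH")
print("CUDA_HOME:", os.environ.get("CUDA_HOME", "not set"))

try:
    out = subprocess.run(["g++", "--version"], capture_output=True, text=True)
    print("g++:", out.stdout.splitlines()[0])  # want GCC 11.x per Requirements
except FileNotFoundError:
    print("g++: NOT FOUND")

# Python.h must exist here for PyTorch C++/CUDA extensions to build.
include_dir = sysconfig.get_paths()["include"]
has_header = os.path.exists(os.path.join(include_dir, "Python.h"))
print("Python headers:", include_dir,
      "(Python.h present)" if has_header else "(Python.h MISSING)")
```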
## Notes

- Jobs run in per-request threads; the in-memory job registry is cleared on process restart. For production, consider a persistent job queue/store.
- This repository currently implements only the multi-image and text-to-3D APIs.
## License

MIT licensed. Please open issues or PRs for bugs, improvements, or questions.

