This launcher installs and runs Tencent's HY-World-2.0 repository inside Pinokio.
As of April 18, 2026, the upstream repository's public release exposes the WorldMirror 2.0 reconstruction stack and its Gradio demo. The full text/image-to-world generation pipeline is still marked "Coming Soon" upstream, so this launcher intentionally targets the open-sourced reconstruction workflow instead of promising unreleased components.
- Clones `Tencent-Hunyuan/HY-World-2.0` into `app/`
- Creates the Python environment in `app/env`
- Installs the upstream CUDA 12.4 PyTorch stack (`torch==2.4.0`, `torchvision==0.19.0`)
- Installs the rest of the repository dependencies, including the platform-specific `gsplat` wheel
- Installs `flash-attn`
  - Linux: official prebuilt wheel from `Dao-AILab/flash-attention` releases
  - Windows: prebuilt community wheel matched to torch 2.4.0 + cu124 + cp310, used to avoid the known source-build failure path on Windows
- Launches the upstream Gradio demo with `python -m hyworld2.worldrecon.gradio_app --host 127.0.0.1 --port <dynamic-port>`
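The platform-dependent `flash-attn` step above can be sketched as a small selection helper. This is a sketch only: the wheel sources are taken from the list above, but the helper name, return format, and any real wheel filenames are assumptions, not the launcher's actual code.

```python
# Pinned upstream stack (from the install steps above): torch 2.4.0, CUDA 12.4.
TORCH_TAG = "torch2.4.0"
CUDA_TAG = "cu124"


def flash_attn_wheel_source(system: str, py_tag: str = "cp310") -> str:
    """Pick where a prebuilt flash-attn wheel would come from (sketch only;
    the real launcher logic and exact wheel filenames are launcher-specific)."""
    if system == "Linux":
        origin = "Dao-AILab/flash-attention official release"
    elif system == "Windows":
        origin = "community prebuilt wheel"
    else:
        raise RuntimeError(f"no prebuilt flash-attn wheel path for {system}")
    return f"{origin} ({TORCH_TAG}+{CUDA_TAG}, {py_tag})"
```

Either branch avoids compiling flash-attn from source, which is the failure mode the Windows path works around.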
- NVIDIA GPU recommended by upstream
- CUDA 12.4-compatible environment recommended by upstream
- Enough VRAM and RAM for WorldMirror 2.0
- Internet access on first launch for Hugging Face model downloads
On Windows, the original source build failed inside flash-attn's ninja/CUDA compilation step, so the launcher now installs a prebuilt Windows wheel instead of compiling flash-attn locally. If installation still fails, check the Pinokio logs and API output first.
- Open the app in Pinokio.
- Click `Install`.
- Wait for dependency installation to finish.
- Click `Start WorldMirror 2.0`.
- Open the `Open Web UI` tab once Pinokio detects the local URL.
- Upload images or a video, then click `Reconstruct` in the Gradio app.
Generated demo inputs and outputs are created under the cloned upstream repo, typically in paths such as `app/gradio_demo_output/` and `app/inference_output/`.
The stable automation surfaces in the upstream repo are the Python API and CLI pipeline. The Gradio app is primarily for interactive browser use and does not define custom named HTTP inference routes in the repository.
Use Node.js to call the upstream CLI from inside the cloned repo:
```js
import { spawn } from "node:child_process";

const proc = spawn(
  "python",
  [
    "-m",
    "hyworld2.worldrecon.pipeline",
    "--input_path",
    "examples/worldrecon/realistic",
    "--output_path",
    "inference_output",
    "--no_interactive"
  ],
  { cwd: "app" }
);

proc.stdout.on("data", (chunk) => process.stdout.write(chunk));
proc.stderr.on("data", (chunk) => process.stderr.write(chunk));
```

Use the upstream pipeline directly:
```python
from hyworld2.worldrecon.pipeline import WorldMirrorPipeline

pipeline = WorldMirrorPipeline.from_pretrained("tencent/HY-World-2.0")
result_dir = pipeline(
    "examples/worldrecon/realistic",
    output_path="inference_output",
)
print(result_dir)
```

For HTTP automation, the only stable route you should rely on from the launcher is Gradio's runtime config endpoint after `start.js` is already running. Replace `<PINOKIO_PORT>` with the port shown in Pinokio's Open Web UI URL.
```sh
curl http://127.0.0.1:<PINOKIO_PORT>/config
```

For actual reconstruction jobs, prefer the Python API or CLI examples above.
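If you script against this endpoint, one small convenience is deriving the `/config` URL from whatever Open Web UI URL Pinokio displays, so the port substitution happens automatically. This helper is a sketch using only the standard library; Gradio's `/config` route itself is the documented runtime config endpoint.

```python
from urllib.parse import urlsplit, urlunsplit


def gradio_config_url(web_ui_url: str) -> str:
    """Derive the Gradio /config URL from the Open Web UI URL Pinokio shows,
    keeping the scheme and host:port but replacing the path."""
    parts = urlsplit(web_ui_url)
    return urlunsplit((parts.scheme, parts.netloc, "/config", "", ""))
```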