IAnMove/HYworld2

HY-World 2.0 for Pinokio

This launcher installs and runs Tencent's HY-World-2.0 repository inside Pinokio.

As of April 18, 2026, the upstream repository's public release exposes the WorldMirror 2.0 reconstruction stack and its Gradio demo. The full text/image-to-world generation pipeline is still marked "Coming Soon" upstream, so this launcher intentionally targets the open-sourced reconstruction workflow instead of promising unreleased components.

What This Launcher Does

  • Clones Tencent-Hunyuan/HY-World-2.0 into app/
  • Creates the Python environment in app/env
  • Installs the upstream CUDA 12.4 PyTorch stack (torch==2.4.0, torchvision==0.19.0)
  • Installs the rest of the repository dependencies, including the platform-specific gsplat wheel
  • Installs flash-attn
    • Linux: official prebuilt wheel from Dao-AILab/flash-attention releases
    • Windows: prebuilt community wheel matched to torch 2.4.0 + cu124 + cp310, used to avoid the known source-build failure path on Windows
  • Launches the upstream Gradio demo with python -m hyworld2.worldrecon.gradio_app --host 127.0.0.1 --port <dynamic-port>
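
The platform-dependent flash-attn step above can be sketched as a small selection helper. This is a hypothetical illustration of the branching logic, not the actual Pinokio install script:

```python
# Hypothetical sketch of the launcher's flash-attn selection logic;
# the real decision lives in the Pinokio install script, not here.
import platform

def flash_attn_strategy() -> str:
    """Return which flash-attn install path the launcher would take."""
    system = platform.system()
    if system == "Linux":
        # Official prebuilt wheel from Dao-AILab/flash-attention releases
        return "official-linux-wheel"
    if system == "Windows":
        # Community wheel matched to torch 2.4.0 + cu124 + cp310
        return "community-windows-wheel"
    # Other platforms would fall back to a local source build
    return "source-build"
```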

Requirements

  • NVIDIA GPU (recommended by upstream)
  • CUDA 12.4-compatible driver and toolkit (recommended by upstream)
  • Sufficient VRAM and system RAM for WorldMirror 2.0 inference
  • Internet access on first launch for Hugging Face model downloads
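
A minimal preflight sketch, assuming only that `nvidia-smi` ships with the NVIDIA driver, can confirm a driver is visible before the first launch:

```python
# Preflight sketch: checks driver visibility only; it does not verify
# CUDA 12.4 compatibility or available VRAM.
import shutil

def nvidia_driver_visible() -> bool:
    """True if nvidia-smi is on PATH, i.e. an NVIDIA driver is installed."""
    return shutil.which("nvidia-smi") is not None

if __name__ == "__main__":
    print("NVIDIA driver visible:", nvidia_driver_visible())
```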

On Windows, the original source build of flash-attn failed during ninja/CUDA compilation, so the launcher now installs a prebuilt Windows wheel instead of compiling flash-attn locally. If installation still fails, check Pinokio's logs/API output first.

How To Use

  1. Open the app in Pinokio.
  2. Click Install.
  3. Wait for dependency installation to finish.
  4. Click Start WorldMirror 2.0.
  5. Open the Open Web UI tab once Pinokio detects the local URL.
  6. Upload images or a video, then click Reconstruct in the Gradio app.

Demo inputs and generated outputs are written under the cloned upstream repo, typically in paths such as app/gradio_demo_output/ and app/inference_output/.
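
To locate results programmatically, a small helper can scan those typical output directories. The directory names are assumptions taken from the note above; upstream may change them:

```python
# Sketch: collect reconstruction outputs from the typical directories.
# Directory names are assumptions based on the observed demo behavior.
from pathlib import Path

def list_outputs(repo_root: str = "app") -> list[Path]:
    """Return all files under the usual demo/inference output folders."""
    results: list[Path] = []
    for name in ("gradio_demo_output", "inference_output"):
        d = Path(repo_root) / name
        if d.is_dir():
            results.extend(sorted(d.iterdir()))
    return results
```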

Automation

The stable automation surfaces in the upstream repo are the Python API and CLI pipeline. The Gradio app is primarily for interactive browser use and does not define custom named HTTP inference routes in the repository.

JavaScript

Use Node.js to call the upstream CLI from inside the cloned repo:

// Run the upstream CLI from inside the cloned repo (cwd: "app").
import { spawn } from "node:child_process";

const proc = spawn(
  "python",
  [
    "-m",
    "hyworld2.worldrecon.pipeline",
    "--input_path",
    "examples/worldrecon/realistic",
    "--output_path",
    "inference_output",
    "--no_interactive"
  ],
  { cwd: "app" }
);

// Stream the pipeline's output and report a non-zero exit code.
proc.stdout.on("data", (chunk) => process.stdout.write(chunk));
proc.stderr.on("data", (chunk) => process.stderr.write(chunk));
proc.on("close", (code) => {
  if (code !== 0) console.error(`pipeline exited with code ${code}`);
});

Python

Use the upstream pipeline directly:

from hyworld2.worldrecon.pipeline import WorldMirrorPipeline

pipeline = WorldMirrorPipeline.from_pretrained("tencent/HY-World-2.0")
result_dir = pipeline(
    "examples/worldrecon/realistic",
    output_path="inference_output",
)

print(result_dir)

Curl

For HTTP automation, the only stable route to rely on from the launcher is Gradio's runtime config endpoint, available once start.js is running. Replace <PINOKIO_PORT> with the port shown in Pinokio's Open Web UI URL.

curl http://127.0.0.1:<PINOKIO_PORT>/config

For actual reconstruction jobs, prefer the Python API or CLI examples above.
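
The curl check above can also be done from Python with only the standard library. This is a hedged polling sketch: the port number is whatever Pinokio reports, and /config is Gradio's standard runtime-config route:

```python
# Poll Gradio's /config endpoint until the demo answers or we time out.
import json
import time
import urllib.request

def wait_for_gradio(port: int, timeout: float = 120.0) -> dict:
    """Return the parsed Gradio config once the server responds."""
    url = f"http://127.0.0.1:{port}/config"
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return json.load(resp)
        except OSError:
            time.sleep(1)  # server not up yet; retry
    raise TimeoutError(f"Gradio did not respond on port {port}")
```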
