An agent harness that uses a small Typer CLI to submit ComfyUI API prompts, stream async progress via WebSocket, and download generated outputs.
## Features

- Reads the ComfyUI server URL from `config.json`
- Submits prompt JSON to `POST /prompt`
- Supports runtime overrides for key fields:
  - positive text prompt
  - mesh seed
  - target face count
  - file name prefix
  - texture seed
- Streams progress via `GET /ws?clientId=...` when a client ID is available
- Waits for completion using `GET /history/{prompt_id}` (with queue polling fallback)
- Auto-downloads `.glb` output via `GET /view`
## Requirements

- Python + `uv`
- A ComfyUI server reachable from this machine
- A ComfyUI build with the required nodes/models installed and running at `server_url`, such as:
  - `michaelgold/comfy3d`, or
  - another ComfyUI setup that includes qwen-image-2512 and Trellis2
## Install

```bash
cd /Users/mg/.openclaw/workspace/comfy-prompt-cli
uv sync
```
## Usage

### Initialize config

```bash
uv run comfy-prompt-cli config init --force
```

Default `config.json`:

```json
{
  "server_url": "http://localhost:8188/"
}
```

### Health check

```bash
uv run comfy-prompt-cli health
```

### Text -> image

```bash
uv run comfy-prompt-cli text-to-image \
  --prompt "A cinematic portrait of a fox in rain"
```

### Image + text -> image

```bash
uv run comfy-prompt-cli image-text-to-image \
  --image path/to/input.png \
  --prompt "Put this character in a futuristic city at sunset"
```

### Image -> GLB

```bash
uv run comfy-prompt-cli image-to-glb \
  --image path/to/input.png \
  --mesh-seed 12345 \
  --target-face-num 800000 \
  --filename-prefix my_mesh \
  --texture-seed 67890
```

### Rig a GLB

```bash
uv run comfy-prompt-cli rig-glb \
  --mesh wrestler_multi_trellis.glb \
  --glb-name rigged
```

### Text -> GLB

```bash
uv run comfy-prompt-cli text-to-glb \
  --prompt "A stylized wrestler character, full body, neutral pose"
```

### Text -> rigged GLB

```bash
uv run comfy-prompt-cli text-to-rigged-glb \
  --prompt "A stylized wrestler character, full body, neutral pose"
```

### Send a raw API prompt

```bash
uv run comfy-prompt-cli send path/to/prompt_api.json
```

With runtime overrides:

```bash
uv run comfy-prompt-cli send path/to/prompt_api.json \
  --prompt "A 3d cartoon astronaut in a t-pose" \
  --mesh-seed 12345 \
  --target-face-num 800000 \
  --filename-prefix astronaut \
  --texture-seed 67890
```

### Wait for a submitted prompt

```bash
uv run comfy-prompt-cli wait <prompt_id> --out-dir downloads
```

If you want live `/ws` progress for an already-submitted prompt, pass the same `client_id` used when submitting:

```bash
uv run comfy-prompt-cli wait <prompt_id> --client-id <client_id> --out-dir downloads
```

### Run

```bash
uv run comfy-prompt-cli run path/to/prompt_api.json \
  --prompt "A 3d cartoon astronaut in a t-pose" \
  --mesh-seed 12345 \
  --target-face-num 800000 \
  --filename-prefix astronaut \
  --texture-seed 67890 \
  --out-dir downloads
```

### Dry run

```bash
uv run comfy-prompt-cli send path/to/prompt_api.json --dry-run
```

### Quick examples

```bash
# Text -> image
uv run comfy-prompt-cli text-to-image --prompt "A 3d cartoon astronaut in a t-pose"

# Image + text -> image
uv run comfy-prompt-cli image-text-to-image \
  --image path/to/input.png \
  --prompt "Make this look like a fashion editorial"

# Image -> GLB
uv run comfy-prompt-cli image-to-glb \
  --image path/to/input.png \
  --mesh-seed 12345 \
  --target-face-num 800000 \
  --filename-prefix astronaut \
  --texture-seed 67890
```

## Input format

`send` expects ComfyUI API prompt JSON.
Accepted:

- a direct API prompt object (`{"node_id": {...}}`), or
- a wrapper with a top-level `prompt` key (`{"prompt": {...}}`)

Rejected:

- the UI workflow export format with top-level `nodes` + `links`

If you pass workflow export JSON, the CLI shows a clear error telling you to export/copy the API prompt JSON instead.
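The distinction between the accepted and rejected shapes can be checked mechanically. This is a sketch of that kind of validation, not the CLI's actual code; the function name is illustrative:

```python
def extract_api_prompt(data: dict) -> dict:
    """Return the API prompt mapping, or raise ValueError for UI workflow exports."""
    if "nodes" in data and "links" in data:
        # UI workflow export format: not accepted by POST /prompt
        raise ValueError(
            "This looks like a UI workflow export; "
            "export/copy the API prompt JSON instead."
        )
    if "prompt" in data and isinstance(data["prompt"], dict):
        return data["prompt"]  # wrapper form: {"prompt": {...}}
    return data  # direct form: {"node_id": {...}}
```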
Image-based commands (`image-text-to-image`, `image-to-glb`) accept a local image path.
The CLI uploads that image to ComfyUI input storage before submitting the workflow.
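A minimal sketch of that upload step, assuming `POST /upload/image` accepts a multipart form with an `image` file field (the field name and the body-building helper are assumptions for illustration):

```python
import uuid
import urllib.request


def build_multipart(field: str, filename: str, content: bytes) -> tuple[bytes, str]:
    """Build a multipart/form-data body with a single file field."""
    boundary = uuid.uuid4().hex
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        f"Content-Type: application/octet-stream\r\n\r\n"
    ).encode()
    tail = f"\r\n--{boundary}--\r\n".encode()
    return head + content + tail, f"multipart/form-data; boundary={boundary}"


def upload_image(server: str, path: str) -> None:
    """POST a local image to ComfyUI input storage before submitting the workflow."""
    with open(path, "rb") as f:
        data = f.read()
    body, content_type = build_multipart("image", path.rsplit("/", 1)[-1], data)
    req = urllib.request.Request(
        f"{server}/upload/image", data=body, headers={"Content-Type": content_type}
    )
    urllib.request.urlopen(req)
```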
## Example workflows

- `examples/qwen_image_2512.json`: text-to-image API prompt workflow
- `examples/qwen_image_edit_2511.json`: image + text editing API prompt workflow
- `examples/img_to_trellis2.json`: image-to-GLB API prompt workflow
- `examples/qwen_to_trellis2.json`: text-to-GLB workflow template
## Troubleshooting

- Connection error: verify `server_url` in `config.json`, host reachability, and the ComfyUI port.
- Upload error on image commands: verify that the image path exists and that ComfyUI supports `POST /upload/image`.
- No GLB found: the workflow may not output `.glb`; check the `/history/{prompt_id}` outputs.
- Large GLB can't be sent over Telegram: Telegram may reject it with `413 Request Entity Too Large`; use a local path or reduce mesh/texture settings.
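For the "No GLB found" case, the `/history/{prompt_id}` entry can be inspected directly. This sketch assumes the common ComfyUI history shape, where each node's outputs hold lists of file records with a `filename` key; the helper names are illustrative:

```python
def list_output_files(history_entry: dict) -> list[str]:
    """Collect every output filename recorded for a prompt, GLB or not."""
    files = []
    for node_output in history_entry.get("outputs", {}).values():
        for value in node_output.values():
            if isinstance(value, list):
                for item in value:
                    if isinstance(item, dict) and "filename" in item:
                        files.append(item["filename"])
    return files


def glb_files(history_entry: dict) -> list[str]:
    """Filter to .glb outputs; an empty result means the workflow produced no mesh."""
    return [f for f in list_output_files(history_entry) if f.endswith(".glb")]
```

If `list_output_files` returns images but `glb_files` is empty, the workflow ran but never reached a GLB-exporting node.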