From b4dfa7be065876197cc1b710be54a81e09d172ca Mon Sep 17 00:00:00 2001
From: James
Date: Sun, 17 May 2026 00:07:41 +0000
Subject: [PATCH] docs(lambda): add migration guide + non-Lambda Dockerfile
 example
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Two adopter-facing artifacts that close out Phase 6b's user-facing
surface:

- docs/deploy/migrating-to-hyperframes-lambda.mdx — side-by-side
  concept mapping for users coming from another one-command-deploy
  video renderer. Covers the verb mapping (deploy/render/progress/
  destroy/sites/policies), composition format (plain HTML vs JSX),
  render config, and a handful of intentional differences (no HDR in
  distributed mode, no webm, gpu-mode=software requirement,
  fail-closed font fetch, local stack-state files,
  narrow-after-first-deploy IAM pattern). Closes with a migration
  checklist. Per repo convention, no competitor framework is named
  anywhere in the source — adopters self-identify.

- examples/k8s-jobs/Dockerfile.example + README.md — reference
  Dockerfile for adopters who want to run distributed renders outside
  AWS Lambda. Bakes Node 22 + chrome-headless-shell + ffmpeg + the
  producer source. Deliberately not published to a registry; adopters
  build it themselves so Chrome / ffmpeg / producer versions stay
  pinned to the checkout they audited. The README documents the
  typical K8s Jobs orchestration shape and points adopters at
  packages/aws-lambda/src/handler.ts as the reference adapter.

Migration guide registered under the existing Deploy group in
docs.json. .gitignore extended to negate the new examples/k8s-jobs/
path the same way examples/aws-lambda/ is negated. No source code
changes.
---
 .gitignore                                    |   2 +
 .../migrating-to-hyperframes-lambda.mdx       |  98 ++++++++++++++
 docs/docs.json                                |   3 +-
 examples/k8s-jobs/Dockerfile.example          | 123 ++++++++++++++++++
 examples/k8s-jobs/README.md                   |  44 +++++++
 5 files changed, 269 insertions(+), 1 deletion(-)
 create mode 100644 docs/deploy/migrating-to-hyperframes-lambda.mdx
 create mode 100644 examples/k8s-jobs/Dockerfile.example
 create mode 100644 examples/k8s-jobs/README.md

diff --git a/.gitignore b/.gitignore
index 93f34ba23..983925dfd 100644
--- a/.gitignore
+++ b/.gitignore
@@ -71,6 +71,8 @@ examples/*
 # Tracked OSS examples — negations override the blanket `examples/*` ignore.
 !examples/aws-lambda
 !examples/aws-lambda/**
+!examples/k8s-jobs
+!examples/k8s-jobs/**
 packages/studio/data/
 .desloppify/
 .worktrees/

diff --git a/docs/deploy/migrating-to-hyperframes-lambda.mdx b/docs/deploy/migrating-to-hyperframes-lambda.mdx
new file mode 100644
index 000000000..ea4802dd8
--- /dev/null
+++ b/docs/deploy/migrating-to-hyperframes-lambda.mdx
@@ -0,0 +1,98 @@
---
title: Migrating to HyperFrames Lambda
description: "Side-by-side mapping for adopters coming to HyperFrames from another one-command-deploy video renderer."
---

If you're already running a different framework that deploys a serverless video renderer with one command, the muscle memory translates cleanly: a single `deploy` provisions the stack, a single `render` starts a render, a single `progress` polls it, and a single `destroy` tears the stack down. This page maps your existing concepts onto HyperFrames' equivalents so you can spend the migration on the parts that actually differ instead of relearning the workflow.

## Concept mapping

| In your current framework you call... | In HyperFrames you call... | Notes |
|--------------------------------------|----------------------------|-------|
| One-shot deploy command | `hyperframes lambda deploy` | Builds `packages/aws-lambda/dist/handler.zip` and runs `sam deploy`. Idempotent. |
| One-shot site upload | `hyperframes lambda sites create ./project` | Content-addressed S3 key — re-uploads of an unchanged tree are skipped via a HeadObject 200. |
| Trigger a render | `hyperframes lambda render ./project --width 1920 --height 1080` | Returns immediately with a `renderId`; add `--wait` to stream per-chunk progress. |
| Poll render progress | `hyperframes lambda progress <renderId>` | Includes accrued cost in the same response. |
| Tear down | `hyperframes lambda destroy` | The S3 bucket is `Retain`'d — documented in the deploy guide. |
| Print/validate IAM policy | `hyperframes lambda policies user`/`role`/`validate` | Wire `validate` into CI to catch policy drift before the next deploy fails. |
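Strung together, a first end-to-end render looks like the sketch below. The commands and flags are the ones from the table; parsing the `renderId` out of the response with `jq` is an illustrative assumption about the JSON output shape — check your CLI's actual output before scripting against it.

```bash
# One-time: provision the stack (idempotent on re-run).
hyperframes lambda deploy

# Upload the composition; unchanged trees are skipped via the
# content-addressed key.
hyperframes lambda sites create ./project

# Kick off a render and block on per-chunk progress...
hyperframes lambda render ./project --width 1920 --height 1080 --wait

# ...or fire-and-forget, then poll. The jq extraction assumes a JSON
# body with a top-level renderId field.
RENDER_ID=$(hyperframes lambda render ./project --width 1920 --height 1080 | jq -r '.renderId')
hyperframes lambda progress "$RENDER_ID"

# Tear down when done (the S3 bucket is retained).
hyperframes lambda destroy
```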
## Composition format

If your current framework is **React-based**, you write JSX components, register them in a `Composition`, and the renderer compiles them at render time.

In HyperFrames, **compositions are plain HTML files**. The `data-duration`, `data-width`, `data-height`, and `data-fps` attributes on the root element drive every render parameter. There is no JSX compilation step — what you write is what the browser renders.

```html
<!doctype html>
<html data-duration="5" data-width="1920" data-height="1080" data-fps="30">
  <body>
    <div>Hello</div>
  </body>
</html>
```
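You can eyeball a translated composition locally before any AWS step — the migration checklist below leans on `npx hyperframes preview` for exactly this. A sketch; passing the project directory mirrors `render ./project` and is an assumption about the preview command's argument shape:

```bash
# Serve the composition locally and check it in a browser.
# The directory argument is assumed to work like `render ./project`;
# the CLI may also accept the HTML file path directly.
npx hyperframes preview ./project
```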
For framework-agnostic animation, HyperFrames ships first-party adapters for GSAP, Anime.js, CSS keyframes, Lottie, Three.js, and the Web Animations API — covered in [Concepts](/concepts) and the per-skill docs.

## Render config

Most adopters' render config maps directly:

| Concept | HyperFrames equivalent | Where it lives |
|---------|------------------------|----------------|
| `fps` | `--fps=30` (CLI) or `config.fps` (SDK) | 24, 30, 60 only — non-integer NTSC rationals are an in-process-only feature. |
| `width` / `height` | `--width` / `--height` flags, or `config.width` / `config.height` | Even integers ≤ 7680 (yuv420p parity). |
| `codec: 'h264' / 'h265'` | `--codec=h264` or `--codec=h265` (mp4 only) | h265 uses libx265 with closed-GOP keyint params so chunked concat-copy round-trips losslessly. |
| Output format | `--format=mp4 / mov / png-sequence` | Distributed mode refuses webm and HDR at plan time. |
| Quality preset | `--quality=draft / standard / high` | Maps onto ffmpeg encoder presets. |
| Chunk size in frames | `--chunk-size=240` (default 240) | ~8s at 30 fps; sized to fit Lambda's 15-min cap with headroom. |
| Max parallel chunks | `--max-parallel-chunks=16` (default 16) | Caps the Map state's fan-out. |
| Bitrate / CRF | `--bitrate=10M` or `--crf=18` | Mutually exclusive. |

## What HyperFrames does differently

A few parts of the contract are intentionally different from comparable frameworks. Read them up front so the migration doesn't surprise you mid-deploy.

### Deterministic Chrome path is mandatory

HyperFrames refuses `data-gpu-mode="hardware"` in distributed mode — hardware GL is non-deterministic across chunk boundaries, and the per-chunk concat-copy assumes byte-level reproducibility. Compositions that opt into hardware GL in-process must drop it for Lambda renders. The Lambda handler trips a typed, non-retryable `BROWSER_GPU_NOT_SOFTWARE` error at plan time that's easy to catch in the progress output.

### Font fetching fails closed

`failClosedFontFetch` is default-on in distributed mode. A composition that references a `font-family` HyperFrames can't fetch will fail at plan time (`FONT_FETCH_FAILED`) rather than silently falling back to the OS default. If you currently lean on system-font fallbacks, list the fonts you need explicitly via `<link>` tags or `@fontsource/*` imports.

### No HDR (yet)

`hdrMode: 'force-hdr'` is rejected at plan time. The v1.5 backlog covers HDR mp4 via `-bsf:v hevc_metadata` re-application; for now, HDR renders use the in-process renderer outside Lambda.

### No webm distributed

VP9 in matroska doesn't round-trip cleanly through concat-copy — the keyframe assumptions concat-copy builds around mp4's moov atom don't hold for matroska clusters. For now, webm renders use the in-process renderer; a controlled re-encode at the assemble stage is coming in v1.5. The Lambda handler refuses webm with `FORMAT_NOT_SUPPORTED_IN_DISTRIBUTED` so the failure is loud.

### State files are local by default

`hyperframes lambda deploy` writes `<repoRoot>/.hyperframes/lambda-stack-<stackName>.json` so subsequent verbs don't re-derive the bucket / state-machine ARN. Two worktrees produce two distinct state files. If you need a shared default location across CI workers, symlink the directory or pass `--stack-name` explicitly on every call.

### IAM policy is print-then-narrow

The default policy doc emitted by `hyperframes lambda policies user`/`role` uses `Resource: "*"` because the CloudFormation stack creates new ARNs on every adopter's first deploy. After your first successful deploy, narrow the `Resource` to the deployed ARNs — they're predictable from the CFN outputs. CI users typically check the narrowed policy into source and run `hyperframes lambda policies validate ./infra/policy.json` as a pre-deploy gate.
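Concretely, the narrow-then-validate loop looks like this — a sketch using the `infra/iam/` path the migration checklist below also uses:

```bash
# First deploy only: emit the broad bootstrap policy, then hand-narrow
# its Resource entries to the ARNs from the CFN outputs.
hyperframes lambda policies user > infra/iam/hyperframes.json

# Every PR: fail the build if the checked-in policy has drifted from
# what the CLI expects.
hyperframes lambda policies validate infra/iam/hyperframes.json
```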
## Migration checklist

1. **Inventory** the compositions you want to migrate. Filter out anything that needs HDR or webm — those stay on your current framework for now.
2. **Translate** each composition to plain HTML. The [Concepts](/concepts) page covers the data-attribute conventions; the `/hyperframes` skill (`npx skills add heygen-com/hyperframes`) makes Claude / Cursor / Codex aware of them too.
3. **Wire** the new composition into your build pipeline alongside the old one. HyperFrames doesn't need an external bundler — you can `npx hyperframes preview` against the HTML directly.
4. **Deploy** to a separate AWS account first, or use `--stack-name=hyperframes-staging`. Run a real render with `--wait` and verify the output bytes.
5. **Add the policy** to your CI: `hyperframes lambda policies user > infra/iam/hyperframes.json`, then `hyperframes lambda policies validate infra/iam/hyperframes.json` on every PR.
6. **Cut over** by pointing your existing automation at the new render endpoint. Keep the old deployment alive until you've verified rolling renders for a release cycle, then `hyperframes lambda destroy` the staging stack and decommission the previous one.
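Step 4 in shell form — a sketch reusing the staging stack name from the list above; combining `--stack-name` with `render` and `destroy` follows the state-files note earlier on this page:

```bash
# Stand up an isolated staging stack alongside (not touching) production.
hyperframes lambda deploy --stack-name=hyperframes-staging

# Real render against staging; --wait blocks on per-chunk progress.
hyperframes lambda render ./project --stack-name=hyperframes-staging --wait

# After cut-over (step 6): remove the staging stack. The S3 bucket is Retain'd.
hyperframes lambda destroy --stack-name=hyperframes-staging
```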
+# +# Run a chunk worker (an orchestrator script wraps this entry point): +# +# docker run --rm \ +# -e PRODUCER_HEADLESS_SHELL_PATH=/opt/chrome/chrome-headless-shell \ +# -v /tmp/hyperframes:/tmp/hyperframes \ +# hyperframes-chunk-runner:local \ +# node -e 'import("@hyperframes/producer/distributed").then(({renderChunk})=>renderChunk(...))' +# +# Lambda adopters use `packages/aws-lambda/dist/handler.zip` instead; +# this Dockerfile is the K8s / Cloud Run / ECS path. + +# ── Base ───────────────────────────────────────────────────────────────────── +# Debian bookworm-slim because the chrome-headless-shell dynamic-library set +# matches what we use in CI. Amazon Linux 2023 also works (Lambda's base +# image) but is harder to debug locally. +FROM node:22-bookworm-slim AS base + +# ── System deps ────────────────────────────────────────────────────────────── +# - ffmpeg: the producer's encode + audio mix +# - libfontconfig / libfreetype / fonts-liberation: Chrome text shaping +# - chromium-style ABI deps (the minimum set chrome-headless-shell needs): +# libnss3, libatk-bridge2.0, libdrm2, libgbm1, libxshmfence1, libxkbcommon0, +# libxcomposite1, libxdamage1, libxfixes3, libxrandr2, libasound2, +# libpangocairo-1.0-0 +# - tini: clean PID 1 for container teardown signals (Cloud Run / Fargate +# send SIGTERM at the 10-min grace boundary). +RUN apt-get update && apt-get install -y --no-install-recommends \ + ffmpeg \ + ca-certificates \ + fonts-liberation \ + libasound2 \ + libatk-bridge2.0-0 \ + libatk1.0-0 \ + libc6 \ + libcairo2 \ + libcups2 \ + libdbus-1-3 \ + libdrm2 \ + libfontconfig1 \ + libfreetype6 \ + libgbm1 \ + libglib2.0-0 \ + libnspr4 \ + libnss3 \ + libpango-1.0-0 \ + libpangocairo-1.0-0 \ + libxcomposite1 \ + libxdamage1 \ + libxfixes3 \ + libxkbcommon0 \ + libxrandr2 \ + libxshmfence1 \ + tini \ + tzdata \ + wget \ + xz-utils \ + && rm -rf /var/lib/apt/lists/* + +# ── Chrome ─────────────────────────────────────────────────────────────────── +# Use `@puppeteer/browsers` to fetch the same chrome-headless-shell version +# the producer pins. Keep the bun + chrome install in the build context so +# the runtime image is reproducible. +ENV CHROME_HEADLESS_SHELL_VERSION=131.0.6778.139 +ENV CHROME_DIR=/opt/chrome + +RUN mkdir -p "$CHROME_DIR" && \ + npm install --global @puppeteer/browsers@2.13.0 && \ + npx @puppeteer/browsers install "chrome-headless-shell@${CHROME_HEADLESS_SHELL_VERSION}" \ + --path "$CHROME_DIR" && \ + npm uninstall --global @puppeteer/browsers && \ + rm -rf /root/.npm + +ENV PRODUCER_HEADLESS_SHELL_PATH=${CHROME_DIR}/chrome-headless-shell/linux-${CHROME_HEADLESS_SHELL_VERSION}/chrome-headless-shell-linux64/chrome-headless-shell + +# ── HyperFrames ────────────────────────────────────────────────────────────── +# Copy the workspace bun-locked package set. We use bun in the build +# (matches the rest of the repo) but the runtime is plain Node — no bun +# is needed at run time. +WORKDIR /app +COPY package.json bun.lock ./ +COPY packages/aws-lambda/package.json packages/aws-lambda/ +COPY packages/core/package.json packages/core/ +COPY packages/engine/package.json packages/engine/ +COPY packages/producer/package.json packages/producer/ + +RUN npm install --global bun && \ + bun install --frozen-lockfile && \ + npm uninstall --global bun + +# Bring in the source. We're not building the producer's dist/ here — bun's +# workspace + tsx + esm resolution can run the producer straight from +# `packages/producer/src/**`. 
The AWS Lambda implementation in `packages/aws-lambda/src/handler.ts` is one concrete adapter — read it as a reference for the per-activity event shape.

## Lambda?

If you want AWS Lambda specifically, use `hyperframes lambda deploy` instead — it ships a turnkey deployment. See [docs/deploy/aws-lambda.mdx](../../docs/deploy/aws-lambda.mdx).