
docs: Add video generation support to Vercel AI SDK documentation#571

Open
promptless[bot] wants to merge 2 commits into main from promptless/ai-sdk-video-generation

Conversation


promptless bot commented Mar 23, 2026

Open this suggestion in Promptless to view citations and reasoning process

Documents the new experimental_generateVideo function in the Vercel AI SDK integration, supporting 15 video models for text-to-video and image-to-video generation. Includes usage examples, provider options, and a complete model reference table.

Trigger Events

runpod/ai-sdk-provider PR #57: feat(video): add video generation support with 15 models

Promptless Research (5 files, 1 GitHub PR)

  • Reviewed runpod/ai-sdk-provider#57
  • Viewed .long_term_context/product_knowledge/product_overview.md
  • Viewed .long_term_context/doc_workflow/client_instructions.md
  • Viewed public-endpoints/ai-sdk.mdx
  • Viewed .long_term_context/style/client_style_guide.md
  • Viewed tutorials/public-endpoints/text-to-video-pipeline.mdx

Agent Response

I've created a documentation suggestion for the video generation feature in PR #57. Here's a summary:

Suggestion created: Add video generation support to Vercel AI SDK documentation

Changes made to public-endpoints/ai-sdk.mdx:

  1. Updated intro and feature list - Added video generation alongside text, streaming, and image generation in the page introduction and "Why use the Vercel AI SDK?" section.

  2. Added Video generation section with:

    • Text-to-video example using experimental_generateVideo
    • Image-to-video example for animating existing images
    • Video generation parameters (duration, aspectRatio, seed)
    • Provider options table (negative_prompt, guidance_scale, num_inference_steps, style, polling options)
  3. Added Video models table listing all 15 supported models:

    • 4 text-to-video models (t2v)
    • 11 image-to-video models (i2v)
    • From 6 companies: Pruna AI, Shengshu/Vidu, KwaiVGI/Kling, Alibaba/Wan, ByteDance/Seedance, OpenAI/Sora

The documentation follows the existing patterns in the file for image generation and matches the implementation details from the PR's README and source files.
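The t2v/i2v split above follows the model ID naming visible in the PR (e.g. `alibaba/wan-2.2-t2v-720-lora`). As a rough sketch, assuming the type marker always appears as a `t2v` or `i2v` segment in the ID (an assumption, not verified against all 15 IDs), the category could be derived like this:

```typescript
// Hypothetical helper -- infers a model's generation type from its ID,
// assuming the "t2v" / "i2v" marker naming seen in the PR description.
type VideoModelType = "text-to-video" | "image-to-video" | "unknown";

function classifyVideoModel(modelId: string): VideoModelType {
  if (modelId.includes("t2v")) return "text-to-video";
  if (modelId.includes("i2v")) return "image-to-video";
  return "unknown";
}

console.log(classifyVideoModel("alibaba/wan-2.2-t2v-720-lora")); // "text-to-video"
```

The actual mapping in the PR lives in `VIDEO_MODEL_ID_TO_ENDPOINT_URL` in `src/runpod-provider.ts`; this string check is only an illustration of the naming convention.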


Tip: Point @Promptless at some of your docs debt and have it clean them up in the background 🧹


## Video generation

Use `experimental_generateVideo` to generate videos from text prompts or images. The Runpod provider supports 15 video models, including Sora, Wan, Seedance, and Kling.
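As a minimal sketch of the call shape, based only on the parameters named in this PR summary (`duration`, `aspectRatio`, `seed`, and Runpod provider options such as `negative_prompt` and `guidance_scale`): the exact `experimental_generateVideo` signature is not shown here, so the helper below only assembles the option object such a call might send.

```typescript
// Hypothetical option shapes -- field names follow the parameters listed
// in the PR summary; the real SDK types may differ.
interface RunpodVideoOptions {
  negative_prompt?: string;
  guidance_scale?: number;
  num_inference_steps?: number;
}

interface GenerateVideoArgs {
  model: string;          // a Runpod video model ID
  prompt: string;
  duration?: number;      // seconds
  aspectRatio?: string;   // e.g. "16:9"
  seed?: number;
  providerOptions?: { runpod?: RunpodVideoOptions };
}

// Flattens the args into the request body such a call might send;
// provider options are merged into the top level.
function toRequestBody(args: GenerateVideoArgs): Record<string, unknown> {
  if (!args.prompt.trim()) throw new Error("prompt must be non-empty");
  const { providerOptions, ...rest } = args;
  return { ...rest, ...providerOptions?.runpod };
}

const body = toRequestBody({
  model: "example/model-t2v", // illustrative ID, not from the real model table
  prompt: "A drone shot over a coastline at sunset",
  duration: 5,
  providerOptions: { runpod: { guidance_scale: 7.5 } },
});
console.log(body.guidance_scale); // 7.5
```

Treat this as a sketch of the parameter surface only; consult the generated docs and the PR's README for the authoritative call syntax.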

Citation: Video generation feature and model list sourced from PR #57, which adds videoModel()/video() methods implementing AI SDK v6's Experimental_VideoModelV3 interface. Model IDs, types, and provider options from src/runpod-video-options.ts and README.md updates.

| `google-nano-banana-edit` | [Nano Banana Edit](/public-endpoints/models/nano-banana-edit). Supports multiple reference images. |
| `bytedance-seedream-4-0-t2i` | [Seedream 4.0](/public-endpoints/models/seedream-4-t2i). Text-to-image with good prompt adherence. |

### Video models

Citation: 15 video models listed in PR #57 from VIDEO_MODEL_ID_TO_ENDPOINT_URL mapping in src/runpod-provider.ts. Companies and model types verified against PR description table.


promptless bot commented Mar 23, 2026

I've added new changes addressing this GitHub PR in commit a402796.

| `google-nano-banana-edit` | [Nano Banana Edit](/public-endpoints/models/nano-banana-edit). Supports multiple reference images. |
| `bytedance-seedream-4-0-t2i` | [Seedream 4.0](/public-endpoints/models/seedream-4-t2i). Text-to-image with good prompt adherence. |

### Video models

Citation: Video models table updated to include Resolution, Aspect Ratios, and Duration columns based on changes in README.md. The PR also fixes alibaba/wan-2.2-t2v-720-lora model type and adds images array mapping for kwaivgi/kling-video-o1-r2v model.


mintlify bot commented Mar 23, 2026

Preview deployment for your docs.

| Project | Status | Preview | Updated (UTC) |
| --- | --- | --- | --- |
| runpod-docs | 🟢 Ready | View Preview | Mar 23, 2026, 5:08 PM |
