Five copy-paste-ready starter projects, each under 100 lines. Same key, every modality.
| Folder | What | Run |
|---|---|---|
| `chat/` | Streaming chat completion through any of 150 models | `cd chat && pnpm i && pnpm start` |
| `image/` | One-shot image generation (FLUX, Stable Diffusion, Imagen) | `cd image && pnpm i && pnpm start` |
| `voice/` | Speech-to-text + text-to-speech round-trip (Whisper → Aura) | `cd voice && pnpm i && pnpm start` |
| `embeddings/` | Encode + cosine-similarity with BGE M3 | `cd embeddings && pnpm i && pnpm start` |
| `multimodal-agent/` | Vision → tool call → TTS in 50 lines (aigateway-py) | `cd multimodal-agent && pip install -r requirements.txt && python agent.py` |
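The core computation in the embeddings starter is cosine similarity between two vectors. A minimal, gateway-independent sketch of that math in plain Python:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors:
    dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Vectors pointing the same direction score 1.0; orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 2.0], [2.0, 4.0]))  # → 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # → 0.0
```

In the real starter the vectors come back from BGE M3; the similarity step itself is just this.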
Get a key at aigateway.sh/signin (free, no card). Set it once:

```sh
export AIGATEWAY_API_KEY=sk-aig-...
```

Then any example below runs with that env var in scope.
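Reading that key at runtime looks like the following in Python (the JS starters do the equivalent with `process.env`). The Bearer-token header format is an assumption for illustration, not the gateway's documented auth scheme:

```python
import os

# Read the key exported above. The placeholder fallback is only so a dry run
# doesn't crash; real requests need the actual key in the environment.
api_key = os.environ.get("AIGATEWAY_API_KEY", "sk-aig-placeholder")

# Assumed Bearer-style header; confirm against the gateway docs.
headers = {"Authorization": f"Bearer {api_key}"}
```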
Fork the public examples repo, set `AIGATEWAY_API_KEY`, and you're live in 60 seconds. Swap `chat` in the URL for `image`, `voice`, `embeddings`, or `multimodal-agent` to open the other starters.
They cover the four modalities every AI app eventually touches (text, image, voice, embeddings), plus one composed agent that chains three of them. Once you've forked a starter, swapping models or providers is a single string change.
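To make "a single string change" concrete, here is a hypothetical sketch. The payload shape and model identifiers are illustrative assumptions, not the gateway's documented schema:

```python
# Hypothetical OpenAI-style payload; field names and model IDs are
# assumptions for illustration only.
base_request = {
    "model": "openai/gpt-4o-mini",
    "messages": [{"role": "user", "content": "Summarize this repo."}],
}

# Switching providers means changing exactly one string.
swapped_request = {**base_request, "model": "anthropic/claude-3-haiku"}
```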
If you build something interesting, email hello@aigateway.sh with a link — we publish a customers/ showcase from real use.
support@aigateway.sh · @buildwithrakesh · LinkedIn
MIT