
aigateway-sh/examples

 
 


AIgateway examples

Five copy-paste-ready starter projects, each under 100 lines. Same key, every modality.

| Folder | What | Run |
| --- | --- | --- |
| `chat/` | Streaming chat completion through any of 150 models | `cd chat && pnpm i && pnpm start` |
| `image/` | One-shot image generation (FLUX, Stable Diffusion, Imagen) | `cd image && pnpm i && pnpm start` |
| `voice/` | Speech-to-text + text-to-speech round-trip (Whisper → Aura) | `cd voice && pnpm i && pnpm start` |
| `embeddings/` | Encode + cosine-similarity with BGE M3 | `cd embeddings && pnpm i && pnpm start` |
| `multimodal-agent/` | Vision → tool call → TTS in 50 lines (aigateway-py) | `cd multimodal-agent && pip install -r requirements.txt && python agent.py` |
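The similarity step in the embeddings starter needs no SDK at all. Assuming the API returns each embedding as a plain array of numbers (check `embeddings/` for the exact response shape), the comparison is a few lines:

```javascript
// Cosine similarity between two embedding vectors (plain number arrays).
// Returns a value in [-1, 1]; closer to 1 means more semantically similar.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Identical vectors score 1, orthogonal vectors score 0 — rank documents by this score against a query embedding.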

One-time setup

Get a key at aigateway.sh/signin (free, no card). Set it once:

export AIGATEWAY_API_KEY=sk-aig-...

Then any example below runs with that env var in scope.

One-click Vercel deploy

Deploy with Vercel

Click → fork the public examples repo → set AIGATEWAY_API_KEY → live in 60 seconds. Swap chat in the URL with image, voice, embeddings, or multimodal-agent for the other starters.

Why these five

They cover the four modalities every AI app eventually touches (text, image, voice, embeddings) plus one composed agent that chains three of them. Once you've forked one, swapping models / providers is one string change.
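The "one string change" claim can be sketched as follows, assuming the gateway accepts an OpenAI-style chat request body and `provider/model` identifiers (the model IDs below are placeholders — see `chat/` for real ones):

```javascript
// Build a chat request body; switching providers or models is only
// a change to the `model` string, nothing else in the call changes.
function chatBody(model, prompt) {
  return {
    model, // e.g. swap "openai/gpt-4o" for "anthropic/claude-sonnet"
    messages: [{ role: "user", content: prompt }],
    stream: true,
  };
}

const body = chatBody("openai/gpt-4o", "Summarize this repo in one line.");
```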

If you build something interesting, email hello@aigateway.sh with a link — we publish a customers/ showcase from real use.

Bug reports + feature requests

support@aigateway.sh · @buildwithrakesh · LinkedIn

License

MIT

About

Runnable code examples for every AIgateway endpoint — chat, embeddings, image, audio, voice, evals, replays, sub-accounts.
