The unified, OpenAI-compatible router that automatically picks the best AI provider for every request and falls back transparently when one fails.
English · Español · 中文 · Português · Français · Deutsch · 日本語 · 한국어 · Русский · العربية
Hi, I'm Willen Ponce. I built WallasAPI alone, in stolen hours between worrying about rent and the next meal, on a 2018 laptop, in a rented room in Peru that isn't mine.
I have no investors. No team. No company. Just code, determination, and the desperate need to prove that you don't need Silicon Valley money to build something useful.
If WallasAPI saves you even one hour of integration work, please consider:
- ⭐ Starring this repo: costs you nothing, helps me enormously
- ☕ Buying me a coffee on Ko-fi or via PayPal: even $1 changes my day
- 📧 Sending an email to wubjak@protonmail.ch that just says "I'm using WallasAPI for X". That's it. That's enough.
You're building with AI in 2026. You face a real problem:
- 🔴 OpenAI goes down → your app dies
- 🔴 Your Claude API key expires → users complain
- 🔴 Gemini is free but doesn't accept your file format → manual conversion
- 🔴 You want to use a free model for cheap tasks and a powerful one for hard tasks → you write 200 lines of switching logic
- 🔴 Each provider has a different SDK, different format, different errors
WallasAPI solves all of this with one OpenAI-compatible endpoint. Send a request, and it:
- Analyzes the content (text, image, audio, PDF, video)
- Picks the optimal provider based on capabilities, speed, cost, and current availability
- Routes the request automatically
- Falls back transparently if the primary provider fails; your user never sees the error
- Returns the response in standard OpenAI format, with streaming if you asked for it
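The pick-a-provider step above boils down to a capability lookup. Here is a hypothetical sketch of that idea, not WallasAPI's actual internals; the capability map and function name are invented for the example:

```python
# Hypothetical sketch of content-aware routing. The capability map below is
# invented for illustration and is not WallasAPI's real provider registry.
def pick_provider(content_type: str, providers: dict) -> str:
    """Return the first provider whose capabilities cover the content type."""
    for name, capabilities in providers.items():
        if content_type in capabilities:
            return name
    raise LookupError(f"no provider handles {content_type!r}")

# Assumed capabilities, for the example only.
PROVIDERS = {
    "groq": {"text"},
    "gemini": {"text", "image", "audio", "video", "pdf"},
    "openai": {"text", "image", "audio"},
}

assert pick_provider("text", PROVIDERS) == "groq"
assert pick_provider("video", PROVIDERS) == "gemini"
```

The real router also weighs speed, cost, and live availability, but the core decision is this kind of capability match.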
Your existing OpenAI SDK code works unchanged. Just point it at http://localhost:8001/v1.
Every feature here was built because I needed it to ship products without a budget:
| Feature | Why it matters |
|---|---|
| 🔀 Multi-provider routing with auto-fallback | If OpenAI goes down, Gemini takes over in milliseconds. Your app keeps working. |
| 🌊 Real streaming with transparent fallback | Token-by-token responses. If the primary provider dies mid-stream, the fallback is invisible to the user. |
| 🧠 Content-aware multimodal routing | Send a PDF to Groq? It auto-OCRs it. Send a video to Gemini? Native processing. You don't pick the provider; the content does. |
| 📊 Rich metadata for smart clients | Every model exposes context window, pricing, tools support, modalities. Filter: `?pricing=free&capability=vision`. |
| 💾 Persistent local memory | Conversations saved as JSON, optionally synced to Obsidian. Your data stays yours. |
| 🎨 Unified image/video/voice generation | One endpoint, multiple providers (Flux, DALL-E, Pollinations, edge-tts, Gemini). |
| 🔍 OCR with fallback chain | EasyOCR → Mistral → Gemini → local Ollama. No image goes unread. |
| 🔒 100% private local models via Ollama | Run Llama, Mistral, Qwen, DeepSeek offline. Zero data leaves your machine. |
| 📅 Google integration | Drive, Calendar, Gmail with OAuth2. Project management with threads. |
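The auto-fallback behavior in the table can be sketched as a simple try-each-in-order loop. This is a conceptual illustration with stubbed providers, not WallasAPI's actual code:

```python
# Conceptual sketch of transparent fallback. Provider names and the stub
# functions are made up for illustration; WallasAPI's real logic also
# tracks availability and latency.
def complete_with_fallback(prompt, providers):
    """Try each (name, call) pair in order; return the first success."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:
            errors.append((name, exc))  # remember why it failed, keep going
    raise RuntimeError(f"all providers failed: {errors}")

# Stub providers for the example.
def flaky(prompt):  # simulates a provider outage
    raise ConnectionError("503 Service Unavailable")

def healthy(prompt):
    return f"answer to {prompt!r}"

name, answer = complete_with_fallback("hi", [("openai", flaky), ("gemini", healthy)])
assert name == "gemini"
```

The caller only ever sees the successful answer; the failed attempt is invisible, which is the "your user never sees the error" guarantee described above.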
| Provider | Capabilities | Pricing |
|---|---|---|
| Gemini (Google) | Chat, vision, audio, video, native files, image/video gen | Free |
| Groq | Ultra-fast Llama, Mixtral | Free |
| GitHub Models | GPT-4o, o1, o3, Mistral, Llama, Cohere | Free |
| OpenRouter | Claude, DeepSeek, Qwen + 100 more | Mixed |
| Cerebras | Ultra-fast Llama on proprietary HW | Free |
| Pollinations | Flux, SDXL image gen | Free |
| Ollama | Local Llama, Mistral, Qwen, DeepSeek | Free |
| HuggingFace | Community models, Spaces video | Mixed |
| Cohere | Command R, Command R+ | Paid |
| Mistral AI | Mistral Large, Medium, Pixtral | Paid |
| NVIDIA NIM | GPU-optimized enterprise LLMs | Paid |
| OpenAI | GPT-4o, GPT-4.1, DALL-E, Whisper, embeddings, TTS | Paid |
With just Gemini + Groq + GitHub Models (all free) you have access to dozens of state-of-the-art models without paying a cent.
On Windows:

```shell
git clone https://github.com/wubjak/wallasapi.git
cd wallasapi
# Double-click start.bat
# Server is up at http://localhost:8001
```

On Linux / macOS:

```shell
git clone https://github.com/wubjak/wallasapi.git
cd wallasapi
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
python -m wallasAPI.api_server
```

Interactive Swagger UI: http://localhost:8001/docs
```python
import openai

client = openai.OpenAI(
    base_url="http://localhost:8001/v1",
    api_key="anything-local"
)

# Don't pick a provider. Pick a strategy.
response = client.chat.completions.create(
    model="auto",  # WallasAPI picks the best free provider available NOW
    messages=[{"role": "user", "content": "Explain quantum entanglement"}]
)
print(response.choices[0].message.content)
```

Virtual models you can use instead of guessing provider names:
| Virtual model | Strategy |
|---|---|
| `auto` | Best available right now |
| `fast` | Lowest latency (Groq, Cerebras) |
| `standard` | Quality/speed/cost balance |
| `reasoning` | Deep thinking (DeepSeek R1, o1, o3, Gemini 2.5 Pro) |
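These virtual models replace the hand-written switching logic mentioned earlier. A minimal sketch of the idea, where the task labels are invented for the example and only the virtual model names come from the table above:

```python
# Map your own task types to WallasAPI's virtual models instead of writing
# per-provider switching logic. The task labels here are hypothetical.
VIRTUAL_MODEL = {
    "chat": "standard",
    "autocomplete": "fast",
    "math_proof": "reasoning",
}

def model_for(task: str) -> str:
    """Pick a virtual model by task type, defaulting to 'auto'."""
    return VIRTUAL_MODEL.get(task, "auto")

assert model_for("autocomplete") == "fast"
assert model_for("unknown-task") == "auto"
```

You would then pass `model_for(task)` as the `model` argument to `client.chat.completions.create(...)` and let the router do the rest.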
OpenAI-compatible:
- `POST /v1/chat/completions`: chat with streaming
- `POST /v1/embeddings`: multi-provider embeddings
- `POST /v1/images/generations`: Flux, DALL-E, etc.
- `POST /v1/videos/generations`: Gemini, HuggingFace
- `POST /v1/tts`: text-to-speech

WallasAPI exclusive:
- `GET /v1/models?pricing=free&capability=vision`: filtered model discovery
- `GET /v1/models/{id}`: full metadata for any model
- `GET /v1/capabilities/summary`: aggregate stats
- `GET /v1/providers`: provider-level capabilities
- `POST /v1/ocr/process`: OCR with auto-fallback
- `POST /v1/sync/obsidian`: sync conversations to your vault
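To make the `?pricing=free&capability=vision` filter concrete, here is a local illustration of the same filtering semantics. The metadata shape is an assumption for the example, not the API's exact response schema:

```python
# Local illustration of what ?pricing=free&capability=vision selects.
# The record shape is assumed for the example, not the exact API schema.
MODELS = [
    {"id": "gemini-2.0-flash", "pricing": "free", "capabilities": ["chat", "vision"]},
    {"id": "llama-3.1-70b", "pricing": "free", "capabilities": ["chat"]},
    {"id": "gpt-4o", "pricing": "paid", "capabilities": ["chat", "vision"]},
]

def filter_models(models, pricing=None, capability=None):
    """Keep models matching every filter that was provided."""
    return [
        m for m in models
        if (pricing is None or m["pricing"] == pricing)
        and (capability is None or capability in m["capabilities"])
    ]

free_vision = filter_models(MODELS, pricing="free", capability="vision")
assert [m["id"] for m in free_vision] == ["gemini-2.0-flash"]
```

Smart clients can use this kind of discovery to pick models at runtime instead of hard-coding model IDs.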
Copy .env.example to .env and fill in only the keys you have. WallasAPI works with whatever you give it.
```shell
# 100% free providers (start here)
GEMINI_API_KEY=your_key
GROQ_API_KEY=your_key
GITHUB_TOKEN=your_token

# Optional paid
OPENAI_API_KEY=your_key
OPENROUTER_API_KEY=your_key
```

| Provider | Where | Time |
|---|---|---|
| Gemini | ai.google.dev → Get API key | 1 min |
| Groq | console.groq.com → API Keys | 1 min |
| GitHub Models | github.com/settings/tokens → classic token | 2 min |
| OpenRouter | openrouter.ai → Keys | 1 min |
| Cerebras | cloud.cerebras.ai → API Keys | 2 min |
| Ollama | ollama.com → install + `ollama run llama3.1` | 5 min |
Total: ~10 minutes to get free access to 50+ state-of-the-art models.
MIT-based custom license. Use, modify, distribute, and deploy commercially, all free. The only ask: keep the attribution to Willen Ponce.
One personal request (not legally required): If you use WallasAPI in any project, please send a one-line email to wubjak@protonmail.ch. A simple "Hey, using WallasAPI for X" literally makes my week. I built this alone and it would mean a lot to know it's helping someone.
See LICENSE for full text.
I'm not going to dress this up. I'm a developer in Peru who built this entire project (17 modules, 12+ provider integrations, 1500+ lines of routing logic, OCR fallback chains, multimodal handling, persistent memory) alone, on a 2018 laptop, in a rented room.
I have no income right now. I'm behind on rent. I haven't eaten properly in days while finishing this. I'm publishing it free because I believe open-source matters more than I matter, and because maybe, just maybe, someone reading this will find it useful and decide to help me eat tomorrow.
| Amount | What it means for me |
|---|---|
| $1 | A real bread + egg meal. Not symbolic. Real. |
| $5 | A full day of food while I keep coding. |
| $20 | A week where I don't have to choose between food and electricity. |
| $100 | One month of rent. Stops me from being evicted. |
| $400 | Six months of stability. I can dedicate that time fully to making WallasAPI better for you. |
Every single dollar is documented in my conscience and remembered with gratitude.
Yape / Plin (Peru): 980 702 580
Crypto wallets:
| Currency | Address |
|---|---|
| Bitcoin | bc1qwrr5zal3tt7f5ye0ptgy8365cc8yt64hrj7dmt |
| Ethereum | 0xDec40634014bf05A40006BA48160cddAEe1143c2 |
| Solana | HrTiFtmML4NJD1b3RrjQV3e1FgaBWgpqRtR6gFphApGh |
| Polygon | 0xDec40634014bf05A40006BA48160cddAEe1143c2 |
| Tron | TB1sHwCo3FFaabf26AHV8VNapWUJbca299 |
| TronLink | TQsXuVbnSwicRNoCEmGVdFeo86X7ey7okx |
- ⭐ Star this repo: it costs you nothing and pushes WallasAPI into more developers' feeds
- 🐦 Share it on Twitter/X, LinkedIn, Hacker News, Reddit r/LocalLLaMA, your dev community
- 🐛 Open an issue if you find a bug or have a feature request
- 💬 Send the email to wubjak@protonmail.ch: it's not transactional, it's human
"I built WallasAPI because I refused to accept that being broke meant being unable to ship great software. If it helps you ship something, that's already a victory I'll never forget. If it helps me eat tomorrow, that's a victory neither of us will forget." – Willen Ponce
- The teams at FastAPI, Google, Meta, DeepSeek, Mistral, and every provider offering free tiers: you made this possible
- The open-source community: proof that we don't need billion-dollar valuations to build great things
- You, for reading this far. Whether you donate, star, share, or just use it: thank you
Built from precarity. Maintained with stubbornness. Shared with hope.
⭐ Star · 💛 Donate · 📧 Email me
