# Tinkerflow AI Gateway
A unified, secure, and scalable proxy for multiple AI providers
Tinkerflow AI Gateway sits between your applications and AI services like OpenAI and Ollama, providing a single API endpoint with built-in authentication, rate limiting, metrics, and flexible model routing. It's designed for developers who want to manage multiple AI backends without changing their code.
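Because the gateway exposes an OpenAI-compatible endpoint, a client request is just an authenticated JSON POST. The sketch below builds (but does not send) such a request with Python's standard library; the URL, header name, and API key are placeholder values matching the quick-start defaults, so substitute your own deployment's details.

```python
import json
import urllib.request

# Placeholder values -- substitute your own gateway URL and key.
GATEWAY_URL = "http://localhost:8000/v1/chat/completions"
GATEWAY_KEY = "your-gateway-api-key"

payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello!"}],
}

# Build the request without dispatching it; urllib.request.urlopen(req)
# would actually send it to the gateway.
req = urllib.request.Request(
    GATEWAY_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {GATEWAY_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
```

The same shape works for any mapped model name; only the `model` field changes, and the gateway decides which backend receives it.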
## 🚀 Why Tinkerflow?
- **Unified API** – Use the same OpenAI-compatible interface for all providers.
- **Production-ready** – Redis-backed rate limiting, Prometheus metrics, and comprehensive tests.
- **Extensible** – Easily add new providers (Anthropic, Cohere, etc.) via simple modules.
- **Self-hosted** – Full control over your data and costs.
## ✨ Features
| Feature | Description |
| --- | --- |
| 🔐 API Key Authentication | Simple shared-secret security with header or query parameter support. |
| ⏱️ Rate Limiting | Sliding-window rate limiting per key (in-memory for dev, Redis for production). |
| 🤖 Multi-Provider | Route requests to OpenAI, Ollama, or custom backends. |
| 🔧 Model Mapping | YAML configuration to map model names to providers (e.g., `gpt-4` → `openai`). |
| 🌊 Streaming | Full support for Server-Sent Events (SSE), compatible with OpenAI's streaming API. |
| 📊 Metrics | Prometheus metrics for requests, latencies, and errors. |
| 🐍 Python Client | Drop-in replacement for the OpenAI Python library. |
| 🧪 Tested | 90%+ test coverage with GitHub Actions CI. |
## 📦 Quick Start (2 minutes)
```shell
# Clone and enter
git clone https://github.com/RickCreator87/Tinkerflow-AI.git
cd Tinkerflow-AI

# Install dependencies
pip install -r requirements.txt

# Configure (copy and edit .env)
cp .env.example .env
# Edit with your OpenAI API key and gateway secret

# Run Redis (if not already running)
docker run -d -p 6379:6379 redis

# Start the gateway
uvicorn gateway.main:app --reload
```

The gateway is now live at http://localhost:8000. Test it:
```shell
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Authorization: Bearer your-gateway-api-key" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hello!"}]}'
```

## 🏗️ Architecture Overview
```
┌──────────┐      ┌────────────────────────┐      ┌───────────────┐
│  Client  │ ───▶ │  Tinkerflow Gateway    │ ───▶ │    OpenAI     │
│  (your   │      │   - Auth               │      ├───────────────┤
│   app)   │      │   - Rate limiting      │ ───▶ │    Ollama     │
└──────────┘      │   - Model routing      │      ├───────────────┤
                  │   - Metrics            │ ───▶ │    (other)    │
                  └────────────────────────┘      └───────────────┘
                              │
                              ▼
                        ┌──────────┐
                        │  Redis   │
                        │  (rate   │
                        │  limits) │
                        └──────────┘
```
- All requests hit the gateway first.
- The gateway authenticates, rate-limits, and routes based on the `model` field.
- Responses (including streaming) are proxied transparently.
- Metrics are exposed for Prometheus scraping.
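Since streaming responses are proxied as OpenAI-style Server-Sent Events, a client consumes them as `data: {...}` lines ending with a `[DONE]` sentinel. This sketch shows one way a consumer might extract the text deltas; the chunk field names follow OpenAI's chat-completion streaming format, and the canned sample stands in for a live stream.

```python
import json
from typing import Iterable, Iterator


def iter_content_deltas(sse_lines: Iterable[str]) -> Iterator[str]:
    """Yield text deltas from OpenAI-style 'data: {...}' SSE lines.

    Blank lines are skipped; the terminal 'data: [DONE]' sentinel
    ends iteration.
    """
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            yield delta


# Canned sample of what the gateway would proxy through:
sample = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    '',
    'data: {"choices": [{"delta": {"content": "lo!"}}]}',
    'data: [DONE]',
]
```

Joining the deltas from `sample` reassembles the full reply, which is exactly what a streaming chat UI does token by token.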
## 📚 Documentation
All documentation is in the repository:
- Installation & Setup
- Configuration Reference
- Usage Examples
- Model Management
- Security Notes
- Troubleshooting
- Roadmap
## 🧩 Use Cases
- Centralized AI access for multiple microservices.
- Cost control – enforce rate limits per team or project.
- Local development – use Ollama locally, switch to OpenAI in production.
- Vendor-agnostic – change providers without rewriting code.
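The Ollama-locally/OpenAI-in-production switch comes down to the YAML model mapping. The fragment below is a hypothetical sketch of what such a mapping could look like; the actual file name and schema live in the Configuration Reference.

```yaml
# Hypothetical model-mapping sketch -- see the Configuration Reference
# for the project's real file name and schema.
models:
  gpt-4: openai          # route gpt-4 requests to the OpenAI backend
  gpt-3.5-turbo: openai
  llama3: ollama         # route llama3 requests to a local Ollama server
```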
## 🤝 Contributing
Contributions are welcome! Check the Roadmap for ideas or open an issue.
## 📄 License

MIT © RickCreator87

Built with ❤️ using FastAPI, Redis, and a lot of coffee.