A Flask-based backend service that automatically generates context and expected outputs for LLM inference results stored in Supabase, using a local Ollama model.
## Features

- **Batch Processing**: Process all pending records at once
- **Continuous Monitoring**: Automatically process new records as they arrive (a minimal sketch follows this list)
- **REST API**: Full REST API for external integrations
- **Supabase Integration**: Seamless database operations
- **Local LLM**: Uses Ollama for privacy-focused AI processing
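For orientation, the continuous-monitoring mode amounts to a simple polling loop. The sketch below is purely illustrative: `fetch_pending_records` and `process_record` are hypothetical placeholders, not the actual functions in `backend.py`.

```python
import os
import time

POLLING_INTERVAL = int(os.getenv("POLLING_INTERVAL", "30"))

def fetch_pending_records():
    # Placeholder -- the real backend presumably selects rows that still
    # need expected_output/context from the inference_results table.
    return []

def process_record(record):
    # Placeholder -- the real backend presumably calls the local Ollama
    # model and writes the generated values back to Supabase.
    pass

def monitor_loop():
    # Poll forever: fetch unprocessed rows, handle them, then wait for the next cycle.
    while True:
        for record in fetch_pending_records():
            process_record(record)
        time.sleep(POLLING_INTERVAL)
```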
## Prerequisites

- Python 3.8+
- Ollama installed and running locally
- Supabase account and project
- `local-gpt-oss:20b` model pulled in Ollama
## Installation

```bash
cd Backend

# Create and activate a virtual environment
python3 -m venv venv
source venv/bin/activate   # On macOS/Linux
# or
venv\Scripts\activate      # On Windows

# Install dependencies
pip install -r requirements.txt

# Copy the environment template
cp .env.example .env
```

Edit `.env` with your actual credentials:

```env
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_KEY=your-supabase-anon-key
OLLAMA_BASE_URL=http://localhost:11434
POLLING_INTERVAL=30
```

```bash
# Pull the model if you haven't already
ollama pull local-gpt-oss:20b
# Verify Ollama is running
curl http://localhost:11434/api/tags
```
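If you want to see what the backend receives from the model, you can call Ollama's generate endpoint directly. This is only a sanity-check sketch against the default Ollama REST API; it is not how `backend.py` builds its prompts:

```python
import requests

OLLAMA_BASE_URL = "http://localhost:11434"  # default endpoint from the configuration above

# Ask the local model for a single, non-streamed completion.
resp = requests.post(
    f"{OLLAMA_BASE_URL}/api/generate",
    json={
        "model": "local-gpt-oss:20b",
        "prompt": "Summarize what machine learning is in one sentence.",
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```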
## Running the Service

### Option A: Continuous Monitoring Mode (Recommended)

```bash
python backend.py
```

### Option B: Flask API Server

Uncomment the Flask section at the bottom of `backend.py`, then:

```bash
python backend.py
```

The server will start on `http://localhost:5000`.
## API Endpoints

### GET /api/health

Returns service health status and configuration.

Response:

```json
{
  "status": "healthy",
  "ollama_url": "http://localhost:11434",
  "supabase_connected": true,
  "monitoring_active": true
}
```

### GET /api/check-pending

Get count of records needing processing.
Response:
```json
{
  "pending_count": 5,
  "records": [
    {
      "id": 1,
      "input": "What is machine learning..."
    }
  ]
}
```

### GET /api/monitoring-status

Check if continuous monitoring is active.
Response:
```json
{
  "monitoring_active": true,
  "polling_interval": 30
}
```

### POST /api/process-all

Process all pending records in one go.
Response:
```json
{
  "total_records": 10,
  "successful": 9,
  "failed": 1,
  "results": [...]
}
```

### POST /api/process-record/{record_id}

Process a single record by ID.
Example:
```bash
curl -X POST http://localhost:5000/api/process-record/123
```

Response:
```json
{
  "record_id": 123,
  "success": true,
  "expected_output": "...",
  "context": ["point 1", "point 2"]
}
```

### POST /api/start-monitoring

Start continuous background monitoring.
Response:
```json
{
  "status": "Monitoring started",
  "interval": 30
}
```

### POST /api/stop-monitoring

Stop continuous monitoring.
Response:
```json
{
  "status": "Monitoring stopped"
}
```

## Database Schema

Expected `inference_results` table structure in Supabase:

```sql
CREATE TABLE inference_results (
  id SERIAL PRIMARY KEY,
  input TEXT NOT NULL,
  actual_output TEXT NOT NULL,
  expected_output TEXT,
  context TEXT,            -- JSON array stored as text
  created_at TIMESTAMP DEFAULT NOW(),
  updated_at TIMESTAMP
);
```
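Because `context` is a JSON array serialized into a TEXT column, records can be written with the standard `supabase-py` client by JSON-encoding that field. The snippet below is only a sketch of how a row might be created and later enriched; the example values are placeholders:

```python
import json
import os

from supabase import create_client  # pip install supabase

# Credentials come from the same environment variables described below.
supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])

# A "pending" record: expected_output and context are left empty for the backend to fill in.
pending = {
    "input": "What is machine learning?",
    "actual_output": "Machine learning is a subset of AI...",
}
inserted = supabase.table("inference_results").insert(pending).execute()

# When a record is processed, the context list is serialized to text to match the schema.
record_id = inserted.data[0]["id"]
supabase.table("inference_results").update(
    {"expected_output": "...", "context": json.dumps(["point 1", "point 2"])}
).eq("id", record_id).execute()
```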
## Environment Variables

| Variable | Description | Default |
|---|---|---|
| `SUPABASE_URL` | Your Supabase project URL | Required |
| `SUPABASE_KEY` | Supabase anon/service key | Required |
| `OLLAMA_BASE_URL` | Ollama API endpoint | `http://localhost:11434` |
| `POLLING_INTERVAL` | Seconds between checks | `30` |
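For reference, these variables can be read in Python with `python-dotenv`. This snippet only illustrates the defaults from the table above; it is not a copy of the configuration code in `backend.py`:

```python
import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # read variables from the .env file created earlier

SUPABASE_URL = os.environ["SUPABASE_URL"]   # required
SUPABASE_KEY = os.environ["SUPABASE_KEY"]   # required
OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
POLLING_INTERVAL = int(os.getenv("POLLING_INTERVAL", "30"))
```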
## Testing with curl

```bash
# Health check
curl http://localhost:5000/api/health
# Check pending
curl http://localhost:5000/api/check-pending
# Process all
curl -X POST http://localhost:5000/api/process-all
# Process specific record
curl -X POST http://localhost:5000/api/process-record/1
# Start monitoring
curl -X POST http://localhost:5000/api/start-monitoring
# Stop monitoring
curl -X POST http://localhost:5000/api/stop-monitoring
```
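The same endpoints can be exercised from Python. This is only a convenience sketch using the `requests` library against the default `http://localhost:5000` base URL:

```python
import requests

BASE_URL = "http://localhost:5000"  # default Flask address from the setup above

# Check service health
print(requests.get(f"{BASE_URL}/api/health").json())

# Process all pending records and report the outcome
summary = requests.post(f"{BASE_URL}/api/process-all").json()
print(f"{summary['successful']}/{summary['total_records']} records processed")
```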
## Troubleshooting

**❌ Error calling Ollama: Connection refused**

Solution: Ensure Ollama is running: `ollama serve`

**❌ Error fetching records from Supabase**

Solution: Verify your `SUPABASE_URL` and `SUPABASE_KEY` in `.env`

**❌ Error: model 'local-gpt-oss:20b' not found**

Solution: Pull the model: `ollama pull local-gpt-oss:20b`
## Project Structure

```
Backend/
├── backend.py                 # Main application
├── requirements.txt           # Python dependencies
├── .env                       # Environment variables (create from .env.example)
├── .env.example               # Environment template
├── .gitignore                 # Git ignore rules
├── README.md                  # This file
└── postman_collection.json    # Postman API collection
```
## Contributing

Feel free to submit issues and enhancement requests!

## License

MIT License

## Support

For questions or issues, please open an issue in the repository.

Made with ❤️ using Flask, Supabase, and Ollama