
Self-Eval LLM Processing Backend 🤖

A Flask-based backend service that automatically generates context and expected outputs for LLM inference results stored in Supabase, using a local Ollama model.

🌟 Features

  • Batch Processing: Process all pending records at once
  • Continuous Monitoring: Automatically process new records as they arrive
  • REST API: Full REST API for external integrations
  • Supabase Integration: Seamless database operations
  • Local LLM: Uses Ollama for privacy-focused AI processing

📋 Prerequisites

  • Python 3.8+
  • Ollama installed and running locally
  • Supabase account and project
  • local-gpt-oss:20b model pulled in Ollama

🚀 Quick Start

1. Clone and Setup

# Clone the repository, then change into the backend directory
cd Backend

2. Create Virtual Environment

python3 -m venv venv
source venv/bin/activate  # On macOS/Linux
# or
venv\Scripts\activate  # On Windows

3. Install Dependencies

pip install -r requirements.txt

4. Configure Environment Variables

cp .env.example .env

Edit .env with your actual credentials:

SUPABASE_URL=https://your-project.supabase.co
SUPABASE_KEY=your-supabase-anon-key
OLLAMA_BASE_URL=http://localhost:11434
POLLING_INTERVAL=30
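
For reference, backend.py is expected to read these values at startup. A minimal sketch using python-dotenv (assuming that package is listed in requirements.txt):

import os
from dotenv import load_dotenv  # python-dotenv; assumed to be in requirements.txt

load_dotenv()  # reads .env from the current directory

SUPABASE_URL = os.environ["SUPABASE_URL"]  # required, no default
SUPABASE_KEY = os.environ["SUPABASE_KEY"]  # required, no default
OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
POLLING_INTERVAL = int(os.getenv("POLLING_INTERVAL", "30"))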

5. Ensure Ollama is Running

# Pull the model if you haven't already
ollama pull local-gpt-oss:20b

# Verify Ollama is running
curl http://localhost:11434/api/tags
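
Under the hood, the backend calls Ollama's HTTP generate endpoint. A minimal sketch of such a call (the exact prompt and options used in backend.py may differ):

import requests

def ollama_generate(prompt, base_url="http://localhost:11434"):
    """Send a prompt to Ollama's /api/generate endpoint and return the text."""
    response = requests.post(
        f"{base_url}/api/generate",
        json={"model": "local-gpt-oss:20b", "prompt": prompt, "stream": False},
        timeout=300,  # a 20B model can take a while on first load
    )
    response.raise_for_status()
    return response.json()["response"]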

6. Run the Backend

Option A: Continuous Monitoring Mode (Recommended)

python backend.py
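
In this mode the backend polls Supabase on a fixed interval. A minimal sketch of that loop, with illustrative names (the actual routine in backend.py may be named differently):

import time

def process_all_pending():
    """Placeholder for the batch routine in backend.py (illustrative name)."""
    ...

def monitor(poll_seconds=30):
    """Check for pending records on a fixed interval until interrupted."""
    while True:
        try:
            process_all_pending()
        except Exception as exc:
            print(f"❌ Processing failed: {exc}")  # log and keep polling
        time.sleep(poll_seconds)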

Option B: Flask API Server

Uncomment the Flask section at the bottom of backend.py, then:

python backend.py

The server will start on http://localhost:5000
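
The routes registered by that Flask section correspond to the endpoints documented below. As a sketch of the shape (illustrative, not the actual backend.py code):

import os
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/health")
def health():
    # Mirrors the documented /api/health response; values here are illustrative
    return jsonify({
        "status": "healthy",
        "ollama_url": os.getenv("OLLAMA_BASE_URL", "http://localhost:11434"),
        "supabase_connected": True,
        "monitoring_active": False,
    })

if __name__ == "__main__":
    app.run(port=5000)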

📡 API Endpoints

Health Check

GET /api/health

Returns service health status and configuration.

Response:

{
  "status": "healthy",
  "ollama_url": "http://localhost:11434",
  "supabase_connected": true,
  "monitoring_active": true
}

Check Pending Records

GET /api/check-pending

Get count of records needing processing.

Response:

{
  "pending_count": 5,
  "records": [
    {
      "id": 1,
      "input": "What is machine learning..."
    }
  ]
}

Get Monitoring Status

GET /api/monitoring-status

Check if continuous monitoring is active.

Response:

{
  "monitoring_active": true,
  "polling_interval": 30
}

Process All Records (Batch)

POST /api/process-all

Process all pending records in one go.

Response:

{
  "total_records": 10,
  "successful": 9,
  "failed": 1,
  "results": [...]
}

Process Specific Record

POST /api/process-record/{record_id}

Process a single record by ID.

Example:

curl -X POST http://localhost:5000/api/process-record/123

Response:

{
  "record_id": 123,
  "success": true,
  "expected_output": "...",
  "context": ["point 1", "point 2"]
}
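
After a successful run, the generated fields are written back to the row. Since the schema below stores context as a JSON array in a text column, the write-back plausibly looks like this supabase-py sketch (helper name and arguments are illustrative):

import json
from supabase import Client

def save_result(db: Client, record_id, expected_output, context_points):
    """Write the generated fields back to the inference_results row."""
    db.table("inference_results").update({
        "expected_output": expected_output,
        "context": json.dumps(context_points),  # JSON array stored as text
    }).eq("id", record_id).execute()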

Start Monitoring

POST /api/start-monitoring

Start continuous background monitoring.

Response:

{
  "status": "Monitoring started",
  "interval": 30
}

Stop Monitoring

POST /api/stop-monitoring

Stop continuous monitoring.

Response:

{
  "status": "Monitoring stopped"
}

🗄️ Database Schema

Expected inference_results table structure in Supabase:

CREATE TABLE inference_results (
  id SERIAL PRIMARY KEY,
  input TEXT NOT NULL,
  actual_output TEXT NOT NULL,
  expected_output TEXT,
  context TEXT,  -- JSON array stored as text
  created_at TIMESTAMP DEFAULT NOW(),
  updated_at TIMESTAMP
);
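
Here "pending" means rows where expected_output is still NULL. A supabase-py sketch of that query (the actual filter in backend.py may differ):

import os
from supabase import create_client

supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])

# Rows with no expected_output yet are treated as pending
pending = (
    supabase.table("inference_results")
    .select("id, input, actual_output")
    .is_("expected_output", "null")
    .execute()
)
for row in pending.data:
    print(row["id"], row["input"][:60])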

🔧 Configuration

Environment Variables

Variable           Description                 Default
-----------------  --------------------------  ----------------------
SUPABASE_URL       Your Supabase project URL   Required
SUPABASE_KEY       Supabase anon/service key   Required
OLLAMA_BASE_URL    Ollama API endpoint         http://localhost:11434
POLLING_INTERVAL   Seconds between checks      30

🧪 Testing with cURL

# Health check
curl http://localhost:5000/api/health

# Check pending
curl http://localhost:5000/api/check-pending

# Process all
curl -X POST http://localhost:5000/api/process-all

# Process specific record
curl -X POST http://localhost:5000/api/process-record/1

# Start monitoring
curl -X POST http://localhost:5000/api/start-monitoring

# Stop monitoring
curl -X POST http://localhost:5000/api/stop-monitoring
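
The same checks from Python with requests, if you prefer scripting them:

import requests

BASE = "http://localhost:5000"

print(requests.get(f"{BASE}/api/health").json())
print(requests.get(f"{BASE}/api/check-pending").json())

# Kick off a batch run and summarize the result
summary = requests.post(f"{BASE}/api/process-all").json()
print(f"{summary['successful']}/{summary['total_records']} records processed")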

🐛 Troubleshooting

Ollama Connection Error

❌ Error calling Ollama: Connection refused

Solution: Ensure Ollama is running: ollama serve

Supabase Connection Error

❌ Error fetching records from Supabase

Solution: Verify your SUPABASE_URL and SUPABASE_KEY in .env

Model Not Found

❌ Error: model 'local-gpt-oss:20b' not found

Solution: Pull the model: ollama pull local-gpt-oss:20b

📦 Project Structure

Backend/
β”œβ”€β”€ backend.py              # Main application
β”œβ”€β”€ requirements.txt        # Python dependencies
β”œβ”€β”€ .env                   # Environment variables (create from .env.example)
β”œβ”€β”€ .env.example           # Environment template
β”œβ”€β”€ .gitignore            # Git ignore rules
β”œβ”€β”€ README.md             # This file
└── postman_collection.json # Postman API collection

🀝 Contributing

Feel free to submit issues and enhancement requests!

📄 License

MIT License

🙋 Support

For questions or issues, please open an issue in the repository.


Made with ❤️ using Flask, Supabase, and Ollama
