imtt-dev/steer


Steer Logo

Steer SDK

Active Reliability Layer for AI Agents.

Steer is an open-source Python library that intercepts agent failures (hallucinations, bad JSON, PII leaks) and allows you to inject fixes via a local dashboard without changing your code.

The Problem

When an agent fails in production (e.g., outputs bad JSON), logging the error isn't enough. You usually have to:

  1. Dig through logs to find the prompt.
  2. Edit your prompt template manually.
  3. Redeploy the application.

The Solution

Steer wraps your agent function. When it detects a failure, it blocks the output and logs it to a local dashboard. You click "Teach" to provide a correction (e.g., "Use Strict JSON"), and Steer injects that rule into the agent's context for future runs.

Visual Workflow:

Steer Dashboard
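The capture, block, teach, rerun loop can be sketched end to end in plain Python. Everything below (`RULES`, `capture_sketch`, the stand-in agent) is illustrative, not Steer's actual internals:

```python
import functools
import json

# Illustrative sketch only -- 'RULES' and 'capture_sketch' are made-up names,
# not Steer's real API. Rules "taught" via the dashboard, keyed by function name:
RULES = {}

def capture_sketch(fn):
    @functools.wraps(fn)
    def wrapper(user_input):
        output = fn(user_input, steer_rules=RULES.get(fn.__name__, ""))
        try:
            json.loads(output)  # verifier: output must be valid JSON
            return {"status": "passed", "output": output}
        except ValueError:
            return {"status": "blocked", "output": output}
    return wrapper

@capture_sketch
def my_agent(user_input, steer_rules=""):
    # A stand-in "LLM": misbehaves unless a rule is present.
    if "Strict JSON" in steer_rules:
        return json.dumps({"answer": user_input})
    return f"Sure! The answer is {user_input}"

print(my_agent("42")["status"])        # blocked: no rule taught yet
RULES["my_agent"] = "Use Strict JSON"  # simulate clicking "Teach" in the dashboard
print(my_agent("42")["status"])        # passed: rule injected on the rerun
```

The point of the sketch is the last three lines: the fix lands in the rule store, not in your code, so the next run picks it up without a redeploy.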

Installation

pip install steer-sdk

Quickstart

Generate the example scripts to see the workflow in action:

steer init
# Generates 01_structure_guard.py, 02_safety_guard.py, etc.

steer ui
# Starts the local dashboard at http://localhost:8000

Run a demo (Split-screen recommended):

  1. Run python 01_structure_guard.py. It will fail (Blocked).
  2. Go to http://localhost:8000. Click Teach. Select "Strict JSON".
  3. Run python 01_structure_guard.py again. It will succeed.

Usage

Steer uses a decorator pattern to wrap your existing functions.

from steer import capture
from steer.verifiers import JsonVerifier

# 1. Define Verifiers
json_check = JsonVerifier(name="Strict JSON")

# 2. Decorate your Agent Function
@capture(verifiers=[json_check])
def my_agent(user_input, steer_rules=""):
    # 3. Pass 'steer_rules' into your system prompt.
    # Steer populates this argument automatically based on your teaching.
    system_prompt = f"You are a helpful assistant.\n{steer_rules}"

    # ... your LLM call (replace 'llm' with your own client) ...
    return llm.call(system_prompt, user_input)
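The JsonVerifier above gates on whether the raw output parses as JSON. Independently of Steer, that check reduces to something like this (an illustrative function, not the SDK's implementation):

```python
import json

def is_strict_json(text: str) -> bool:
    """The kind of check a JSON verifier performs: does the raw output parse?"""
    try:
        json.loads(text)
        return True
    except ValueError:  # json.JSONDecodeError is a subclass of ValueError
        return False

print(is_strict_json('{"status": "ok"}'))                        # True
print(is_strict_json("Sure! Here is your JSON: {status: ok}"))   # False
```

The second case is the classic failure mode: the model wraps its JSON in chatty prose or drops the quotes, and the parse fails even though the answer "looks" structured.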

Configuration

The Quickstart demos use a Mock LLM and require no API keys.

To use advanced LLM-based verifiers in production, set your environment variables:

export GEMINI_API_KEY=...
# OR
export OPENAI_API_KEY=...
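If you are unsure which key a process will actually pick up, a small check like this makes it explicit (`detect_provider` is a hypothetical helper for illustration, not part of the SDK):

```python
import os

def detect_provider(env=None):
    """Return the name of the first configured provider key, or None (illustrative)."""
    env = os.environ if env is None else env
    for name in ("GEMINI_API_KEY", "OPENAI_API_KEY"):
        if env.get(name):
            return name
    return None

print(detect_provider({"OPENAI_API_KEY": "sk-..."}))  # OPENAI_API_KEY
```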
