Broom94/Tripline

Tripline

Runtime observability and safety for AI agents. See what your agents are doing. Stop them if they misbehave.

pip install tripline

Quick Start

Three lines to observe your agent:

from tripline import Tripline

t = Tripline(agent_id="my-agent")

@t.probe("tool_call", tool_name="SearchDatabase")
def search_database(query: str) -> list:
    return db.search(query)

# Every call to search_database now emits OTel spans
results = search_database("SELECT * FROM users")

t.shutdown()

Context manager style

with t.probe("llm_invoke", token_count=1500):
    response = llm.chat("Summarize this document")

Add safety rules

Create a policy.yaml:

allowlists:
  my-agent:
    - SearchDatabase
    - GetWeather

denylist:
  - DeleteAllRecords
  - DropDatabase
  - ExecuteShell

thresholds:
  rate_limit: 50          # max tool calls per minute
  token_budget: 100000    # max tokens per session
  latency_threshold: 30000  # max ms per tool call

Then point Tripline at it:

t = Tripline(
    agent_id="my-agent",
    policy_file="policy.yaml",
)

# Agent calls a denied tool → ToolCallBlocked raised
# Agent exceeds rate limit → AgentTerminated raised
# Agent calls tool not in allowlist → ToolCallBlocked raised
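The rule order implied above (the denylist always wins; when an agent has an allowlist, any unlisted tool is blocked) can be sketched in plain Python. This is an illustrative stand-in for the rule engine, not Tripline's actual implementation:

```python
def evaluate_tool_call(agent_id: str, tool_name: str, policy: dict) -> str:
    """Return "allow" or "block" for a proposed tool call.

    Sketch of the allow/deny semantics only; the real engine also
    tracks rate limits, token budgets, and latency thresholds.
    """
    # Denied tools are always blocked, regardless of any allowlist.
    if tool_name in policy.get("denylist", []):
        return "block"
    # If this agent has an allowlist, unlisted tools are blocked too.
    allowed = policy.get("allowlists", {}).get(agent_id)
    if allowed is not None and tool_name not in allowed:
        return "block"
    return "allow"

policy = {
    "allowlists": {"my-agent": ["SearchDatabase", "GetWeather"]},
    "denylist": ["DeleteAllRecords", "DropDatabase", "ExecuteShell"],
}

evaluate_tool_call("my-agent", "SearchDatabase", policy)  # "allow"
evaluate_tool_call("my-agent", "ExecuteShell", policy)    # "block" (denied)
evaluate_tool_call("my-agent", "SendEmail", policy)       # "block" (not allowlisted)
```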

Observe-only mode

See what rules would fire without affecting your agent:

t = Tripline(
    agent_id="my-agent",
    policy_file="policy.yaml",
    observe_only=True,  # log violations, don't enforce
)

Framework Integrations

# LangGraph
from tripline.integrations.langgraph import wrap_langgraph
graph = wrap_langgraph(my_graph, t)

# LangChain
from tripline.integrations.langchain import TriplineCallback
chain.invoke(input, config={"callbacks": [TriplineCallback(t)]})

# CrewAI
from tripline.integrations.crewai import TriplineListener
listener = TriplineListener(t)

# OpenAI
from tripline.integrations.openai import wrap_openai
client = wrap_openai(openai.OpenAI(), t)

# AWS Bedrock
from tripline.integrations.bedrock import wrap_bedrock
client = wrap_bedrock(boto3.client("bedrock-runtime"), t)

Install framework extras:

pip install tripline[langgraph]
pip install tripline[langchain]
pip install tripline[crewai]
pip install tripline[openai]
pip install tripline[bedrock]

View Your Data

Tripline emits standard OpenTelemetry spans. Use any OTel-compatible backend:

from opentelemetry.sdk.trace.export import ConsoleSpanExporter
from tripline import Tripline

# Console output (default — no setup needed)
t = Tripline(agent_id="my-agent")

# Jaeger (docker run jaegertracing/all-in-one)
from opentelemetry.exporter.jaeger.thrift import JaegerExporter
t = Tripline(agent_id="my-agent", exporter=JaegerExporter())

# Any OTel collector
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
t = Tripline(agent_id="my-agent", exporter=OTLPSpanExporter())

What Gets Captured

Every probe event includes:

| Probe Type | When It Fires | Key Attributes |
|---|---|---|
| tool_call_enter | Agent requests a tool | tool_name, params_hash |
| tool_call_exit | Tool returns result | result_size, latency_ms, error_state |
| llm_invoke_enter | Prompt sent to LLM | token_count, prompt_hash |
| llm_invoke_exit | LLM returns response | response_hash, latency_ms, token_usage |
| agent_handoff | Control passes between agents | source_agent_id, target_agent_id |
| data_access | Agent reads data | source, query |
| memory_write | Agent writes to memory | memory_key, data_size |
| confidence_decision | Confidence routing decision | confidence_score, threshold_applied |
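The *_hash attributes suggest that payloads are fingerprinted rather than exported verbatim, so raw prompts and tool parameters never leave the process. A minimal sketch of that idea (SHA-256 and the canonical-JSON step are assumptions here, not Tripline's documented algorithm):

```python
import hashlib
import json

def fingerprint(payload) -> str:
    """Stable, privacy-preserving hash of a span attribute payload."""
    # Canonical JSON so dict key order doesn't change the hash.
    canonical = json.dumps(payload, sort_keys=True, default=str)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

# Equal payloads map to the same hash; the raw query is never exported.
params_hash = fingerprint({"query": "SELECT * FROM users"})
```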

Safety Actions

| Action | What Happens | Use Case |
|---|---|---|
| BLOCK | Reject the tool call, session continues | Scope violation, denied tool |
| KILL | Raise AgentTerminated, session ends | Rate limit, token budget exceeded |
| ALERT | Log violation, agent continues | Latency warning, observe-only mode |
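The rate_limit threshold behind KILL is a calls-per-minute cap. One common way to implement such a cap is a sliding-window counter; a pure-Python sketch (an illustration, not Tripline's internals):

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` events per `window` seconds."""

    def __init__(self, limit: int, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.events = deque()  # timestamps of events still in the window

    def allow(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        # Evict timestamps that have aged out of the window.
        while self.events and now - self.events[0] >= self.window:
            self.events.popleft()
        if len(self.events) >= self.limit:
            return False  # over budget — this is where KILL would fire
        self.events.append(now)
        return True

# Mirrors the policy.yaml example: 50 tool calls per minute.
limiter = SlidingWindowLimiter(limit=50, window=60.0)
```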

How It Works

Agent calls tool
    │
    ▼
┌─────────────────────────────────┐
│  Probe (async mode)             │
│  1. Write event to ring buffer  │ ← nanoseconds, never blocks
│  2. Evaluate safety rules       │ ← microseconds, in-memory
│  3. Execute tool call           │
│  4. Write result to ring buffer │
└─────────────────────────────────┘

Background thread:
    Ring buffer → batch (every 100ms)
    → OTel exporter → your collector

Near-zero overhead on your agent's execution: the ring buffer write is asynchronous, the rule engine runs in-memory, and the background thread ships spans on its own schedule.
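The hot-path/background split above can be sketched with a bounded buffer and a flusher thread. This is a simplified stand-in for the real pipeline (all names here are illustrative, and a lock replaces the lock-free ring buffer):

```python
import threading
import time
from collections import deque

class RingBuffer:
    """Bounded buffer: the hot path appends and never blocks.
    When full, the oldest event is silently overwritten."""

    def __init__(self, capacity: int = 4096):
        self._buf = deque(maxlen=capacity)
        self._lock = threading.Lock()

    def write(self, event) -> None:
        with self._lock:  # brief; a real ring buffer would be lock-free
            self._buf.append(event)

    def drain(self) -> list:
        with self._lock:
            batch = list(self._buf)
            self._buf.clear()
            return batch

def flusher(buffer: RingBuffer, export, interval: float = 0.1, stop=None) -> None:
    """Background loop: every `interval` seconds, ship a batch downstream."""
    while stop is None or not stop.is_set():
        time.sleep(interval)
        batch = buffer.drain()
        if batch:
            export(batch)  # e.g. hand spans to an OTel exporter
```

In use, the flusher would run as a daemon thread so it never keeps the process alive, and shutdown() would perform one final drain before exit.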

Requirements

  • Python 3.9+
  • Core dependencies: opentelemetry-api, opentelemetry-sdk, and pyyaml
  • Framework packages are optional extras

License

MIT
