stacklens-sdk-python

Python SDK for StackLens — observability and governance for your AI stack.

Trace LLM calls, fetch versioned prompts, and enforce AI governance policies — in three lines of Python.

Installation

pip install getstacklens

Requires Python 3.9+.

Quickstart

import getstacklens

getstacklens.configure(api_key="sl-xxxx")
getstacklens.trace("my-llm-call", model="gpt-4o", provider="openai", input_tokens=150, output_tokens=200)
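Token counts like the ones passed above are all you need for a rough cost estimate. A minimal sketch, assuming placeholder per-million-token prices (the numbers below are illustrative, not current provider pricing, and this helper is not part of the SDK):

```python
# Rough cost estimate from token counts. The per-million-token prices are
# placeholders for illustration -- check your provider's current pricing.
PRICES_PER_MTOK = {
    "gpt-4o": (2.50, 10.00),  # (input USD, output USD) per 1M tokens, assumed
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Approximate USD cost of one call from its token counts."""
    input_price, output_price = PRICES_PER_MTOK[model]
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000
```

For the quickstart call (150 input, 200 output tokens on "gpt-4o") this yields a fraction of a cent under the assumed prices.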

Get your API key from the StackLens dashboard under Settings → API Keys.
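To avoid hard-coding the key, you can read it from an environment variable before calling configure. The variable name STACKLENS_API_KEY is our convention for this sketch, not something the SDK requires:

```python
import os

def resolve_api_key(env: dict = None) -> str:
    """Return the API key from STACKLENS_API_KEY, failing loudly if unset.

    The variable name is a convention chosen for this example; the SDK
    itself only cares about the string passed to configure(api_key=...).
    """
    env = os.environ if env is None else env
    key = env.get("STACKLENS_API_KEY")
    if not key:
        raise RuntimeError("Set STACKLENS_API_KEY before starting the app")
    return key

# getstacklens.configure(api_key=resolve_api_key())
```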

Tracing LLM calls

Simple trace

For accurate latency, record a start timestamp before the call and pass it as start_time:

from datetime import datetime, timezone
import getstacklens

getstacklens.configure(api_key="sl-xxxx")

start = datetime.now(timezone.utc)
response = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarise this document."}],
)
getstacklens.trace(
    "chat-completion",
    model="gpt-4o",
    provider="openai",
    input_tokens=response.usage.prompt_tokens,
    output_tokens=response.usage.completion_tokens,
    start_time=start,
)
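The latency that start_time lets StackLens derive is just the gap between that timestamp and the moment trace() is called. If you also want the number locally (for your own logs), it is plain datetime arithmetic, no SDK involved:

```python
from datetime import datetime, timezone

def elapsed_ms(start: datetime, end: datetime = None) -> float:
    """Milliseconds elapsed between two UTC timestamps (end defaults to now)."""
    end = datetime.now(timezone.utc) if end is None else end
    return (end - start).total_seconds() * 1000.0
```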

Context manager (recommended for agent workflows)

import openai
import getstacklens

getstacklens.configure(api_key="sl-xxxx")
client = openai.OpenAI()

with getstacklens.start_trace("customer-support-agent") as span:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "How do I reset my password?"}],
    )
    span.record_llm(
        model="gpt-4o",
        provider="openai",
        input_tokens=response.usage.prompt_tokens,
        output_tokens=response.usage.completion_tokens,
        completion=response.choices[0].message.content,
    )
    span.set_attribute("user_id", "u_123")
    span.add_tag("support", "production")

If an exception is raised inside the context, the span status is automatically set to error.
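That error-on-exception behaviour follows Python's standard context-manager protocol: any exception raised inside the with block reaches the span's __exit__ method before it propagates. A stripped-down stand-in (not the SDK's actual span class) makes the mechanics clear:

```python
class FakeSpan:
    """Illustrative stand-in for a StackLens span -- not the real SDK class."""

    def __init__(self, name: str):
        self.name = name
        self.status = "ok"

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # Any exception raised inside the `with` block lands here;
        # record the failure, then return False so it still propagates.
        if exc_type is not None:
            self.status = "error"
        return False

span = FakeSpan("demo")
try:
    with span:
        raise ValueError("boom")
except ValueError:
    pass  # the exception still propagates; the span is already marked
```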

Fetching versioned prompts (FlowOps)

Manage prompts in the StackLens dashboard, then fetch them at runtime — no deploys needed.

import getstacklens

getstacklens.configure(api_key="sl-xxxx")

# Fetch the active prompt for the production environment
system_prompt = getstacklens.prompts.get("support-system-prompt", env="production")

# Use in an LLM call
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ],
)

Available environments: "dev", "staging", "production" (default).
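When the same code runs in several environments, a tiny helper can pick the env argument from a process variable, defaulting to "production" as the SDK does. The APP_ENV variable name is our choice for this sketch, not an SDK convention:

```python
import os

# The three environments listed above; "production" is the SDK's default.
VALID_ENVS = {"dev", "staging", "production"}

def prompt_env(env_vars: dict = None) -> str:
    """Pick the prompts environment from APP_ENV, defaulting to production."""
    env_vars = os.environ if env_vars is None else env_vars
    env = env_vars.get("APP_ENV", "production")
    if env not in VALID_ENVS:
        raise ValueError(f"Unknown environment: {env!r}")
    return env

# system_prompt = getstacklens.prompts.get("support-system-prompt", env=prompt_env())
```

Failing fast on an unknown value catches typos like "prod" before they silently fetch the wrong prompt.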

Self-hosted deployments

Point the SDK at your own StackLens instance:

getstacklens.configure(
    api_key="sl-xxxx",
    endpoint="https://api.your-domain.com",
)

See the self-hosting guide for setup instructions.

Supported providers

Works with any LLM provider — pass the model and provider name you use:

Provider        provider value
OpenAI          "openai"
Anthropic       "anthropic"
Google Gemini   "gemini"
Azure OpenAI    "azure-openai"
AWS Bedrock     "bedrock"
Any other       any string
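Because provider is an arbitrary string, a typo silently creates a new provider bucket in your dashboard. A small lookup keeps your values consistent with the table above; the model-prefix heuristic here is our assumption for illustration, not an SDK feature:

```python
# Map common model-name prefixes to the provider strings from the table above.
# This prefix heuristic is an assumption for illustration, not an SDK feature.
_PREFIX_TO_PROVIDER = {
    "gpt-": "openai",
    "claude-": "anthropic",
    "gemini-": "gemini",
}

def guess_provider(model: str, default: str = "openai") -> str:
    """Best-effort guess of the provider string for a given model name."""
    for prefix, provider in _PREFIX_TO_PROVIDER.items():
        if model.startswith(prefix):
            return provider
    return default
```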
