A tiny, extensible observability layer for LLM calls. Add three lines around your code and get JSON traces for requests, responses, timings, and errors.
This repo bundles an example using OpenAI.
- Install the package from source (with OpenAI + example extras):

  ```bash
  pip install -e '.[openai,example]'
  ```
- Provide credentials in `.env` (at the repo root or in `example/`):

  ```bash
  OPENAI_API_KEY=your_key_here
  # optional
  OPENAI_MODEL=gpt-4o-mini
  ```
Using the SDK in your own project:
- `pip install .` installs the core (no provider deps).
- `pip install '.[openai]'` adds OpenAI support.
```python
from aiobs import observer

observer.observe()  # start a session and auto-instrument providers
# ... make your LLM calls (e.g., OpenAI Chat Completions) ...
observer.end()      # end the session
observer.flush()    # write a single JSON file to disk
```

By default, events flush to `./llm_observability.json`. Override with `LLM_OBS_OUT=/path/to/file.json`.
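For a concrete picture, here is a minimal end-to-end sketch, assuming the `openai` extra is installed and `OPENAI_API_KEY` is set (the model name is just an example):

```python
from openai import OpenAI

from aiobs import observer

observer.observe()  # start a session; the OpenAI provider is auto-installed

client = OpenAI()   # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # or whatever OPENAI_MODEL you prefer
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)

observer.end()
observer.flush()    # writes ./llm_observability.json unless LLM_OBS_OUT is set
```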
- Simple single-file example:

  ```bash
  python example/simple-chat-completion/chat.py
  ```

  Prints the model’s reply and writes events to `llm_observability.json` in the repo root.
- Multi-file pipeline example:

  ```bash
  python -m example.pipeline.main "Explain vector databases to a backend engineer"
  ```

  Runs a 3-step pipeline (research -> summarize -> critique) with multiple Chat Completions calls chained together.
- Provider: `openai` (Chat Completions v1)
- Request: model, first few `messages`, core params
- Response: text, model, token usage (when available)
- Timing: start/end timestamps, `duration_ms`
- Errors: exception name and message if the call fails
- Callsite: file path, line number, and function name where the API was called (a sketch of this technique follows the list)
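A common way to capture the callsite is to walk the call stack for the first frame outside the SDK and the provider library. This is a minimal sketch of that general technique, not necessarily aiobs's exact implementation:

```python
import inspect

def find_callsite(skip_prefixes=("aiobs", "openai")):
    # Walk up past this helper to the first frame outside the SDK/provider.
    for info in inspect.stack()[1:]:
        module = info.frame.f_globals.get("__name__", "")
        if not module.startswith(skip_prefixes):
            return info.filename, info.lineno, info.function
    return None
```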
Internally, the SDK structures data with Pydantic models (v2):
- `aiobs.models.Session`
- `aiobs.models.Event`
- `aiobs.models.ObservedEvent` (Event + `session_id`)
- `aiobs.models.ObservabilityExport` (flush payload)
These are exported to allow downstream tooling to parse and validate the JSON output and to build integrations.
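For example, a downstream tool might re-validate a flushed file against the exported schema. A sketch, using standard Pydantic v2 calls (only the model names above are given, so field access beyond validation is left out):

```python
import json

from aiobs.models import ObservabilityExport

# Parse and validate a flushed trace file with the Pydantic v2 schema.
with open("llm_observability.json") as f:
    export = ObservabilityExport.model_validate(json.load(f))

# Validated models round-trip back to JSON for further tooling.
print(export.model_dump_json(indent=2)[:300])
```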
Providers are classes that implement a small abstract interface and install their own hooks.
- Base class: `aiobs.providers.base.BaseProvider`
- Built-in: `OpenAIProvider` (auto-detected and installed if `openai` is available)
Custom provider skeleton:
```python
from aiobs import BaseProvider


class MyProvider(BaseProvider):
    name = "my-provider"

    @classmethod
    def is_available(cls) -> bool:
        try:
            import my_sdk  # noqa: F401
            return True
        except Exception:
            return False

    def install(self, collector):
        # Monkeypatch or add hooks into your SDK, then
        # call collector._record_event({ ... normalized payload ... }).
        def unpatch():
            pass
        return unpatch


# Register before observe()
from aiobs import observer

observer.register_provider(MyProvider())
observer.observe()
```

If you don’t explicitly register providers, the collector auto-loads built-ins (OpenAI) when `observe()` is called.
- Core
  - `Collector` holds sessions/events and flushes a single JSON file.
  - `aiobs.models.*` define Pydantic schemas for sessions/events/export.
- Providers (N-layered)
  - `providers/base.py`: `BaseProvider` interface.
  - `providers/openai/provider.py`: orchestrates OpenAI API modules.
  - `providers/openai/apis/base_api.py`: API module interface.
  - `providers/openai/apis/chat_completions.py`: instruments `chat.completions.create`.
  - `providers/openai/apis/models/*`: Pydantic request/response schemas per API.
Providers construct Pydantic request/response models and pass typed Event objects to the collector; only the collector serializes to JSON.
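To make the hook mechanics concrete, here is a rough sketch of how an API module could wrap `chat.completions.create`. The `record` callback stands in for the normalization and event-recording step; it is illustrative, not aiobs's actual internals:

```python
import functools
import time

def patch_chat_completions(client, record):
    # Wrap client.chat.completions.create and report each call (illustrative).
    original = client.chat.completions.create

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        start = time.time()
        try:
            response = original(*args, **kwargs)
        except Exception as exc:
            record(api="chat.completions", request=kwargs,
                   error=f"{type(exc).__name__}: {exc}",
                   duration_ms=(time.time() - start) * 1000)
            raise
        record(api="chat.completions", request=kwargs, response=response,
               duration_ms=(time.time() - start) * 1000)
        return response

    client.chat.completions.create = wrapper

    def unpatch():
        client.chat.completions.create = original

    return unpatch
```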
Sphinx documentation lives under `docs/`.
- Install docs deps (note the quotes for zsh):

  ```bash
  pip install '.[docs]'
  ```
- Build HTML docs:

  ```bash
  python -m sphinx -b html docs docs/_build/html
  ```
- Open `docs/_build/html/index.html` in your browser.
GitHub Pages
- Docs auto-deploy from `main` via GitHub Actions (`pages.yml`).
- After merging to `main`, the site is available at: https://neuralis-in.github.io/llm-observability/
- If your org/user or repo name differs, GitHub Pages uses `https://<owner>.github.io/<repo>/`.