Provider changes, Opsmeter payload stays the same.
Working examples for sending telemetry to `POST /v1/ingest/llm-request` in:

- .NET (`examples/dotnet`)
- Node.js (`examples/node`)
- Python (`examples/python`)
This repo is optimized for teams implementing LLM cost tracking, OpenAI usage monitoring, Anthropic usage telemetry, and AI inference cost control with a consistent request schema.
Opsmeter product site: https://opsmeter.io
Opsmeter API base: https://api.opsmeter.io
Provider + catalog model names: https://opsmeter.io/docs/catalog
Official SDK package identities for opsmeter.io:
- Node (npm): @opsmeter.io/node
- Python (PyPI): opsmeter-io-sdk
Current provider support in examples: OpenAI and Anthropic only.
This repository targets LLM telemetry quickstart, OpenAI cost tracking examples, Anthropic integration examples, and AI cost observability setup keywords.
- No-proxy telemetry: keep your provider call path untouched, send attribution metadata after each LLM call.
- Retry-safe ingestion: reuse `externalRequestId` on retries to prevent duplicate request rows.
- Cost attribution dimensions: keep `endpointTag` and `promptVersion` consistent for feature-level and version-level analysis.
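The retry-safe pattern above can be sketched in Python. Here, `with_stable_request_id` and `send_with_retries` are illustrative helper names (not code shipped in this repo), and `post` stands in for whatever HTTP client you use; the key point is that the serialized body, including `externalRequestId`, is built once and reused on every attempt.

```python
import json
import uuid

INGEST_URL = "https://api.opsmeter.io/v1/ingest/llm-request"  # canonical ingest endpoint

def with_stable_request_id(payload: dict) -> dict:
    # Assign the id ONCE; retries must reuse it so the server can deduplicate rows.
    payload.setdefault("externalRequestId", f"ext_{uuid.uuid4().hex}")
    return payload

def send_with_retries(payload: dict, post, attempts: int = 3):
    """post(url, body_bytes) is your HTTP client; the body is identical on every attempt."""
    body = json.dumps(with_stable_request_id(payload)).encode()
    last_error = None
    for _ in range(attempts):
        try:
            return post(INGEST_URL, body)
        except Exception as exc:  # transient failure: retry with the SAME body
            last_error = exc
    raise last_error
```

Because the id is assigned outside the retry loop, a transient network failure never produces a second request row.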
- Direct ingest (current production path): Quickstart + payload contract + language examples under `examples/`.
- SDK auto-instrumentation (preview path): moved to dedicated SDK repositories.
- Direct ingest docs stay valid; SDK docs are additive and do not replace existing integration docs.
- Official domain and product identity: https://opsmeter.io
- Official Node package name: `@opsmeter.io/node`
- Official Python package name: `opsmeter-io-sdk`
- Model catalog for both SDKs: https://opsmeter.io/docs/catalog
- Quickstart (60s)
- Documentation paths
- Official package identity (opsmeter.io)
- Payload contract (shared)
- Allowed values
- Recommended combinations
- Architecture
- Quick visual
- Examples
- Example modes
- Launch-ready wedges
- SDK preview
- n8n templates
- Common mistakes
- CI / Quality gates
- Product linking text (for docs/pricing/landing)
- Release
- SEO and discoverability notes
- Clone and set your API key.
```bash
git clone https://github.com/opsmeter-io/opsmeter.io-integration-examples.git
cd opsmeter.io-integration-examples
export OPSMETER_API_KEY="<YOUR_WORKSPACE_PRIMARY_API_KEY>"
export OPSMETER_API_BASE_URL="https://api.opsmeter.io"
```

- Run one stack (Node shown below):
```bash
# Provider/model names: https://opsmeter.io/docs/catalog
node examples/node/index.mjs --provider openai --model gpt-4o-mini --retry
```

- Expected output:
```
Business call completed.
Telemetry dispatched (non-blocking).
Ingest response: 200 ok=true planTier=Free warnings=0
Retry with same externalRequestId sent.
```
- Verify in product:
  - Dashboard request count increases.
  - `endpointTag` and `promptVersion` appear in Top Endpoints / Prompt Versions.
- `--retry` uses the same `externalRequestId` to demonstrate retry-safe behavior.
Canonical ingest endpoint: https://api.opsmeter.io/v1/ingest/llm-request
All examples send this same shape:
```json
{
  "externalRequestId": "ext_123abc",
  "provider": "openai",
  "model": "gpt-4o-mini",
  "promptVersion": "summary_v3",
  "endpointTag": "checkout.ai_summary",
  "inputTokens": 120,
  "outputTokens": 45,
  "totalTokens": 165,
  "latencyMs": 820,
  "status": "success",
  "errorCode": null,
  "userId": null,
  "dataMode": "real",
  "environment": "prod"
}
```

`provider` and `model` values should be selected from the catalog: https://opsmeter.io/docs/catalog
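As a sketch of how a mapping layer might produce this shape: `build_payload` and the layout of the `usage` dict are assumptions for illustration, not code from this repo, but every emitted key matches the contract above.

```python
def build_payload(external_request_id, provider, model, usage, latency_ms, *,
                  endpoint_tag, prompt_version, status="success", error_code=None,
                  user_id=None, data_mode="real", environment="prod"):
    """Map provider usage counts into the shared Opsmeter ingest shape."""
    input_tokens = usage["input_tokens"]
    output_tokens = usage["output_tokens"]
    return {
        "externalRequestId": external_request_id,  # stable per business request
        "provider": provider,                      # pick from opsmeter.io/docs/catalog
        "model": model,
        "promptVersion": prompt_version,
        "endpointTag": endpoint_tag,
        "inputTokens": input_tokens,
        "outputTokens": output_tokens,
        "totalTokens": input_tokens + output_tokens,
        "latencyMs": latency_ms,
        "status": status,
        "errorCode": error_code,
        "userId": user_id,
        "dataMode": data_mode,
        "environment": environment,
    }
```

Computing `totalTokens` from the two counts keeps the three token fields consistent by construction.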
| Field | Allowed | Notes |
|---|---|---|
| `provider` | `openai`, `anthropic` | Current support in this repo/examples |
| `status` | `success`, `error` | Required by API validation |
| `dataMode` | `real`, `test`, `demo` | Default recommendation: `real` |
| `environment` | `prod`, `staging`, `dev` | Use real deployment environment |
| Use case | `dataMode` | `environment` |
|---|---|---|
| Production traffic | `real` | `prod` |
| QA/Test traffic | `test` | `staging` or `dev` |
| Seed/demo flows | `demo` | `dev` |
If you do not label these fields correctly, dashboard analytics can mix operational and non-production signals.
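One way to guard against mislabeling is a pre-send check built from the allowed-values table above. `check_labels` is a hypothetical client-side helper, not part of the Opsmeter API; it returns problems instead of raising so telemetry stays non-blocking.

```python
ALLOWED = {
    "provider": {"openai", "anthropic"},
    "status": {"success", "error"},
    "dataMode": {"real", "test", "demo"},
    "environment": {"prod", "staging", "dev"},
}

def check_labels(payload: dict) -> list:
    """Return a list of label problems; an empty list means the payload is clean."""
    problems = []
    for field, allowed in ALLOWED.items():
        value = payload.get(field)
        if value not in allowed:
            problems.append(f"{field}={value!r} not in {sorted(allowed)}")
    return problems
```

Log any returned problems before sending, so mislabeled rows never silently mix test traffic into production dashboards.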
```mermaid
flowchart LR
  A["LLM call"] --> B["Map usage + latency"]
  B --> C["Build Opsmeter payload"]
  C --> D["POST /v1/ingest/llm-request"]
  D --> E["Dashboard / Budgets / Alerts"]
```
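The "Map usage + latency" step is where provider differences live. A minimal Python sketch of that mapping, assuming the usage field names returned by OpenAI chat completions (`prompt_tokens`, `completion_tokens`) and the Anthropic Messages API (`input_tokens`, `output_tokens`); `normalize_usage` is an illustrative helper, not repo code.

```python
def normalize_usage(provider: str, usage: dict) -> tuple:
    """Map provider-specific usage field names onto (inputTokens, outputTokens)."""
    if provider == "openai":
        # OpenAI chat completions report prompt_tokens / completion_tokens.
        return usage["prompt_tokens"], usage["completion_tokens"]
    if provider == "anthropic":
        # The Anthropic Messages API reports input_tokens / output_tokens.
        return usage["input_tokens"], usage["output_tokens"]
    raise ValueError(f"unsupported provider: {provider}")
```

Everything downstream of this function is provider-agnostic, which is what lets the Opsmeter payload stay the same when the provider changes.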
- Node examples (without SDK + with SDK)
- Python examples (without SDK + with SDK)
- Dotnet examples (without SDK + with SDK)
These are lighter assets optimized for GitHub discovery, founder distribution, and quick proof-of-value sharing.
Important framing:
- The payload contract stays provider-agnostic.
- These folders are provider-specific entry points for faster evaluation.
- The generic direct-ingest examples under `examples/node`, `examples/python`, and `examples/dotnet` remain the canonical shared pattern.

- OpenAI cost tracker example (Node)
  - Real OpenAI call + usage extraction + direct-ingest telemetry
  - Best first asset for JavaScript-heavy product teams evaluating AI cost tracking with minimal setup
- Anthropic cost tracker example (Node)
  - Real Anthropic Messages API call + usage extraction + direct-ingest telemetry
  - Keeps the same Opsmeter payload shape while changing only the provider call
- OpenAI cost tracker example (Python)
  - Same wedge for Python-heavy AI backends and internal tools
  - Useful when the evaluation owner is closer to ML or backend workflows
- Anthropic cost tracker example (Python)
  - Same provider-specific entry point for Python teams using Claude
  - Makes the generic telemetry pattern obvious across providers
- Without SDK (direct ingest): existing stable production path in this repo.
  - Includes explicit send scenarios for both OpenAI and Anthropic.
- With SDK (preview): usage examples in this repo, SDK packages in dedicated repos:
  - Includes OpenAI + Anthropic capture/send scenarios in language samples.
  - Node SDK repo: github.com/opsmeter-io/opsmeter.io-node-sdk
  - Node npm package (published): npmjs.com/package/@opsmeter.io/node
  - Python SDK repo: github.com/opsmeter-io/opsmeter.io-python-sdk
  - Python package (published): pypi.org/project/opsmeter-io-sdk
  - .NET SDK repo: coming soon
Preview SDK contracts and reference implementations are maintained in dedicated repositories:
- Node SDK (repo): github.com/opsmeter-io/opsmeter.io-node-sdk
- Node SDK (npm): npmjs.com/package/@opsmeter.io/node
- Python SDK (repo): github.com/opsmeter-io/opsmeter.io-python-sdk
- Python SDK (PyPI): pypi.org/project/opsmeter-io-sdk
- .NET SDK (repo): coming soon
- .NET package: coming soon
These templates are for Opsmeter n8n integration with workspace status branching, budget warning automation, and telemetry paused handling.
Path: `./n8n`

- `workspace-status-check.json`: polls `GET /v1/diagnostics/workspace-status` and branches by plan/budget booleans.
- `budget-warning-to-slack.json`: scheduled budget status check with Slack notification path.
- `openai-to-opsmeter-ingest.json`: provider call + usage mapping + ingest + 402 plan-limit branch.
- Import and setup guide: `n8n/README.md`
Common mistakes
- Provider/model typo (example: wrong provider string), causing unknown model attribution.
- Generating a new `externalRequestId` for retries (breaks idempotent behavior).
- Blocking the request path with long telemetry timeouts.
- Treating telemetry failure as business failure (it should be swallowed/logged).
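The last two mistakes can be avoided with a wrapper like the sketch below; `call_llm_with_telemetry` is an illustrative name, and the two callables stand in for your real business call and telemetry sender (which should itself use a short timeout).

```python
import logging

logger = logging.getLogger("opsmeter.telemetry")

def call_llm_with_telemetry(business_call, send_telemetry):
    """Run the business LLM call; telemetry failures are logged, never re-raised."""
    result = business_call()       # a business failure here propagates normally
    try:
        send_telemetry(result)     # fire-and-forget: ingest downtime must not block users
    except Exception:
        logger.warning("telemetry send failed; ignoring", exc_info=True)
    return result
```

The business result is computed before telemetry is attempted, so an ingest outage degrades observability, not the product.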
- Node lint + tests
- Python lint + tests
- Dotnet build + tests
- Smoke script runs all three examples in dry-run mode
See `.github/workflows/ci.yml` and `scripts/smoke.sh`.
Use this exact label when linking from the main product:
Integration examples (60-second quickstart)
Target URL:
https://github.com/opsmeter-io/opsmeter.io-integration-examples
Current bootstrap release target: v0.1.0 (see CHANGELOG).
Primary terms covered in this repository:
- Opsmeter integration examples
- LLM cost tracking integration
- OpenAI usage monitoring
- Anthropic telemetry integration
- AI inference cost control