A production-ready Python logging framework designed for modern backends and AI pipelines.
It supports:
- Structured JSON logs (ELK / GCP Logging / Datadog ready)
- Trace + Span IDs (contextvars, async-safe)
- Automatic context (file:line Class.method())
- Per-module log levels
- Sampling filters for hot paths (reduce noise + cost)
## Table of Contents

- Overview
- Architecture
- Features
- Installation
- Usage
- Configuration
- Project Structure
- Development
- Testing
- Deployment
- Roadmap
- Contributing
- License
- Contact
## Overview

This project provides a clean, extensible logging layer for Python services.
It’s built to solve common production pain points:
- Logs without structure are hard to search
- Async + threads break context
- Trace correlation is missing
- Hot loops spam logs and increase cloud costs
- Large codebases need per-module control
This logger provides a single unified API for:
- console logs (pretty + colored)
- file logs (JSON structured)
- trace/span correlation
- sampling
## Architecture

```
LoggerConfig
        ↓
Logger (singleton)
        ↓
logging.Logger (stdlib)
        ↓
Handlers
 ├── Console handler (colored)
 └── File handler (.log or .json)
        ↓
Filters
 ├── SamplingFilter
 └── DeterministicFilter
        ↓
ContextLogger (LoggerAdapter)
 ├── context
 ├── trace_id
 └── span_id
```
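The bottom of the pipeline — a `LoggerAdapter` reading trace/span ids from `contextvars` — can be sketched as follows. The names here (`trace_id_var`, `set_trace`, `ContextLogger`) are stand-ins mirroring the diagram; the real code in `src/core/tracer.py` and `src/logger.py` may differ.

```python
import contextvars
import logging
import uuid

# contextvars keep each async task / thread's trace isolated.
trace_id_var = contextvars.ContextVar("trace_id", default=None)
span_id_var = contextvars.ContextVar("span_id", default=None)

def set_trace() -> str:
    """Start a new trace for the current context and return its id."""
    tid = uuid.uuid4().hex
    trace_id_var.set(tid)
    return tid

class ContextLogger(logging.LoggerAdapter):
    """Copies the current trace/span ids into every record's `extra` dict,
    so formatters downstream can emit them as structured fields."""
    def process(self, msg, kwargs):
        extra = kwargs.setdefault("extra", {})
        extra["trace_id"] = trace_id_var.get()
        extra["span_id"] = span_id_var.get()
        return msg, kwargs
```

Because `contextvars` values are scoped per task, concurrent requests each see their own trace id without locks.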
## Features

- **Structured JSON logging**
  - Compatible with ELK, GCP Logging, Datadog, Splunk
  - Includes context, trace_id, span_id, module, function, line, exception
- **Trace + span IDs**
  - Uses contextvars (async-safe)
  - Supports manual trace/span injection
- **Automatic context resolution**
  - Adds `file:line Class.method()` automatically
  - Works for functions, methods, and decorators
- **Per-module log levels**
  - Example: `src.core=WARNING` while the rest stays at INFO
- **Sampling filter**
  - Samples DEBUG/INFO logs on noisy paths
  - Always keeps WARNING/ERROR/CRITICAL
- **Singleton logger factory**
  - Global configuration
  - Safe `Logger.configure(...)` entry point
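The sampling behavior described above can be sketched as a standard `logging.Filter`. This is a minimal illustration, not the shipped `SamplingFilter`, which may track additional state:

```python
import logging
import random

class SamplingFilter(logging.Filter):
    """Pass every WARNING/ERROR/CRITICAL record; sample DEBUG/INFO
    at the configured rate to cut noise and cost on hot paths."""
    def __init__(self, sample_rate: float = 0.2):
        super().__init__()
        self.sample_rate = sample_rate

    def filter(self, record: logging.LogRecord) -> bool:
        if record.levelno >= logging.WARNING:
            return True  # never drop warnings or errors
        return random.random() < self.sample_rate
```

Attached to a handler, this keeps full visibility into failures while only a fraction of routine INFO/DEBUG traffic reaches storage.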
## Installation

```bash
pip install <url>
```

Or install from source:

```bash
git clone <url>
cd project-name
pip install -e .
```

## Usage

### Basic setup

```python
import logging

from src.logger import Logger
from src.config import LoggerConfig

Logger.configure(
    LoggerConfig(
        name="app",
        level=logging.INFO,
        directory="logs",
        json_logs=True,
        sample_rate=0.2,
        module_levels={
            "src.core": logging.WARNING,
        },
    )
)

logger = Logger().bind("startup")
logger.info("Service initialized")
```

### Trace + span IDs
```python
from src.logger import Logger

log = Logger()
log.set_trace()
log.set_span()

logger = log.bind("request")
logger.info("Request started")
```

### JSON log example output
```json
{
  "timestamp": "2026-02-04T10:55:01.140Z",
  "level": "INFO",
  "message": "Request started",
  "logger": "app",
  "context": "api.py:88 EDDController.calculate()",
  "trace_id": "a1f29c...",
  "span_id": "91c77d...",
  "module": "api",
  "function": "calculate",
  "line": 88
}
```

## Configuration

Configuration is done through `LoggerConfig`.
Example:
```python
import logging

from src.config import LoggerConfig

config = LoggerConfig(
    name="service",
    directory="logs",
    json_logs=True,
    sample_rate=0.1,
    level=logging.INFO,
    module_levels={
        "src.core": logging.WARNING,
        "google": logging.ERROR,
        "httpx": logging.ERROR,
    },
)
```

### Key settings
| Field | Meaning |
|---|---|
| json_logs | Output JSON logs for file handler |
| directory | Where log files are stored |
| deterministic | Enable deterministic logging |
| sample_rate | Sampling probability for INFO/DEBUG |
| module_levels | Per-module log level override |
| level | Global log level |
| name | Logger name |
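The `module_levels` override maps naturally onto stdlib child loggers, which inherit the global level unless given their own. A sketch of how such overrides could be applied (the helper name `apply_module_levels` is hypothetical):

```python
import logging

def apply_module_levels(module_levels: dict[str, int]) -> None:
    """Set an explicit level on each named logger; everything else
    keeps inheriting the global level from the root/app logger."""
    for name, level in module_levels.items():
        logging.getLogger(name).setLevel(level)

# e.g. silence noisy internals while the app stays at INFO
apply_module_levels({"src.core": logging.WARNING, "httpx": logging.ERROR})
```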
## Project Structure

```
src/
├── logger.py            # Logger singleton + ContextLogger
├── config.py            # LoggerConfig + JSONFormatter + SamplingFilter
├── core/
│   ├── filters.py       # Log filtering logic
│   ├── formatters.py    # Log formatting logic
│   └── tracer.py        # contextvars trace_id/span_id
└── decorators/
    ├── functions.py     # function_log decorator
    └── classes.py       # class_log decorator
```
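A structured formatter like the `JSONFormatter` bundled in `config.py` can be sketched in a few lines with the stdlib. This is a minimal illustration matching the example output above; the shipped formatter also emits `context`, `trace_id`, and `span_id`:

```python
import json
import logging

class JSONFormatter(logging.Formatter):
    """Render each record as one JSON object per line (ELK/Datadog friendly)."""
    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "timestamp": self.formatTime(record, "%Y-%m-%dT%H:%M:%S"),
            "level": record.levelname,
            "message": record.getMessage(),
            "logger": record.name,
            "module": record.module,
            "function": record.funcName,
            "line": record.lineno,
        }
        if record.exc_info:
            payload["exception"] = self.formatException(record.exc_info)
        return json.dumps(payload)
```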
## Development

```bash
pip install -r requirements-dev.txt
```

## Testing

```bash
pytest tests/
```
## Deployment

The service can be deployed as:

- a Python SDK library
- a FastAPI microservice
- a serverless function (Cloud Run / Lambda / Azure Functions)
- an internal data platform component
## Roadmap

- Request middleware integration for Flask/FastAPI
- Deterministic sampling by trace_id (avoid partial traces)
- Rotating file handler support
- OpenTelemetry bridge
- Log batching / async writer for high throughput
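The deterministic-sampling roadmap item — keep or drop a whole trace together instead of sampling records independently — could look like this. The helper name `keep_by_trace` and the hash scheme are assumptions, not the planned implementation:

```python
import hashlib

def keep_by_trace(trace_id: str, sample_rate: float) -> bool:
    """Hash the trace id into [0, 1). Every record in a trace shares the
    same id, so the entire trace is kept or dropped as a unit — no
    partial traces, unlike per-record random sampling."""
    bucket = int(hashlib.sha256(trace_id.encode()).hexdigest()[:8], 16) / 0x1_0000_0000
    return bucket < sample_rate
```

Because the decision depends only on the trace id, every service hop that sees the same id makes the same keep/drop choice.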
## Contributing

Contributions are welcome. Please follow the coding standards and submit PRs with tests and documentation updates.
## License

MIT License.
## Contact

Maintainer: Evan Flores
Email: efloresp06@liverpool.com.mx
Organization: Liverpool