Logwell is a self-hosted logging platform with real-time streaming, full-text search, and OTLP-compatible ingestion. Deploy in minutes, own your data.
Alpha Software — Expect breaking changes. Not recommended for production workloads.
Features • Quick Start • Usage • Deploy • Contributing • License
Logwell is a lightweight, self-hosted log aggregation platform for developers who want structured logging without the complexity of ELK or the costs of cloud services.
Use Logwell when you need:
- A simple logging backend for your side project or startup
- Full-text search across logs without managing Elasticsearch
- Real-time log streaming during development and debugging
- Complete data ownership with no vendor lock-in
Logwell is NOT for:
- High-volume production systems (10k+ logs/second) — use Loki or ClickHouse
- Teams needing RBAC, audit trails, or compliance features — use a managed service
- Distributed tracing or metrics — Logwell is logs-only

Features:

- OTLP-native ingestion — Standard OpenTelemetry protocol, no proprietary SDKs required
- PostgreSQL backend — Full-text search via tsvector, no separate search cluster needed
- Real-time streaming — SSE-powered live log tailing with batching
- Project isolation — Per-project API keys with separate log streams
- Zero telemetry — No phone-home, no tracking, fully air-gapped deployments supported
- Clean UI — Minimal interface with dark mode and log level color coding
Screenshots: multi-project dashboard with log counts, real-time log viewer with level filtering, Quick Start with a pre-filled API key, API key management and code snippets, and log level distribution analytics.
| Alternative | Logwell advantage |
|---|---|
| Loki/Grafana | Built-in UI, no LogQL to learn, just PostgreSQL |
| ELK | Lightweight PostgreSQL backend, not Elasticsearch |
| Datadog/etc | Self-hosted, no per-GB pricing, own your data |
| Layer | Technology |
|---|---|
| Framework | SvelteKit |
| Database | PostgreSQL |
| ORM | Drizzle |
| Auth | better-auth |
| UI | shadcn-svelte + Tailwind CSS v4 |
| Real-time | Server-Sent Events |
| Runtime | Bun |
```bash
# Clone the repository
git clone https://github.com/divkix/logwell.git
cd logwell

# Install dependencies
bun install

# Set up environment
cp .env.example .env
# Edit .env with your values (see Environment Variables below)

# Start PostgreSQL
docker compose up -d

# Push database schema
bun run db:push

# Create admin user
bun run db:seed

# Start development server
bun run dev
```

Open http://localhost:5173 and sign in with:

- Username: `admin` (or your `ADMIN_USERNAME` from `.env`)
- Password: your `ADMIN_PASSWORD` from `.env`
Note: Development runs on port 5173 (Vite). Production builds run on port 3000.
Create a `.env` file with the following:

```bash
# Database connection
DATABASE_URL="postgres://root:mysecretpassword@localhost:5432/local"

# Authentication secret (minimum 32 characters)
BETTER_AUTH_SECRET="your-32-character-secret-key-here"

# Admin user password (minimum 8 characters)
ADMIN_PASSWORD="your-admin-password"

# Admin username (optional, defaults to "admin")
# ADMIN_USERNAME="admin"

# Production URL (required for auth behind reverse proxies)
ORIGIN="https://your-domain.com"

# Log retention (optional, defaults shown)
# LOG_RETENTION_DAYS="30"            # 0 = never auto-delete
# LOG_CLEANUP_INTERVAL_MS="3600000"  # Cleanup job interval (1 hour)
```

Generate a secure secret:

```bash
openssl rand -base64 32
```

To create a project and get an API key:

- Sign in to the dashboard
- Click New Project
- Enter a project name
- Copy the generated API key (`lw_...`)
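The retention settings above are easy to misread, so here is a Python sketch of their semantics (a hypothetical helper, not part of Logwell): `LOG_RETENTION_DAYS=0` disables auto-deletion, and any other value defines a cutoff before which logs become eligible for cleanup.

```python
from datetime import datetime, timedelta, timezone

def retention_cutoff(retention_days, now=None):
    """Return the timestamp before which logs may be deleted, or None.

    Mirrors LOG_RETENTION_DAYS semantics: 0 means never auto-delete.
    """
    if retention_days == 0:
        return None
    now = now or datetime.now(timezone.utc)
    return now - timedelta(days=retention_days)

now = datetime(2025, 1, 31, tzinfo=timezone.utc)
print(retention_cutoff(30, now))  # 2025-01-01 00:00:00+00:00
print(retention_cutoff(0, now))   # None
```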
Logwell provides two ingestion APIs:
| API | Endpoint | Best For |
|---|---|---|
| Simple API | `POST /v1/ingest` | Quick integration, any HTTP client |
| OTLP API | `POST /v1/logs` | OpenTelemetry SDKs, rich metadata |
The simple API accepts flat JSON with minimal boilerplate:
```bash
curl -X POST http://localhost:5173/v1/ingest \
  -H "Authorization: Bearer lw_YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"level": "info", "message": "User signed in"}'
```

Batch multiple logs:

```bash
curl -X POST http://localhost:5173/v1/ingest \
  -H "Authorization: Bearer lw_YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '[
    {"level": "info", "message": "Request started"},
    {"level": "error", "message": "Database timeout", "metadata": {"query": "SELECT..."}}
  ]'
```

Available fields:
| Field | Required | Type | Description |
|---|---|---|---|
| `level` | Yes | `debug` \| `info` \| `warn` \| `error` \| `fatal` | Log severity |
| `message` | Yes | string | Log message |
| `timestamp` | No | ISO 8601 string | Defaults to current time |
| `service` | No | string | Service name for filtering |
| `metadata` | No | object | Additional structured data |
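As a sanity check on the field rules above, here is a small hypothetical Python helper (not part of Logwell) that builds a `/v1/ingest` payload, rejecting unknown levels and omitting optional fields that are not set:

```python
VALID_LEVELS = {"debug", "info", "warn", "error", "fatal"}

def make_log(level, message, timestamp=None, service=None, metadata=None):
    """Build one /v1/ingest record; optional fields are omitted, not null."""
    if level not in VALID_LEVELS:
        raise ValueError(f"invalid level: {level!r}")
    record = {"level": level, "message": message}
    if timestamp is not None:
        record["timestamp"] = timestamp  # ISO 8601; server defaults to now
    if service is not None:
        record["service"] = service
    if metadata is not None:
        record["metadata"] = metadata
    return record

print(make_log("info", "User signed in", service="auth"))
# → {'level': 'info', 'message': 'User signed in', 'service': 'auth'}
```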
Node.js (no SDK needed)

```javascript
await fetch('http://localhost:5173/v1/ingest', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer lw_YOUR_API_KEY',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({ level: 'info', message: 'Hello from Node.js' })
});
```

Python (no SDK needed)

```python
import requests

requests.post('http://localhost:5173/v1/ingest',
    headers={'Authorization': 'Bearer lw_YOUR_API_KEY'},
    json={'level': 'info', 'message': 'Hello from Python'})
```

Go (no SDK needed)

```go
// Requires the standard library "bytes" and "net/http" imports.
body := []byte(`{"level": "info", "message": "Hello from Go"}`)
req, _ := http.NewRequest("POST", "http://localhost:5173/v1/ingest", bytes.NewBuffer(body))
req.Header.Set("Authorization", "Bearer lw_YOUR_API_KEY")
req.Header.Set("Content-Type", "application/json")
http.DefaultClient.Do(req)
```

For Node.js, browsers, and edge runtimes (Cloudflare Workers, etc.):
```bash
npm install logwell
```

```javascript
import { Logwell } from 'logwell';

const logger = new Logwell({
  apiKey: 'lw_YOUR_API_KEY',
  endpoint: 'http://localhost:5173',
});

// Log at different levels
logger.info('User signed in', { userId: '123' });
logger.error('Database failed', { host: 'db.local' });

// Flush before shutdown
await logger.shutdown();
```

Features: Zero dependencies, automatic batching, retry with backoff, TypeScript-first.

Deno users: Install from JSR with `deno add jsr:@divkix/logwell` and import from `@divkix/logwell`.
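The SDK's "retry with backoff" is typically exponential. The package's exact parameters aren't documented here, so the following is a generic Python sketch of the schedule such a client might use (base delay, doubling per attempt, capped):

```python
def backoff_delays(retries, base=0.5, cap=30.0):
    """Exponential backoff schedule in seconds: base * 2**attempt, capped."""
    return [min(cap, base * (2 ** attempt)) for attempt in range(retries)]

print(backoff_delays(5))      # [0.5, 1.0, 2.0, 4.0, 8.0]
print(backoff_delays(8)[-1])  # 30.0 (capped)
```

Real clients usually add random jitter on top of this schedule so that many failing senders don't retry in lockstep.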
For applications already using OpenTelemetry, point your OTLP log exporter at `POST /v1/logs` with your API key in the `Authorization` header.
Node.js / TypeScript

```bash
npm install @opentelemetry/exporter-logs-otlp-http @opentelemetry/sdk-logs
```

```javascript
import { OTLPLogExporter } from '@opentelemetry/exporter-logs-otlp-http';
import { LoggerProvider, BatchLogRecordProcessor } from '@opentelemetry/sdk-logs';

const exporter = new OTLPLogExporter({
  url: 'http://localhost:5173/v1/logs',
  headers: { 'Authorization': 'Bearer lw_YOUR_API_KEY' },
});

const loggerProvider = new LoggerProvider();
loggerProvider.addLogRecordProcessor(new BatchLogRecordProcessor(exporter));
```

Python
```bash
pip install opentelemetry-exporter-otlp-proto-http
```

```python
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter
from opentelemetry.sdk._logs import LoggerProvider
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor

exporter = OTLPLogExporter(
    endpoint="http://localhost:5173/v1/logs",
    headers={"Authorization": "Bearer lw_YOUR_API_KEY"},
)

logger_provider = LoggerProvider()
logger_provider.add_log_record_processor(BatchLogRecordProcessor(exporter))
```

Go
```bash
go get go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp
```

```go
import (
    "go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp"
    "go.opentelemetry.io/otel/sdk/log"
)

exporter, _ := otlploghttp.New(ctx,
    otlploghttp.WithEndpointURL("http://localhost:5173/v1/logs"),
    otlploghttp.WithHeaders(map[string]string{
        "Authorization": "Bearer lw_YOUR_API_KEY",
    }),
)

processor := log.NewBatchProcessor(exporter)
provider := log.NewLoggerProvider(log.WithProcessor(processor))
```

Java
```xml
<!-- Maven -->
<dependency>
  <groupId>io.opentelemetry</groupId>
  <artifactId>opentelemetry-exporter-otlp</artifactId>
</dependency>
```

```java
import io.opentelemetry.exporter.otlp.http.logs.OtlpHttpLogRecordExporter;

OtlpHttpLogRecordExporter exporter = OtlpHttpLogRecordExporter.builder()
    .setEndpoint("http://localhost:5173/v1/logs")
    .addHeader("Authorization", "Bearer lw_YOUR_API_KEY")
    .build();
```

C# / .NET
```bash
dotnet add package OpenTelemetry.Exporter.OpenTelemetryProtocol
```

```csharp
using OpenTelemetry;
using OpenTelemetry.Exporter;

builder.Logging.AddOpenTelemetry(logging =>
    logging.AddOtlpExporter(options =>
    {
        options.Endpoint = new Uri("http://localhost:5173/v1/logs");
        options.Protocol = OtlpExportProtocol.HttpProtobuf;
        options.Headers = "Authorization=Bearer lw_YOUR_API_KEY";
    }));
```

Logwell derives some UI fields from common OTLP log attributes (if present):
| UI field | Preferred OTLP attribute keys |
|---|---|
| `sourceFile` | `code.filepath`, `source.file` |
| `lineNumber` | `code.lineno`, `source.line` |
| `requestId` | `request.id`, `http.request_id` |
| `userId` | `enduser.id`, `user.id` |
| `ipAddress` | `client.address`, `net.peer.ip`, `net.sock.peer.addr` |
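The table implies first-match-wins precedence within each row. Here is a hypothetical Python sketch of that resolution (Logwell's server-side implementation may differ):

```python
UI_FIELD_KEYS = {
    "sourceFile": ["code.filepath", "source.file"],
    "lineNumber": ["code.lineno", "source.line"],
    "requestId": ["request.id", "http.request_id"],
    "userId": ["enduser.id", "user.id"],
    "ipAddress": ["client.address", "net.peer.ip", "net.sock.peer.addr"],
}

def derive_ui_fields(attributes):
    """Map OTLP log attributes to UI fields; the first present key wins."""
    derived = {}
    for field, keys in UI_FIELD_KEYS.items():
        for key in keys:
            if key in attributes:
                derived[field] = attributes[key]
                break
    return derived

attrs = {"code.lineno": 42, "user.id": "u_1", "net.peer.ip": "10.0.0.7"}
print(derive_ui_fields(attrs))
# → {'lineNumber': 42, 'userId': 'u_1', 'ipAddress': '10.0.0.7'}
```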
| Command | Description |
|---|---|
| `bun run dev` | Start development server |
| `bun run build` | Build for production |
| `bun run preview` | Preview production build |
| `bun run check` | Run TypeScript checks |
| `bun run lint` | Run linter |
| `bun run lint:fix` | Fix lint issues |
| Command | Description |
|---|---|
| `bun run db:start` | Start PostgreSQL via Docker |
| `bun run db:push` | Push schema to database |
| `bun run db:generate` | Generate migration files |
| `bun run db:migrate` | Run migrations |
| `bun run db:studio` | Open Drizzle Studio |
| `bun run db:seed` | Create admin user |
| Command | Description |
|---|---|
| `bun run test` | Run all tests |
| `bun run test:unit` | Run unit tests |
| `bun run test:integration` | Run integration tests |
| `bun run test:component` | Run component tests |
| `bun run test:e2e` | Run E2E tests (Playwright) |
| `bun run test:coverage` | Run tests with coverage |
| `bun run test:ui` | Open Vitest UI |
Fly.io: Clone the repo and run `fly launch` (uses the included `fly.toml`)
Note: PostgreSQL database required. Railway has $5/mo free credit. Render free tier expires after 30 days (paid plans from $7/mo). Fly.io offers 1GB free PostgreSQL.
The easiest way to deploy Logwell with PostgreSQL:

```bash
# Set required environment variables
export BETTER_AUTH_SECRET=$(openssl rand -base64 32)
export ADMIN_PASSWORD="your-secure-admin-password"

# Optional: Set custom DB password (only needed if exposing port 5432 for backups)
# export DB_PASSWORD="your-db-password"

# Start the full stack
docker compose -f compose.prod.yaml up -d

# View logs
docker compose -f compose.prod.yaml logs -f app

# Stop the stack
docker compose -f compose.prod.yaml down
```

If you have an external PostgreSQL database:
```bash
# Pull from GitHub Container Registry
docker pull ghcr.io/divkix/logwell:latest

# Run the container
docker run -p 3000:3000 \
  -e DATABASE_URL="postgresql://user:pass@host:5432/db" \
  -e BETTER_AUTH_SECRET="your-32-char-secret" \
  -e ADMIN_PASSWORD="your-admin-password" \
  -e NODE_ENV=production \
  ghcr.io/divkix/logwell:latest
```

To build the image yourself:

```bash
# Build the image
docker build -t logwell .

# Run the container
docker run -p 3000:3000 \
  -e DATABASE_URL="postgresql://user:pass@host:5432/db" \
  -e BETTER_AUTH_SECRET="your-32-char-secret" \
  -e ADMIN_PASSWORD="your-admin-password" \
  -e NODE_ENV=production \
  logwell
```

The app exposes a health check endpoint for monitoring:
```bash
curl http://localhost:3000/api/health
```

Response:

```json
{
  "status": "healthy",
  "database": "connected",
  "timestamp": "2025-01-02T12:00:00.000Z",
  "uptime": 3600,
  "version": "0.1.2"
}
```

- Returns `200 OK` when healthy
- Returns `503 Service Unavailable` when the database is down
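For monitoring scripts, the status code alone (200 vs 503) is usually enough, but the body can be inspected too. A small hypothetical Python check based on the response shape above:

```python
def is_healthy(status_code, body):
    """Treat the service as healthy only on HTTP 200 with a connected database."""
    return (
        status_code == 200
        and body.get("status") == "healthy"
        and body.get("database") == "connected"
    )

print(is_healthy(200, {"status": "healthy", "database": "connected"}))      # True
print(is_healthy(503, {"status": "unhealthy", "database": "disconnected"}))  # False
```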
```bash
bun run build
bun ./build/index.js
```

The app runs on port 3000 by default.
| Endpoint | Method | Description |
|---|---|---|
| `/api/health` | GET | Health check with database status |
| Endpoint | Method | Description |
|---|---|---|
| `/v1/ingest` | POST | Simple JSON log ingestion |
| `/v1/logs` | POST | OTLP/HTTP JSON log export |
| Endpoint | Method | Description |
|---|---|---|
| `/api/projects` | GET | List all projects |
| `/api/projects` | POST | Create project |
| `/api/projects/[id]` | GET | Get project details |
| `/api/projects/[id]` | PATCH | Update project (name, retention) |
| `/api/projects/[id]` | DELETE | Delete project |
| `/api/projects/[id]/regenerate` | POST | Regenerate API key |
| `/api/projects/[id]/logs` | GET | Query logs |
| `/api/projects/[id]/logs/stream` | POST | SSE stream |
| `/api/projects/[id]/stats` | GET | Level distribution |
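The `/logs/stream` endpoint uses Server-Sent Events. The exact event payloads Logwell emits aren't documented here, but SSE framing itself is standard: blank-line-separated blocks of `data:` lines. A generic Python parser sketch:

```python
import json

def parse_sse(stream_text):
    """Parse Server-Sent Events 'data:' lines into JSON payloads.

    Generic SSE framing only; the event shape shown below is illustrative,
    not Logwell's documented schema.
    """
    events = []
    for block in stream_text.split("\n\n"):
        data_lines = [line[5:].lstrip() for line in block.splitlines()
                      if line.startswith("data:")]
        if data_lines:
            events.append(json.loads("\n".join(data_lines)))
    return events

raw = 'data: {"level": "info", "message": "hi"}\n\ndata: {"level": "error", "message": "boom"}\n\n'
print(parse_sse(raw))
```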
Important: Logwell is alpha software. Evaluate these limitations before deploying.
| Limitation | Impact | Workaround |
|---|---|---|
| Single-user auth | No team collaboration | Share credentials (not recommended) |
| No log export | Can't backup to S3/file | Direct database dumps via pg_dump |
| No rate limiting | API keys have unlimited access | Implement at reverse proxy level |
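Since Logwell has no built-in rate limiting, the usual workaround is enforcing it at the reverse proxy. A hypothetical nginx sketch (the zone name, domain, and limits are illustrative; TLS certificate directives are omitted):

```nginx
# Illustrative only — tune rate and burst for your traffic.
limit_req_zone $binary_remote_addr zone=logwell_ingest:10m rate=50r/s;

server {
    listen 443 ssl;
    server_name logs.example.com;  # hypothetical domain
    # ssl_certificate / ssl_certificate_key omitted for brevity

    location /v1/ {
        limit_req zone=logwell_ingest burst=100 nodelay;
        proxy_pass http://127.0.0.1:3000;
    }

    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}
```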
- Always use TLS — Run behind a reverse proxy (nginx, Caddy) with HTTPS in production
- Protect API keys — Treat `lw_*` keys as secrets; they grant write access to your logs
- Network isolation — Consider firewall rules to restrict `/v1/logs` access to known sources
| Issue | Solution |
|---|---|
| Database connection refused | Ensure PostgreSQL is running: `docker compose up -d` |
| Admin seed fails | Check `ADMIN_PASSWORD` is at least 8 characters |
| Auth errors | Verify `BETTER_AUTH_SECRET` is at least 32 characters |
| Port 5432 in use | Stop other PostgreSQL instances or change the port in `compose.yaml` |
Using Logwell? Add the badge to your project:

```markdown
[](https://github.com/divkix/logwell)
```

Contributions are welcome! Here's how to get started:
```bash
# Fork and clone the repo
git clone https://github.com/YOUR_USERNAME/logwell.git
cd logwell

# Install dependencies
bun install

# Start dev environment
docker compose up -d
bun run db:push
bun run dev
```

Before submitting a PR:

- Run `bun run check` (TypeScript)
- Run `bun run lint` (Biome)
- Run `bun run test` (Vitest)
- Use conventional commits (`feat:`, `fix:`, `docs:`, etc.)
Report bugs: GitHub Issues




