EventLens is a read‑only dashboard for event‑sourced systems. It connects to your PostgreSQL event store (and optionally Kafka) and gives you:
- Timeline of events for any aggregate
- Search across event types and IDs
- Anomalies based on simple rules
- Export of raw events for debugging / analytics
- Live Event Stream from Kafka (optional)
It never mutates data – it only reads from your database and (optionally) a Kafka topic.
- Expose your events via a Postgres view called `eventlens_events` with the required columns.
- Create an EventLens config (you already have `eventlens.yaml` in this repo – use it as a template).
- Run the EventLens container (typically via `docker compose`), mounting the config file.
- Open the UI at `http://localhost:9090` and start exploring your events.
The rest of this README explains each step in detail.
EventLens expects a read‑only view (or table) with the following conceptual columns:
- `event_id` – globally unique ID, monotonically increasing per stream (often the primary key).
- `aggregate_id` – string ID of the aggregate / entity.
- `aggregate_type` – domain type (e.g. `ORDER`, `USER`).
- `sequence_number` – version within the aggregate’s stream (1, 2, 3, …).
- `event_type` – logical event type (e.g. `ORDER_PLACED`).
- `payload` – JSON body of the domain event.
- `metadata` – JSON with headers, correlation IDs, etc. (`{}` is fine).
- `timestamp` – event creation time (`timestamptz` or epoch seconds).
- `global_position` – total ordering across all events (often the same as `event_id`).
If your existing event table has a different shape, create a view that maps it to this schema. Example:
```sql
CREATE OR REPLACE VIEW eventlens_events AS
SELECT
    e.id                 AS event_id,
    e.aggregate_id::text AS aggregate_id,
    e.aggregate_type     AS aggregate_type,
    e.version            AS sequence_number,
    e.event_type         AS event_type,
    e.json_data          AS payload,
    '{}'::jsonb          AS metadata,
    COALESCE(
        (e.json_data::jsonb->>'createdDate')::timestamptz,
        e.created_at,
        CURRENT_TIMESTAMP
    ) AS timestamp,
    e.id                 AS global_position
FROM your_event_table e;
```

Key rules:
- Keep it read‑only (view only, no triggers).
- Do not change your existing write model – only project into this view.
EventLens loads configuration from one YAML file. In this project you already have an example file:
- `eventlens.yaml` – sample config for local / Docker use

You can place your active config in one of these locations:

- Working directory (easiest for Docker): `./eventlens.yaml`
- User config: `~/.eventlens/config.yaml`
- System config: `/etc/eventlens/config.yaml`
Below is a minimal configuration you can adapt (Postgres only, no Kafka):
```yaml
# EventLens Configuration
server:
  port: 9090
  allowed-origins:
    - "http://localhost:9090"
  auth:
    enabled: false   # Turn on + set username/password in shared environments

datasource:
  url: jdbc:postgresql://postgres:5432/your_db_name
  username: your_user
  password: your_password
  table: eventlens_events   # View created in section 2
  columns:
    event-id: event_id
    aggregate-id: aggregate_id
    aggregate-type: aggregate_type
    sequence: sequence_number
    event-type: event_type
    payload: payload
    timestamp: timestamp
    global-position: global_position
```

If you have Kafka and want live updates, add:

```yaml
kafka:
  bootstrap-servers: your-kafka:9092
  topic: your-events-topic
```

Recommended Kafka message JSON shape:
```json
{
  "event_id": 123,
  "aggregate_id": "1ffe55a0-08fa-4109-bec9-55c35dd879a4",
  "aggregate_type": "ORDER",
  "sequence_number": 3,
  "event_type": "ORDER_COMPLETED",
  "payload": {
    "aggregateId": "1ffe55a0-08fa-4109-bec9-55c35dd879a4",
    "version": 3,
    "createdDate": "2026-03-14T14:50:50.115861751Z",
    "eventType": "ORDER_COMPLETED"
  },
  "metadata": {},
  "timestamp": 1773499850.115862,
  "global_position": 123
}
```

Typical pattern:
- After writing to the Postgres event store, publish a Kafka message built from the same event row.
- Make the Kafka JSON match the `eventlens_events` view fields.
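The dual-write step can be sketched in Python (illustrative only; your producer may be written in any language, and `kafka_message_from_event_row` is a hypothetical helper whose field names simply mirror the `eventlens_events` view):

```python
import json

def kafka_message_from_event_row(row: dict) -> bytes:
    """Build the Kafka payload from the same row that was written to the
    event store, so the live stream and the eventlens_events view agree."""
    message = {
        "event_id": row["event_id"],
        "aggregate_id": row["aggregate_id"],
        "aggregate_type": row["aggregate_type"],
        "sequence_number": row["sequence_number"],
        "event_type": row["event_type"],
        "payload": row["payload"],
        "metadata": row.get("metadata", {}),
        "timestamp": row["timestamp"],
        "global_position": row["global_position"],
    }
    return json.dumps(message).encode("utf-8")
```

Because the message is derived from the stored row rather than built independently, the two representations cannot drift apart.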
The config can also define replay reducers and anomaly rules. From `eventlens.yaml`:

```yaml
replay:
  default-reducer: generic   # "generic" | classpath
  reducers:
    # BankAccount: com.myapp.reducers.BankAccountReducer

anomaly:
  scan-interval-seconds: 60
  rules:
    - code: NEGATIVE_BALANCE
      condition: "balance < 0"
      severity: HIGH
    - code: LARGE_WITHDRAWAL
      condition: "amount > 10000"
      severity: MEDIUM
```

You can start with the defaults and add rules later as your domain model evolves.
The simplest way to run EventLens in a project is via Docker Compose.
Add a service like this to your existing `docker-compose.yml` (or create one if needed):
```yaml
services:
  postgres:
    image: postgres:17-alpine
    environment:
      POSTGRES_DB: your_db_name
      POSTGRES_USER: your_user
      POSTGRES_PASSWORD: your_password
    ports:
      - "5432:5432"
    healthcheck:   # Required for the service_healthy condition below
      test: ["CMD-SHELL", "pg_isready -U your_user -d your_db_name"]
      interval: 5s
      timeout: 5s
      retries: 10

  # Optional: your Kafka stack here...
  # kafka:
  #   image: bitnami/kafka:latest
  #   ...

  eventlens:
    image: alphasudo2/eventlens-app:latest
    restart: on-failure
    environment:
      EVENTLENS_CONFIG: /app/eventlens.yaml
    volumes:
      - ./eventlens.yaml:/app/eventlens.yaml:ro   # Mount your config read-only
    ports:
      - "9090:9090"
    depends_on:
      postgres:
        condition: service_healthy   # Wait until postgres is ready
      # kafka:
      #   condition: service_healthy # Uncomment if you use Kafka Live Stream
    healthcheck:
      test: ["CMD-SHELL", "curl -sf http://localhost:9090/api/health || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 20s
```

Then start everything:

```shell
docker compose up -d
```

Open the UI in your browser:
http://localhost:9090
These features are backed by Postgres via the `eventlens_events` view and represent your source of truth:
- Timeline – inspect events for a single aggregate over time.
- Search – find events by type, ID, or filters.
- Anomalies – see events that match anomaly rules from the config.
- Export – download raw events for offline analysis.
To verify everything is wired correctly:
- Go to `http://localhost:9090`.
- Use the search or timeline view for a known `aggregate_id`.
- You should see the same events that exist in your Postgres event store.
When Kafka is configured and reachable:
- Trigger a new event in your application (e.g. create an order, transfer money, etc.).
- Open the Live Event Stream tab.
- A new row should appear for each new event published to the configured Kafka topic.
You can double‑check Kafka messages manually, for example with `kafka-console-consumer`, and confirm their JSON matches the schema in section 3.1.
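If you want to script that check, a small validator can compare each consumed message against the recommended shape (a Python sketch; the field names come from section 3.1, and `message_problems` is a hypothetical helper, not part of EventLens):

```python
# Required fields and coarse expected types for the recommended
# Kafka message shape described in section 3.1.
REQUIRED_FIELDS = {
    "event_id": int,
    "aggregate_id": str,
    "aggregate_type": str,
    "sequence_number": int,
    "event_type": str,
    "payload": dict,
    "metadata": dict,
    "timestamp": (int, float, str),  # epoch seconds or ISO string
    "global_position": int,
}

def message_problems(message: dict) -> list:
    """Return a list of schema problems in a consumed message (empty = OK)."""
    problems = []
    for field, expected in REQUIRED_FIELDS.items():
        if field not in message:
            problems.append(f"missing field: {field}")
        elif not isinstance(message[field], expected):
            problems.append(f"wrong type for {field}: {type(message[field]).__name__}")
    return problems
```

Run it over a handful of consumed messages before wiring the topic into EventLens; a non-empty result usually means the producer and the `eventlens_events` view have drifted apart.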
- Postgres (via `eventlens_events`) is the system of record.
- Kafka is used only for live streaming into the UI.
- Your application is responsible for the dual‑write:
  - Write to Postgres.
  - Then publish a corresponding Kafka message.
EventLens does not reconcile or repair differences between the database and Kafka.
- **Authentication**

  In the `server.auth` block you can enable basic auth:

  ```yaml
  server:
    auth:
      enabled: true
      username: admin
      password: changeme
  ```

  ⚠️ **HTTPS required for Basic Auth in production.**
  Basic Auth transmits credentials as Base64-encoded HTTP headers. Without TLS, they are readable in transit. Use a reverse proxy (nginx, Traefik, Caddy) with HTTPS in front of EventLens when `auth.enabled: true`, and use stronger credentials and secrets management in real deployments.

- **CORS / Frontend access**

  Restrict `server.allowed-origins` in production to the domains that should reach your dashboard.

- **API request limits**

  All list endpoints automatically cap results at 1,000 records per request on the server side (timeline, search, recent events, anomaly scan). Use `limit` + `offset` pagination for larger datasets.

- **Config locations**

  For non‑Docker environments, place the YAML config in one of:

  - `./eventlens.yaml`
  - `~/.eventlens/config.yaml`
  - `/etc/eventlens/config.yaml`
Use this checklist when adding EventLens to any event‑sourced system:
- Database
  - Identify your existing event table(s).
  - Create a Postgres view named `eventlens_events` with the required columns.
- Config
  - Copy `eventlens.yaml` into your project and adjust:
    - `datasource.url`, `username`, `password`.
    - `datasource.table` (usually `eventlens_events`).
    - `columns` mappings, if your column names differ.
  - (Optional) Add `kafka.bootstrap-servers` and `kafka.topic`.
  - (Optional) Configure `anomaly` rules and `replay` reducers.
- Runtime
  - Add the `eventlens` service to `docker-compose.yml` (or equivalent).
  - Mount the config file into `/app/eventlens.yaml`.
  - Expose port `9090`.
- Verification
  - Start containers: `docker compose up -d`.
  - Open `http://localhost:9090`.
  - Confirm timeline/search show events from your Postgres event store.
  - (If using Kafka) Confirm Live Event Stream updates when new events occur.
Once this checklist passes, your team has a self‑service event debugger they can rely on for day‑to‑day diagnostics, investigations, and domain exploration.
EventLens is a read-only, zero-intrusion visual debugger for PostgreSQL event stores (with optional Kafka live tail). It lets you:
- inspect aggregate timelines,
- replay state at any point in time,
- run bisect-style searches,
- detect anomalies,
- and stream new events live via WebSocket + React UI.
This document focuses on a 5-minute quick-start, configuration via `eventlens.yaml`, and production-friendly DB/index guidance.
- JDK 21+
- Node.js 18+ (only needed if you are building from source)
- Docker (for PostgreSQL + Kafka via `docker-compose`)
- Git and the Gradle wrapper (`./gradlew`, already in the repo)
From the project root:
```shell
docker compose up -d
```

This starts:

- `postgres:16` on `localhost:5432` with:
  - database: `eventlens_dev`
  - user: `postgres`
  - password: `secret`
- `kafka` on `localhost:9092`

Note: The `app` service now waits for both `postgres` and `kafka` to pass their health checks before starting (using `depends_on: condition: service_healthy`). This means the first start may take ~15–30 seconds while Kafka initialises.
If `seed.sql` is present in the repo, it is applied automatically to the `eventlens_dev` database at container start.
From the project root:
```shell
./gradlew clean :eventlens-app:shadowJar
```

This will:

- build all core modules,
- run the React/Vite build (`eventlens-ui` → `eventlens-api/src/main/resources/web`),
- assemble `eventlens-app/build/libs/eventlens.jar`.
From the project root:
```shell
java --enable-preview -jar eventlens-app/build/libs/eventlens.jar
```

Then open the UI in your browser:
http://localhost:9090
The server reads configuration from `eventlens.yaml` (see below), using the `server.port` and `datasource` settings.
Build the production image:
```shell
docker build -t eventlens:latest .
```

Run it, pointing to the config file:

```shell
docker run --rm \
  -p 9090:9090 \
  -v "$(pwd)/eventlens.yaml:/app/eventlens.yaml:ro" \
  --name eventlens \
  eventlens:latest
```

The UI will be available at http://localhost:9090.
EventLens is configured via a YAML file. The repository includes a sample `eventlens.yaml` at the root.
The effective search order is:

1. `EVENTLENS_CONFIG` env var (if set), e.g. `/app/eventlens.yaml`
2. `./eventlens.yaml` in the working directory
3. User-level config: `~/.eventlens/config.yaml`
4. System-level config: `/etc/eventlens/config.yaml`
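The search order can be expressed as a small resolution function. This is a Python sketch of the documented behaviour, not EventLens's actual implementation (`resolve_config` is a hypothetical name):

```python
import os

# Fallback locations, in the documented priority order.
SEARCH_PATHS = [
    "./eventlens.yaml",
    os.path.expanduser("~/.eventlens/config.yaml"),
    "/etc/eventlens/config.yaml",
]

def resolve_config(environ=os.environ, exists=os.path.exists):
    """Return the first config path that applies, or None if no config is found."""
    env_path = environ.get("EVENTLENS_CONFIG")
    if env_path:
        return env_path  # The env var wins unconditionally
    for path in SEARCH_PATHS:
        if exists(path):
            return path
    return None
```

Note that in this reading, `EVENTLENS_CONFIG` takes priority even if the file it points to does not exist, which surfaces misconfigured mounts as an explicit error rather than a silent fallback.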
```yaml
server:
  port: 9090
  allowed-origins:
    - "http://localhost:5173"   # Vite dev server
    - "http://localhost:9090"   # Embedded server
  auth:
    enabled: false
    username: admin
    password: changeme
```

- `port`: HTTP server port for the API and bundled UI.
- `allowed-origins`: CORS whitelist for the browser UI.
  - In development, using `"*"` is acceptable.
  - In production, restrict this to your real UI origins.
- `auth.enabled`: If `true`, basic auth is enforced on `/api/*` routes.
```yaml
datasource:
  url: jdbc:postgresql://localhost:5432/eventlens_dev
  username: postgres
  password: secret
  table:
```

- `url`: JDBC URL for the event store database.
- `username` / `password`: Credentials for a read-only Postgres role (see section 3).
- `table`: Optional. If omitted, EventLens attempts to auto-detect the events table schema.
```yaml
kafka:
  bootstrap-servers: localhost:9092
  topic: domain-events
```

- Remove or comment out the `kafka` section to disable Kafka.
- When enabled, EventLens uses `KafkaEventMapper` and `KafkaLiveTail` to stream events into the UI.
```yaml
replay:
  default-reducer: generic   # "generic" | classpath
  reducers:
    # BankAccount: com.myapp.reducers.BankAccountReducer
```

- `default-reducer`:
  - `generic`: Use the built-in JSON-merge reducer.
  - `classpath`: Load custom reducers from the classpath (see `ClasspathReducerLoader`).
- `reducers`: Optional mapping from aggregate type → fully qualified reducer class.
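As a rough illustration of what a JSON-merge reduction does, here is a Python sketch. The built-in reducer's exact semantics may differ; this assumes a shallow, last-write-wins merge of payloads in sequence order:

```python
def generic_reduce(events):
    """Fold event payloads into a state snapshot using a shallow,
    last-write-wins merge, applied in sequence_number order."""
    state = {}
    for event in sorted(events, key=lambda e: e["sequence_number"]):
        state.update(event["payload"])  # later events overwrite earlier keys
    return state
```

A custom reducer is worth writing when your domain state is not a simple overlay of payload fields, e.g. a balance that accumulates deltas rather than being restated in every event.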
```yaml
anomaly:
  scan-interval-seconds: 60
  rules:
    - code: NEGATIVE_BALANCE
      condition: "balance < 0"
      severity: HIGH
    - code: LARGE_WITHDRAWAL
      condition: "amount > 10000"
      severity: MEDIUM
```

- `scan-interval-seconds`: Background anomaly scan rate.
- Each rule:
  - `code`: Stable identifier for the anomaly.
  - `condition`: Expression evaluated against replayed state (sanitized by the bisect parser).
  - `severity`: Enum such as `LOW`, `MEDIUM`, `HIGH`.
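To illustrate how such a rule might match, here is a hedged Python sketch of a tiny condition evaluator. EventLens's real grammar lives in its bisect parser and likely supports more than this; the sketch covers only numeric comparisons like those in the sample rules:

```python
import operator
import re

_OPS = {"<": operator.lt, ">": operator.gt, "<=": operator.le,
        ">=": operator.ge, "==": operator.eq, "!=": operator.ne}

# field, comparison operator, numeric literal (e.g. "balance < 0")
_CONDITION = re.compile(r"^\s*(\w+)\s*(<=|>=|==|!=|<|>)\s*(-?\d+(?:\.\d+)?)\s*$")

def matches(condition: str, state: dict) -> bool:
    """Evaluate a simple 'field op number' condition against replayed state."""
    m = _CONDITION.match(condition)
    if not m:
        raise ValueError(f"unsupported condition: {condition!r}")
    field, op, literal = m.groups()
    if field not in state:
        return False  # state without the field cannot trigger the rule
    return _OPS[op](state[field], float(literal))
```

So a replayed account state of `{"balance": -25}` would trigger `NEGATIVE_BALANCE`, while an order with no `balance` field is simply skipped.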
```yaml
ui:
  theme: dark   # dark | light
```

For production, EventLens should connect using a read-only database user.
Assuming your primary owner role is `app_owner` and the events are in schema `public`:
```sql
-- 1) Create a dedicated read-only role
CREATE ROLE eventlens_ro LOGIN PASSWORD 'strong_password_here';

-- 2) Grant connect on the database
GRANT CONNECT ON DATABASE eventlens_dev TO eventlens_ro;

-- 3) Grant usage on relevant schemas
GRANT USAGE ON SCHEMA public TO eventlens_ro;

-- 4) Grant select on existing tables
GRANT SELECT ON ALL TABLES IN SCHEMA public TO eventlens_ro;

-- 5) Ensure future tables are also selectable
ALTER DEFAULT PRIVILEGES IN SCHEMA public
    GRANT SELECT ON TABLES TO eventlens_ro;
```

Then update `eventlens.yaml`:
```yaml
datasource:
  url: jdbc:postgresql://your-prod-host:5432/your_prod_db
  username: eventlens_ro
  password: strong_password_here
```

EventLens primarily runs read-only queries against your event store. To keep replays and timelines fast, ensure:
- **Primary event table indexes**

  For a typical event store table:

  ```sql
  CREATE TABLE domain_events (
      id             BIGSERIAL PRIMARY KEY,
      aggregate_id   TEXT NOT NULL,
      aggregate_type TEXT NOT NULL,
      sequence       BIGINT NOT NULL,
      occurred_at    TIMESTAMPTZ NOT NULL,
      payload        JSONB NOT NULL
  );
  ```

  Recommended indexes:

  ```sql
  -- Look up all events for a single aggregate, in order
  CREATE INDEX IF NOT EXISTS idx_domain_events_agg_seq
      ON domain_events (aggregate_type, aggregate_id, sequence);

  -- Time-based queries / live windows
  CREATE INDEX IF NOT EXISTS idx_domain_events_occurred_at
      ON domain_events (occurred_at);
  ```

- **Partial / functional indexes (optional)**

  If EventLens frequently searches or filters on JSON fields (e.g. `payload->>'status'`), consider:

  ```sql
  CREATE INDEX IF NOT EXISTS idx_domain_events_status
      ON domain_events ((payload->>'status'));
  ```

- **Analyze and vacuum**

  Ensure autovacuum and auto-analyze are functioning, or run:

  ```sql
  ANALYZE domain_events;
  ```
- Health endpoint: `GET /api/health`
  - Verifies:
    - DB connectivity,
    - event store statistics,
    - Kafka consumer status (if enabled).
- Logging:
  - Controlled via `logback.xml` in `eventlens-core`.
  - Includes:
    - connection events,
    - replay durations,
    - anomaly results,
    - WebSocket lifecycle events.
In containerized environments, direct logs to stdout/stderr and collect them via your platform (Kubernetes, ECS, Docker logs, etc.).
- Run only the UI in dev mode:

  ```shell
  cd eventlens-ui
  npm install
  npm run dev
  ```

  Then point the dev UI at an already running API server (`server.allowed-origins` should include the Vite origin).

- Testcontainers-based tests:

  Integration tests for PostgreSQL and Kafka use Testcontainers and require Docker to be running.
- Configure a read-only Postgres user in your environment.
- Add or adjust indexes to match your event table shape.
- For production deployments, enable basic auth in `eventlens.yaml` and put an HTTPS reverse proxy in front of EventLens.
- Build and run the Docker image alongside your own PostgreSQL/Kafka infrastructure, or reuse the provided `docker-compose.yml` for local debugging.
| File | Description |
|---|---|
| LICENSE | MIT License |
| CHANGELOG.md | Version history and release notes |
| CONTRIBUTING.md | Build, test, and PR guidelines |
| eventlens.yaml.example | Annotated config template |