Userspace-assisted DLP sensor: on Linux, attach eBPF uprobes to OpenSSL SSL_write, copy outbound TLS plaintext chunks to a ring buffer, and scan them with Aho–Corasick substring matching. When a policy pattern matches, the tool logs an alert and a redacted copy of the buffer—useful for catching secrets and PII before they reach AI APIs (or any TLS peer using libssl).
The policy engine, rolling-buffer aggregation, allowlist, dedupe, and alert pipeline live in internal/sensorcore. On Linux, cleartext from eBPF is the primary path; optional pluggable feeds (localhost ingest, stdin JSONL, hookwire socket) can still call HandleChunk / HandleChunkWithMeta for tests and auxiliary capture—see Platforms.
Scope: spectral-mesh is Linux-only: full observation and logging with redaction in Go; optional kernel-side blocking (-enable-bpf-blocking) can wipe matching plaintext in the outbound buffer before it is sent (keep it off unless you explicitly want that behavior). The separate spectral-edge binary (Network edge) is plain Go and can be built for multiple OSes for HTTP-body inspection.
- How it works
- Platforms
- Requirements
- Build
- Run
- CLI flags and environment
- Policy patterns
- BPF blocking and the verifier
- Logging and metrics
- Try it
- Docker
- Network edge (spectral-edge)
- Repository layout
- Testing, CI, and load benchmarks
- HTTPS flood script (no API key)
- OpenAI flood script (live API)
- Simulation scripts
- Tests layout
- Discovery — Scan `$HOST_PROC` (default `/proc`) for `*/maps` entries listing a mapped `libssl.so` (any 1.1 / 3.x), resolve paths under each process root, and attach a uprobe on `SSL_write` once per library inode. Discovery repeats every 10 seconds to pick up new processes.
- Capture — The BPF program reserves a ringbuf slot, copies up to 512 bytes from the user buffer, records the chunk length, and submits PID and `comm`. If ring buffer reservation fails, a per-CPU kernel counter is incremented (exposed as a Prometheus gauge).
- Match — Go appends each chunk to a per-PID rolling buffer (last ~4 KiB) and runs Aho–Corasick on that buffer. That way a sensitive substring split across multiple `SSL_write` calls—common with HTTP/2 (small TLS records)—can still match. Match cost scales with rolling-buffer size, not naïvely with pattern count alone. Rolling state for exited PIDs is pruned on the same 10-second tick as discovery. Optional allowlist (`-allowlist`) skips alerts by process name prefix; dedupe (`-alert-dedupe-window`) suppresses repeat PID+rule alerts in a time window.
- Redact — On match, overlapping runs are masked with `*` in the logged copy. If `-enable-bpf-blocking` is on and a rule has `action: block`, the eBPF program may overwrite the user buffer (see `docs/PACKAGING.md` and BPF blocking below).
| OS | Capture | Blocking | Notes |
|---|---|---|---|
| Linux | eBPF uprobes on `SSL_write` + ring buffer; optional ingest bridge (`-capture-ingest-addr`), stdin JSONL, or hookwire (`-capture-hook-socket`) | Optional `-enable-bpf-blocking` (kernel buffer wipe) | Sources: `main_linux.go`, `discover_linux.go`, `bpf/scrubber.c` |
Demo (ingest only, no uprobes required for the demo path):

- Terminal A: `spectral-mesh -policy policy.json -capture-ingest-addr 127.0.0.1:9092`
- Terminal B: `go run ./cmd/spectral-capture-demo -ingest-url http://127.0.0.1:9092/v1/ingest/chunk` — expect `policy_alert` on stdout.

Optional `-capture-ingest-token` / `Authorization: Bearer` for hardening. (`:9092` avoids clashing with Prometheus in `monitoring/docker-compose.yml` on host `:9091`.)
-capture-hook-socket unix:… or tcp:… accepts a binary stream documented in internal/capture/hookwire (36-byte little-endian header, max 128 KiB per chunk, then raw TLS plaintext bytes). Use this for lab tests or a custom in-process sender paired with the mesh.
Not covered: TLS stacks without a visible SSL_write uprobe target (e.g. some static builds), BoringSSL inside Chrome, QUIC, or non-OpenSSL TLS implementations without a separate capture path.
For HTTP-body policy testing on any OS without host TLS capture, use spectral-edge (docs/EDGE.md).
- Kernel with BTF available (e.g. `/sys/kernel/btf/vmlinux`) for CO-RE-style builds.
- Toolchain: `clang`, `llvm`, `libbpf` development headers; `bpftool` to generate `bpf/vmlinux.h`.
- Go: version compatible with `go.mod` (see repo root).
- Privileges: attaching uprobes and loading BPF typically requires appropriate capabilities (often `root` or `CAP_SYS_ADMIN`/`CAP_BPF` depending on the system).
libssl: Probes target the SSL_write symbol in whatever libssl.so processes map (e.g. .so.3 on RHEL 9). Mismatched or statically linked OpenSSL will not be seen.
Generate the kernel BTF header (once per machine or after kernel changes):
```
bpftool btf dump file /sys/kernel/btf/vmlinux format c > bpf/vmlinux.h
```

`spectral_bpf_generate.go` holds `//go:generate` directives (not behind GOOS tags) so `go generate ./...` can run from CI or any host with a suitable clang + libbpf toolchain (often Linux, or `make generate-docker`). bpf2go compiles `bpf/scrubber.c` for amd64 and arm64 BPF targets. Generated artifacts look like `spectral_*_bpfel.go` and matching `.o` files (do not edit). The directives pass `-no-strip` so a host `llvm-strip` binary is not required.
On Linux (Fedora/RHEL: dnf install clang llvm libbpf-devel; Debian/Ubuntu: apt install clang llvm libbpf-dev):
```
make generate   # or: BPF2GO_CFLAGS=... go generate ./...
make build      # CGO_ENABLED=1 go build -o spectral-mesh .
```

Or in one step:

```
make            # runs generate then build
```

If your laptop is not a Linux box with libbpf headers, run generation inside Docker (requires Docker Desktop or a compatible engine):

```
make generate-docker
```

The Makefile uses an Ubuntu 24.04 image (`GO_GEN_IMAGE`, overridable) with clang 18 and `GOTOOLCHAIN=auto` so the Go version matches `go.mod`. Build the Linux sensor with `CGO_ENABLED=1` on a Linux machine (or image) that has libbpf and clang.
Pass -ldflags with Version, GitCommit, and BuildTime the same way as make build when you need embedded build metadata in /version and logs.
To clean the binary and generated artifacts:

```
rm -f spectral-mesh spectral_*_bpfel.go spectral_*_bpfel.o
```

Run the sensor:

```
sudo ./spectral-mesh
```

With Prometheus metrics on port 9090 and a policy file:

```
sudo ./spectral-mesh -metrics-addr :9090 -policy /path/to/policy.json
```

On startup you should see JSON logs including `discover_start` and `uprobe_attached` for hooked libssl paths; leave it running and generate TLS traffic from processes using OpenSSL (e.g. curl, many language HTTP clients).
| Flag | Meaning |
|---|---|
| `-policy` | Path to a JSON policy file (see Policy patterns). If omitted, built-in defaults from `internal/policy` are used. |
| `-metrics-addr` | HTTP listen address for Prometheus metrics plus `/healthz`, `/readyz`, and `/version` (for example `:9090`). Empty disables the HTTP server. |
| `-log-level` | `debug`, `info`, `warn`, or `error`. |
| `-ringbuf-bytes` | BPF ring buffer size in bytes (power of two, multiple of page size; 0 = 16 MiB). Tune if `spectral_mesh_ringbuf_reserve_drops_total` grows. |
| `-log-redacted-preview` | If false, `policy_alert` logs omit the `redacted_preview` field (metadata only). |
| `-alert-max-per-window` | Max logged alerts per `-alert-window` across all PIDs (0 = unlimited). |
| `-alert-max-per-pid-per-window` | Max logged alerts per PID per window (0 = unlimited). |
| `-alert-window` | Window for alert caps (default 1m). |
| `-alert-dedupe-window` | Suppress duplicate alerts for the same PID+rule within this duration (0 = off). |
| `-allowlist` | JSON file with `comm_prefixes` to skip alerts by process name (host sensor). The same file may include `user_agent_prefixes` for spectral-edge (see docs/EDGE.md). |
| `-enable-bpf-blocking` | Load `action: block` rules into BPF (may wipe outbound TLS plaintext); default off. |
| `-k8s-enrich` | Linux only: add `k8s_namespace`, `k8s_pod`, `k8s_pod_uid` to `policy_alert` when `NODE_NAME` is set and the pod has in-cluster RBAC to list/watch pods on that node. |
| `-log-payload-sha256` | Add `payload_sha256` of the rolling plaintext buffer to `policy_alert` logs. |
| `-version` | Print version, git_commit, and build_time, then exit (values set when built via make). |
| Environment | Meaning |
|---|---|
| `LOG_LEVEL` | Default for `-log-level` when you do not pass the flag. |
| `HOST_PROC` | Root used instead of `/proc` for discovery and PID pruning (no trailing slash). |
| `NODE_NAME` | Linux mesh with `-k8s-enrich`: Kubernetes node name (downward API `spec.nodeName`) for pod informer filtering. |
Policy reload: if -policy points to a file, send SIGHUP to reload patterns without restarting (see docs/RUNBOOK.md).
Build identity: make injects version, git_commit, and build_time into the binary. They appear in the sensor_active log line, on GET /version (JSON, when -metrics-addr is set), and via -version. For what data leaves the host in logs and metrics, see docs/DATA_HANDLING.md.
- Built-in defaults are defined in `internal/policy` (`DefaultPatterns`): API keys, PEM markers, `Bearer `/`Authorization:`, cloud-style env markers, common token prefixes, PII phrases, and classification text—plus sample strings such as `Project Ethos` and `Internal-Secret`.
- External policy — Prefer v2 JSON with `schema_version` and `rules` (`id`, `pattern`, `severity`, `action`: `observe` or `block`). Legacy `patterns` arrays still work.
```json
{
  "schema_version": 2,
  "rules": [
    {"id": "ex", "pattern": "Bearer ", "severity": "high", "action": "observe"}
  ]
}
```

See `policy.example.json`. Use `-policy /path/to/file.json` (no rebuild required). Pattern count and length are capped (`internal/policy` limits). `action: block` patterns are limited to 16 bytes and matched in-kernel only in the first ~32 bytes of each captured TLS chunk (verifier-safe bounds); use `observe` for longer needles (full userspace rolling buffer).
Packaging: systemd example under packaging/systemd/, notes in docs/PACKAGING.md. Helm charts and cloud Terraform are not maintained in this repository; use the Dockerfiles, systemd unit, and runbooks here as a base. Example allowlist: allowlist.example.json.
Linux only. Kernel blocking is off by default (-enable-bpf-blocking). When enabled on Linux, action: block rules are loaded into BPF; on a match, the program zeros the outbound user buffer (in fixed 64-byte chunks and 1-byte tails so the Linux BPF verifier accepts bpf_probe_write_user sizes).
Semantics differ from userspace matching: in-kernel substring search uses short patterns (≤ 16 bytes) and a small scan window in each captured chunk (see bpf/scrubber.c). Userspace still runs the full rolling buffer and longer patterns for observe rules.
Changing bpf/scrubber.c requires go generate ./... and a rebuild; load failures usually mean the verifier rejected the program (bounds, loop complexity, or helper arguments). CI compiles BPF on every push so broken C is caught early.
- Logs are JSON to stdout (`log/slog`), suitable for shipping to a log stack or SIEM. Alerts use the `policy_alert` message with `schema_version`, `rule_id`, `matched_rule_ids`, `severity`, `action`, `enforcement_mode` (how this OS applies block rules), `bpf_blocking_enabled`, `kernel_wipe_eligible`, `kernel_wipe_applied`, plus `comm`, `pid`, and optional `redacted_preview`/`payload_sha256`.
- Metrics (when `-metrics-addr` is set) use Prometheus naming, including:
  - `spectral_mesh_ringbuf_events_total` — chunks delivered to the policy engine (primarily from the BPF ring buffer; also increments for optional ingest/hook paths that call `HandleChunk`)
  - `spectral_mesh_policy_alerts_total` — policy matches actually logged (after rate limiting)
  - `spectral_mesh_policy_alerts_rate_limited_total` — matches suppressed by rate limits
  - `spectral_mesh_ringbuf_reserve_drops_total` — Linux: kernel-side `bpf_ringbuf_reserve` failures; other OSes: always `0`
  - `spectral_mesh_tls_blocked_writes_kernel_total` — Linux: kernel-side plaintext wipes when blocking is enabled; other OSes: always `0`
  - `spectral_mesh_policy_alert_dedupe_suppressed_total` — matches suppressed by the dedupe window
  - `spectral_mesh_policy_block_match_observe_only_total` — block-rule matches while the kernel wipe path is inactive (e.g. with `-enable-bpf-blocking` off)
  - `spectral_mesh_bpf_blocking_enabled` — `1` or `0`
  - `spectral_mesh_policy_block_rules_configured` — number of block rules loaded toward the BPF limit (up to 8)
  - `spectral_mesh_uprobe_attach_success_total`, `spectral_mesh_uprobe_attach_open_errors_total`, `spectral_mesh_uprobe_attach_probe_errors_total`
  - `spectral_mesh_roll_buffer_pids_pruned_total`, `spectral_mesh_ringbuf_read_errors_total`
  - `spectral_mesh_policy_reloads_total`, `spectral_mesh_policy_reload_errors_total`
With -metrics-addr :9090 (change host/port if you use another address), from the same machine:
```
# Full Prometheus text exposition
curl -sS http://127.0.0.1:9090/metrics

# Only this binary's metrics (names prefixed with spectral_mesh_)
curl -sS http://127.0.0.1:9090/metrics | grep '^spectral_mesh'

# Liveness / readiness (HTTP 200 when ready)
curl -sS -o /dev/null -w '%{http_code}\n' http://127.0.0.1:9090/healthz
curl -sS -o /dev/null -w '%{http_code}\n' http://127.0.0.1:9090/readyz

# Build metadata (JSON)
curl -sS http://127.0.0.1:9090/version
```

`curl: (7) Failed to connect` means nothing is listening on that port—usually because the process was started without `-metrics-addr`, or you are curling from another host while the listener is only on the sensor machine (or inside a pod: use `kubectl port-forward` to 9090). Confirm with `ss -tlnp | grep 9090` after starting:

```
sudo ./spectral-mesh -metrics-addr :9090
```
Terminal 1:

```
sudo ./spectral-mesh
```

Terminal 2 — Any HTTPS request whose plaintext (after TLS) contains a forbidden substring should trigger an alert. Example (replace URL and body with something that includes a test pattern from your policy):

```
curl -sS -X POST https://example.com/ \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Details on Project Ethos"}' >/dev/null
```

If you see nothing, confirm Terminal 1 logged `uprobe_attached` for your host's OpenSSL, and that you rebuilt after BPF changes (`go generate` + rebuild). For a deterministic single-request test you can force HTTP/1.1 (larger TLS writes): `curl --http1.1 ...`.
curl: (60) SSL certificate problem means OpenSSL could not validate the server chain against your trust store (missing or outdated CA bundle, custom MITM proxy, or minimal container image). Fix the trust store first—on Fedora/RHEL install or refresh CAs: sudo dnf install -y ca-certificates and ensure /etc/pki/tls/certs/ca-bundle.crt exists. Point curl at it explicitly if needed: curl --cacert /etc/pki/tls/certs/ca-bundle.crt .... Lab-only workaround: curl -k skips verification (insecure); TLS still encrypts, but you lose server authentication.
For vendor APIs you need valid credentials and endpoints; the mesh does not depend on a specific provider.
Multi-stage image (UBI 9) builds the binary and ships spectral-mesh under /usr/local/bin/. Building:
```
docker build -t spectral-mesh .
```

Running this meaningfully still requires host permissions for BPF and access to host `/proc` and libraries—typical deployments use a privileged DaemonSet or equivalent, not an isolated default bridge container. Pass flags such as `-metrics-addr :9090` via container args if you want metrics inside the pod.
For HTTP(S) after TLS termination (reverse proxy, load balancer, or service mesh), spectral-edge applies the same policy JSON to request bodies, logs policy_alert with sensor_kind: edge, and can reverse-proxy to -upstream after scanning. No BPF and no CGO.
```
make edge
./spectral-edge -listen 127.0.0.1:8080

# optional: TLS on the listen socket
./spectral-edge -listen :8443 -tls-cert /path/to/cert.pem -tls-key /path/to/key.pem

# optional: shared secret for all routes except /healthz and /readyz
./spectral-edge -api-key "$SPECTRAL_EDGE_API_KEY" ...
```

Distribution: `Dockerfile.edge` builds a distroless image for spectral-edge only (the root Dockerfile is spectral-mesh). Pushing a `v*` tag runs `.github/workflows/release-edge.yml` (multi-arch tarballs, checksums, `go version -m` text, GHCR image). Details: docs/EDGE.md.
Highlights: configurable -scan-path (default /v1/scan), -proxy-mount-path to scope which URL prefixes are scanned, -trusted-proxy-cidrs for safe X-Forwarded-For handling, optional -http-ratelimit-rps (per-IP HTTP throttle before auth), SIGHUP policy reload, allowlist user_agent_prefixes, alert dedupe and rate limits, traceparent → trace_id on alerts, Prometheus histograms and extra counters, and OpenAPI served at GET /openapi.yaml (source: cmd/spectral-edge/openapi.yaml).
Policy + mesh: docs/POLICY_MESH_AND_EDGE.md. Edge production runbook: docs/EDGE_PRODUCTION.md.
Reviewers: docs/EDGE_THREAT_MODEL.md, docs/EDGE_VS_MESH.md, docs/EDGE_CAPACITY.md.
Full flag list and operations: docs/EDGE.md.
Tests: go test ./cmd/spectral-edge/... -short (userspace only; no BPF). Includes reverse-proxy integration tests against httptest upstreams, BenchmarkEdgeProxyThroughput, testutil checks on Prometheus counters (dedupe, allowlist, rate limits), custom scan-path, API key auth, /metrics, and shadow-mode cases (block rules alert but default -reject-on-block-rule still forwards traffic—see shadow_edge_test.go). The repository root package spectral-mesh also has tests that require generated BPF artifacts—run make generate (or make generate-docker) before go test ./... if those files are missing. Full map: tests/README.md.
| Path | Role |
|---|---|
| `main_linux.go` | Linux main: eBPF load, `/proc` discovery, uprobe attach, ringbuf reader, SIGHUP + `syncBlockBPF`, graceful shutdown |
| `cli.go` | CLI flags and helpers for spectral-mesh |
| `discover_linux.go` | Linux-only `discoverAndAttach` / `procRootFromEnv` |
| `capture_bridge.go` | Optional ingest HTTP server and hookwire listener when `-capture-ingest-addr` / `-capture-hook-socket` are set |
| `bpf_load.go` | Linux: BPF load with configurable ring buffer map size |
| `block_bpf.go` | Linux: sync block patterns / `block_enabled` maps from policy |
| `metrics.go` | Linux: Prometheus registration + BPF map-backed gauges |
| `spectral_bpf_generate.go` | `//go:generate` bpf2go directives (OS-agnostic file so `go generate` works everywhere) |
| `version.go` | Version, GitCommit, BuildTime placeholders (make sets via `-ldflags`) |
| `spectral_elf_linux_test.go` | Linux: verifies embedded BPF ELF parses (no kernel load) |
| `cmd/spectral-edge/` | HTTP edge inspector: same policy + `policy_alert` on request bodies; optional `-upstream` reverse proxy (docs/EDGE.md) |
| `cmd/spectral-edge/openapi.yaml` | OpenAPI 3 source for `GET /openapi.yaml` on a running spectral-edge |
| `cmd/spectral-capture-demo/` | Demo client: split-chunk `POST /v1/ingest/chunk` (used by `scripts/simulate_capture_demo.sh`) |
| `internal/sensorcore/` | Shared alert limiter + Processor (`HandleChunk`): policy scan path used by the mesh and spectral-edge |
| `internal/policy/` | v2 rules + legacy patterns, `LoadDocument`, validation, BPF block helpers |
| `internal/policyengine/` | Shared Aho–Corasick engine + hot-reload holder; used by host sensor and spectral-edge |
| `internal/allowlist/` | Optional allowlist JSON (`comm_prefixes` for mesh, `user_agent_prefixes` for edge) |
| `internal/dedupe/` | Time-window dedupe for alerts |
| `internal/rollbuf/` | Per-PID rolling buffer, OS-specific PID listing for prune, match/redact helpers |
| `internal/capture/ingest/` | HTTP `/v1/ingest/chunk` handler for `-capture-ingest-addr` |
| `internal/capture/hookwire/` | Binary frame codec for `-capture-hook-socket` |
| `internal/k8sresolve/` | Linux: cgroup + informer wiring for `-k8s-enrich` (`k8s_*` fields on alerts) |
| `bpf/scrubber.c` | BPF: `SSL_write` uprobe, ringbuf event, optional block match + user buffer wipe |
| `bpf/user_pt_regs_arm64.h` | ARM64 `pt_regs` typedefs included from scrubber.c |
| `bpf/vmlinux.h` | Generated from kernel BTF (bpftool); not committed until you run `make generate` on a machine with `/sys/kernel/btf/vmlinux` |
| `spectral_*_bpfel.*` | Generated by `go generate` / bpf2go (do not edit) |
| `go.mod`, `go.sum` | Go module metadata |
| `docs/RUNBOOK.md` | Operations: health, reload, rate limits |
| `docs/DEMO_RUNBOOK.md` | Sales / live demo script (spectral-edge + optional Grafana / spectral-mesh) |
| `docs/EDGE.md` | Network edge deployment (spectral-edge) |
| `docs/POLICY_MESH_AND_EDGE.md` | One policy file for spectral-mesh and spectral-edge |
| `docs/EDGE_PRODUCTION.md` | Production patterns: auth, trusted proxies, HTTP rate limits |
| `docs/EDGE_THREAT_MODEL.md` | Threat model and trust boundaries (spectral-edge) |
| `docs/EDGE_VS_MESH.md` | Comparison: edge vs host sensor |
| `docs/EDGE_CAPACITY.md` | Capacity / load notes and benchmarks |
| `docs/PACKAGING.md` | systemd and install notes; container-focused packaging |
| `docs/DATA_HANDLING.md` | What appears in logs and metrics (privacy reviews) |
| `Dockerfile` | UBI-based spectral-mesh image (eBPF; not spectral-edge) |
| `Dockerfile.edge` | Distroless spectral-edge image (not spectral-mesh) |
| `.dockerignore` | Smaller Dockerfile.edge build context (`.git`, `dist/`) |
| `Makefile` | generate, generate-docker, build, edge, edge-docker (Dockerfile.edge), test, ci (Linux mesh + tools) |
| `packaging/systemd/` | Example systemd unit |
| `policy.example.json`, `allowlist.example.json` | Example policy and allowlist JSON |
| `monitoring/` | Docker Compose: Prometheus + Grafana + provisioned Spectral mesh / Spectral edge dashboards (monitoring/README.md) |
| `monitoring/prometheus/spectral-mesh-rules.yml` | Example Prometheus alert rules for spectral-mesh |
| `monitoring/prometheus/spectral-edge-rules.yml` | Example Prometheus alert rules for spectral-edge |
| `scripts/build_spectral_edge_release.sh` | Local build of spectral-edge release tarballs |
| `scripts/load_edge_smoke.sh` | Load test against edge (hey or parallel curl) |
| `scripts/simulate_mesh.sh` | Outbound HTTPS simulation for spectral-mesh (curl / OpenSSL, Linux eBPF) |
| `scripts/simulate_mesh_ingest.sh` | `POST /v1/ingest/chunk` simulation (any OS; needs `-capture-ingest-addr`) |
| `scripts/simulate_mesh_grafana.sh` | Ingest burst (+ optional split chunks) for Grafana mesh dashboard |
| `scripts/simulate_capture_demo.sh` | Wrapper for `go run ./cmd/spectral-capture-demo` |
| `scripts/simulate_edge.sh` | HTTP POST simulation for spectral-edge |
| `scripts/simulate_edge_scan.sh` | `POST /v1/scan` only (handler latency in Grafana) |
| `scripts/simulate_edge_grafana.sh` | Mixed traffic for Grafana `spectral_edge_*` metrics |
| `scripts/flood_https.sh` | HTTPS curl flood without API key (general testing) |
| `scripts/flood_openai.sh` | Optional OpenAI API flood (curl); requires API key |
| `tests/load/` | Benchmarks for append+match and large prune (no eBPF) |
| `tests/fleet/` | Simulations: many PIDs, proc churn, stress prune |
| `tests/README.md` | Where tests live and how to run subsets without BPF |
The daemon needs root/CAP_BPF and a suitable kernel to run; automated tests focus on userspace logic that does not load BPF.
spectral-edge (cmd/spectral-edge/) has dedicated tests (mux, auth, client IP, /v1/scan, proxy integration, shadow vs reject-on-block behavior, rate limits, body codec). go test ./... at the repo root compiles the spectral-mesh package for the current GOOS and needs go generate output (spectral_*_bpfel.*) so Linux tests can parse embedded BPF. If you only work on edge, run go test ./cmd/spectral-edge/... ./internal/... -short without generating BPF.
```
# Same checks as CI (gofmt, vet, test, go generate, CGO build)
make ci

# Unit tests (policy, rollbuf, fleet simulations) — verbose lines per test
make test

# Throughput and allocation profiles for matcher + rolling buffer (~few seconds)
make bench

# Include the heavier prune stress test (~15k ghost PIDs)
go test ./tests/fleet/...
```

Plain `go test ./...` only prints package names when everything passes; use `make test` (adds `-v`) or `go test -v ./...` if you want each test name on screen.
CI (.github/workflows/ci.yml): on each push/PR to main or master, linux/amd64 and linux/arm64 jobs each install libbpf, clang, and llvm, then run gofmt, go vet, go test -short, go generate ./... (bpf2go for both architectures), CGO_ENABLED=1 go build (spectral-mesh for Linux), and CGO_ENABLED=0 go build (spectral-edge). Concurrency: newer runs cancel older ones on the same branch.
Releases (spectral-edge): pushing a v* tag runs .github/workflows/release-edge.yml — static tarballs (linux/darwin × amd64/arm64), checksums, go version -m output, GitHub Release assets, and a GHCR image ghcr.io/<owner>/<repo>/spectral-edge:<tag> (see docs/EDGE.md).
Prometheus + Grafana: see monitoring/README.md for Docker Compose (Prometheus on host :9091 scrapes spectral-mesh on :9090 and spectral-edge on :8080 by default; Grafana on :3000 with provisioned Spectral mesh / Spectral edge dashboards). Example edge alert rules: monitoring/prometheus/spectral-edge-rules.yml (merge into your Prometheus config as needed; not mounted by default in Compose).
Real end-to-end proof still requires running spectral-mesh on a host or VM with TLS workload generators; repository tests approximate fleet-scale PID churn and sustained chunk processing on the Go side only. On linux/amd64 and linux/arm64, TestSpectralEmbeddedCollectionSpec verifies the embedded BPF ELF parses (no kernel load).
scripts/flood_https.sh runs parallel curl requests to public HTTPS sites (default GET https://example.com, or POST JSON to https://httpbin.org/post). No API key — useful for general TLS/SSL_write load and metrics checks. Optional TRIGGER_SAMPLE_POLICY=1 with MODE=post puts Project Ethos in the JSON body so built-in policy may emit policy_alert (smoke test).
```
sudo ./spectral-mesh -metrics-addr :9090                      # terminal 1
./scripts/flood_https.sh                                      # terminal 2 — default GET flood
MODE=post COUNT=30 ./scripts/flood_https.sh
MODE=post TRIGGER_SAMPLE_POLICY=1 COUNT=10 ./scripts/flood_https.sh
```

If `curl: (60) SSL certificate problem`: install OS CA data (`sudo dnf install -y ca-certificates` on Fedora/RHEL), or set `CURL_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt`, or for lab-only use `CURL_INSECURE=1` (same as `curl -k`).
scripts/flood_openai.sh runs many parallel curl calls to OpenAI’s Chat Completions endpoint (api.openai.com), which drives real SSL_write traffic through OpenSSL (good for exercising the sensor and metrics). This uses your API key and costs money; you may hit HTTP 429 if CONCURRENCY is too high.
```
# Terminal 1 — sensor + metrics
sudo ./spectral-mesh -metrics-addr :9090

# Terminal 2 — flood (default 50 requests, concurrency 5)
export OPENAI_API_KEY=sk-...
./scripts/flood_openai.sh

# Or: put the key in a file (first line only), then:
# OPENAI_API_KEY_FILE="${HOME}/.openai_api_key" ./scripts/flood_openai.sh

# Heavier load
COUNT=200 CONCURRENCY=15 ./scripts/flood_openai.sh
```

Tune `MODEL` and `MAX_TOKENS` (see script header) to balance cost vs. response size.
| Script | Purpose |
|---|---|
| `scripts/simulate_mesh.sh` | Outbound HTTPS via curl (OpenSSL) so spectral-mesh can observe `SSL_write` on Linux. Optional `TRIGGER_POLICY_MATCH=1` includes `Project Ethos` in the POST body for built-in policy smoke tests. |
| `scripts/simulate_mesh_ingest.sh` | `POST /v1/ingest/chunk` with base64 payloads when mesh runs with `-capture-ingest-addr`. Optional `SPLIT_CHUNK=1` splits `Project Eth` / `os` across two posts (rolling-buffer demo). |
| `scripts/simulate_mesh_grafana.sh` | Calls simulate_mesh_ingest.sh with higher counts + optional split-chunk burst for monitoring/ Grafana; optional `LINUX_EBPF_SIM=1` on Linux also runs a short simulate_mesh.sh. |
| `scripts/simulate_capture_demo.sh` | Runs `go run ./cmd/spectral-capture-demo` (split chunks via ingest API). |
| `scripts/simulate_edge.sh` | HTTP POSTs to spectral-edge (same policy / `policy_alert` shape as a TLS-terminated path). Requires spectral-edge listening (e.g. `make edge && ./spectral-edge -listen 127.0.0.1:8080`). |
| `scripts/simulate_edge_scan.sh` | `POST /v1/scan` only — moves `spectral_edge_http_request_duration_seconds{handler="scan"}` without incrementing `spectral_edge_http_requests_total`. |
| `scripts/load_edge_smoke.sh` | Load against spectral-edge (defaults to `/v1/scan`); uses hey if installed, else parallel curl (`xargs -P`, same idea as simulate_edge.sh). |
| `scripts/simulate_edge_grafana.sh` | Proxy + `/v1/scan` traffic so Grafana "Spectral edge" panels (throughput, alerts, handler latency) move; see script header. |
| `scripts/build_spectral_edge_release.sh` | Build static release tarballs (same artifacts as release-edge.yml). |
| `scripts/flood_https.sh` | Higher-volume HTTPS flood (see HTTPS flood script). |
| `scripts/flood_openai.sh` | OpenAI API traffic (see OpenAI flood script). |
Package-level tests live next to code (internal/*, cmd/spectral-edge, root *_test.go). Heavier simulations sit under tests/. For a file-by-file map of spectral-edge tests, shadow-mode coverage, make test vs CI, and BPF prerequisites, see tests/README.md.
Built as a founder-side project for kernel-adjacent security experiments.