A distributed ephemeral messaging system built on the BEAM (Erlang Virtual Machine), written in Gleam.
Each message is a separate OTP Actor process with a TTL (time-to-live). When the TTL expires, the process dies and the data vanishes — no database, no persistence, no trace. Multiple instances discover each other via Distributed Erlang.
```
  Instance A (port 4000)          Instance B (port 4001)
┌─────────────────────┐         ┌─────────────────────┐
│    NodeRegistry     │◄───────►│    NodeRegistry     │
│  ┌─────────────┐    │         │  ┌─────────────┐    │
│  │ Msg "Hello" │    │         │  │ Msg "World" │    │
│  │ TTL: 30s    │    │         │  │ TTL: 60s    │    │
│  └─────────────┘    │         │  └─────────────┘    │
│  ┌─────────────┐    │         └─────────────────────┘
│  │ Msg "Bye"   │    │
│  │ TTL: 5s  ✗  │    │  ← Process dies, data gone
│  └─────────────┘    │
└─────────────────────┘
```
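The per-message lifecycle above can be sketched in plain Erlang (a hypothetical illustration of the idea, not the project's actual `message_actor` module):

```erlang
-module(ephemeral_msg).
-export([start/2, read/1]).

%% Spawn a process that holds Content and exits after TtlMs milliseconds.
start(Content, TtlMs) ->
    spawn(fun() -> loop(Content, TtlMs) end).

loop(Content, TtlMs) ->
    receive
        {read, From} ->
            From ! {msg, Content},
            loop(Content, TtlMs)  %% note: this sketch resets the TTL on each read
    after TtlMs ->
        ok  %% TTL expired: the process exits and the message is gone
    end.

read(Pid) ->
    Pid ! {read, self()},
    receive
        {msg, Content} -> {ok, Content}
    after 500 -> expired  %% no reply: the process is already dead
    end.
```

There is no cleanup step and nothing to garbage-collect externally: when the `after` clause fires, the process's heap (and with it the message) is reclaimed by the VM.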
BEAM enables zero-downtime code upgrades via OTP's `sys` module callbacks.
- `code_change/3` callback — when a new module version is loaded, OTP calls `code_change(OldVsn, State, Extra)` on the running actor. You transform the old state into the new state format.
- Two-version module coexistence — BEAM keeps two versions of each module in memory simultaneously (old + new). Processes already executing the old module continue safely; new calls start using the new code.
- Gleam actors via `gleam_otp` — Gleam's Actor abstraction sits on top of Erlang's `gen_server`. The `on_message` handler is called by the OTP framework, which means all OTP system messages (including code-upgrade signals) are handled transparently.
- Rolling upgrades — in a cluster, you can upgrade nodes one at a time. Distributed Erlang keeps them talking while you upgrade.
```erlang
%% Erlang gen_server style (what gleam_otp wraps):
code_change(OldVsn, OldState, _Extra) ->
    NewState = migrate_state(OldState),
    {ok, NewState}.
```

`gleam build` does the following:
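For completeness, the manual upgrade sequence on a live process uses OTP's `sys` module (a sketch; `Pid`, `my_server`, and `OldVsn` are placeholders):

```erlang
%% Upgrade one running gen_server-style process in place:
sys:suspend(Pid),                                  %% pause normal message handling
{module, my_server} = code:load_file(my_server),   %% load new code; the old version stays resident
ok = sys:change_code(Pid, my_server, OldVsn, []),  %% invokes code_change/3 with the old state
sys:resume(Pid).                                   %% resume with the new code and migrated state
```

Because the old module version stays resident until purged, any in-flight calls on the old code finish safely while suspended-then-resumed processes pick up the new version.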
- Parses `.gleam` source files
- Type-checks everything
- Code-generates Erlang (`.erl` files in `build/dev/erlang/spectre_link/`)
- Compiles Erlang → `.beam` bytecode files
- Bundles into an OTP application
The generated `.beam` files live in `build/dev/erlang/spectre_link/ebin/`.
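You can poke at the compiled output from a plain Erlang shell (a sketch; assumes `erl` is started from the project root, and uses `node_registry`, one of the project's modules):

```erlang
%% Put the compiled app on the code path:
true = code:add_patha("build/dev/erlang/spectre_link/ebin"),
%% Resolve a module to its .beam file; if the build succeeded this
%% returns the path to node_registry.beam under ebin/:
code:which(node_registry).
```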
```erlang
%% In an Erlang shell on node A:
(spectre_link_4000@localhost)> net_adm:ping('spectre_link_4001@localhost').
pong  %% ← nodes are now connected
```

Or non-interactively from the OS shell:

```sh
erl -name admin@localhost -setcookie secret -eval \
  "net_adm:ping('spectre_link_4000@localhost'), init:stop()."
```

Distributed Erlang uses epmd (the Erlang Port Mapper Daemon) to locate nodes. It starts automatically when a node goes distributed. Check which nodes are registered:

```sh
epmd -names
```

Run a single instance:

```sh
./run.sh
# Opens dashboard at http://localhost:4000
```

Run two instances:

```sh
# Terminal 1
./run.sh

# Terminal 2
./run_peer.sh
# Auto-discovers and connects to the port 4000 node
# Opens dashboard at http://localhost:4001
```

| Method | Path | Description |
|---|---|---|
| GET | `/` | Dashboard |
| GET | `/api/messages` | List live messages |
| POST | `/api/messages` | Create a message: `{"content":"...", "ttl": 30000}` |
| GET | `/api/nodes` | List connected Erlang nodes |
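One way to exercise the API from an Erlang shell is OTP's built-in `httpc` client (a sketch; assumes an instance is running on port 4000):

```erlang
%% Start the built-in HTTP client:
{ok, _} = application:ensure_all_started(inets),

%% Create a message with a 30-second TTL:
{ok, {_Status, _Headers, _Body}} =
    httpc:request(post,
                  {"http://localhost:4000/api/messages",
                   [], "application/json",
                   "{\"content\":\"hello\",\"ttl\":30000}"},
                  [], []),

%% List the messages that are still alive:
{ok, {_, _, Json}} =
    httpc:request(get, {"http://localhost:4000/api/messages", []}, [], []).
```

Repeating the GET after the TTL elapses should no longer include the message, since the actor holding it has exited.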
```
spectre_link (main)
├── port_scanner      - FFI: find free TCP port
├── node_registry     - OTP Actor: tracks message actors
│   └── message_actor - OTP Actor: one per message, dies on TTL
├── mesh_discovery    - FFI: distributed Erlang node management
└── web_server        - mist HTTP server with live dashboard
```
- Gleam 1.16.0
- Erlang/OTP 25
- `mist` — pure Gleam HTTP server
- `gleam_otp` — OTP actors/supervisors
- `gleam_erlang` — Erlang interop
- `gleam_json` — JSON encoding/decoding