EventSource messaging middleware with file persistence. Alpha

EventSource Hub

Lightweight HTTP message queue using EventSource with file persistence.

Ideas based on an earlier project, and implemented in Scala. There's another similar and actively developed project in C++ called ssehub.


Purpose

To enable low-friction event sourcing.

What it does

Receives events via POST, retains them, and publishes them to all current subscribers.

See/comment on the Google Docs diagram here

What to use it for

Event Sourcing. You push events, and when your clients restart they can read either their input or their own output. You then don't need a complex relational database: all your state lives reproducibly in memory, and you can rebuild it with scan or scanLeft.
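A minimal sketch of the scanLeft rebuild, assuming a hypothetical deposit/withdraw event encoding (in practice the events would come from a channel's .tsv file):

```scala
// Hypothetical event log; real events would be replayed from events/<name>.tsv.
val events = List("deposit:10", "deposit:5", "withdraw:3")

// scanLeft folds each event into the state and keeps every intermediate state,
// so history can be replayed up to any point in time.
val states = events.scanLeft(0) { (balance, event) =>
  event.split(":") match {
    case Array("deposit", n)  => balance + n.toInt
    case Array("withdraw", n) => balance - n.toInt
  }
}
// states == List(0, 10, 15, 12); the current state is states.last
```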

You get both batch and reactive mode by having access to a finite historical file and an infinite update stream.

How to use it

Client libraries

(typical) Event Sourcing usage

event stream = historical events + live events

You can also load the historical data in batch for performance reasons, but that is outside the scope of this document.
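The composition above can be sketched with plain iterators; here both sides are stubbed in memory, whereas in practice `historical` would be the parsed channel file and `live` the EventSource subscription:

```scala
// Finite history (e.g. read from events/a-channel.tsv).
val historical = Iterator("event-1", "event-2")

// Infinite live tail (e.g. from the EventSource connection); stubbed as finite here.
val live = Iterator("event-3")

// Consumers see one seamless stream: batch catch-up first, then live updates.
val fullStream = historical ++ live
```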

Run the server

To run with Docker, with a volume mounted at the events directory (files are .tsv):

$ docker pull scalawilliam/eventsource-hub
$ mkdir -p events
$ docker run -v "$PWD/events:/opt/docker/events" -p 9000:9000 scalawilliam/eventsource-hub

Query infinite stream of new events

Channel names must match pattern [A-Za-z0-9_-]{4,64}.

Each distinct channel name maps to a different channel, and each is backed by a different storage file events/<name>.tsv.
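A small sketch of validating channel names against that pattern on the client side (the helper name is illustrative, not part of the server API):

```scala
// Mirrors the server's channel name pattern: 4 to 64 word characters or hyphens.
def isValidChannelName(name: String): Boolean =
  name.matches("[A-Za-z0-9_-]{4,64}")
```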

$ curl -H 'Accept: text/event-stream' -i http://localhost:9000/a-channel
HTTP/1.1 200 OK
Content-type: text/event-stream

event: <event>
id: <id>
data: <data>
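The frame above can be decoded with a few lines of Scala. This is only a sketch for the simple single-line "field: value" frames shown here; a real client should follow the full EventSource specification (comments, multi-line data, retry fields):

```scala
// Parse one text/event-stream frame into a field -> value map.
def parseFrame(frame: String): Map[String, String] =
  frame.linesIterator
    .filter(_.contains(": "))
    .map { line =>
      val Array(field, value) = line.split(": ", 2)
      field -> value
    }
    .toMap

val fields = parseFrame("event: type\nid: 2017-04-05T11:40:10.793Z\ndata: Some data")
// fields("event") == "type", fields("data") == "Some data"
```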

Post a new event

$ echo Some data | curl -d @- 'http://localhost:9000/a-channel?event=type'
HTTP/1.1 201 Created


This will yield an entry in the stream such as:

event: type
id: 2017-04-05T11:40:10.793Z
data: Some data

The query parameter event is optional.

By default, the ID is the current ISO 8601 timestamp in UTC. You may override it by specifying the id query parameter. The ID is used as the Last-Event-ID index, and resuming from the last received event is supported as per the EventSource specification. IDs are not necessarily unique.
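A client generating its own IDs in the same default shape could use java.time (a sketch; any ISO 8601 UTC timestamp works):

```scala
import java.time.Instant

// Instant.toString produces an ISO 8601 UTC timestamp,
// the same shape as the server's default IDs, e.g. 2017-04-05T11:40:10.793Z.
val defaultId = Instant.now().toString
```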

Behaviour is undefined if you send binary data. Behaviour is also undefined if you send a multi-line request body.

Events are ordered, and the application has a single writer per storage file.

Retrieve past events

$ curl -i http://localhost:9000/a-channel
HTTP/1.1 200 OK
Content-type: text/tab-separated-values
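Decoding the returned rows is plain TSV splitting. The exact column layout is defined by the server's storage format, so this sketch only demonstrates the generic decoding:

```scala
// Split a text/tab-separated-values body into rows of columns.
// The -1 limit keeps trailing empty columns.
def parseTsv(body: String): List[List[String]] =
  body.linesIterator.map(_.split("\t", -1).toList).toList
```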


Event payload

You can push NDJSON, TSV, CSV or any other plain text.

This is especially well suited to time series data.
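For example, a time series point can be encoded as a one-line NDJSON payload before POSTing it to a channel (the field names here are illustrative, not prescribed by the server):

```scala
// Encode one time series observation as a single NDJSON line.
def ndjsonPoint(timestamp: String, metric: String, value: Double): String =
  s"""{"timestamp":"$timestamp","metric":"$metric","value":$value}"""
```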

Access control

Not in scope. You can use an API gateway or nginx for that.

There's the ngx_http_auth_request_module, which you can use with the Play application. Example in pure nginx.


Performance

As of commit dda4a0c, if we run:

$ echo $(seq 1 200) > sample-file.txt
$ ab -k -n 20000000 -c 100 -p sample-file.txt http://localhost:9000/target

And then start a listener:

$ curl -s -H 'Accept: text/event-stream' -i http://localhost:9000/target | grep data | pv -l -a > /dev/null

The listener measures about 2,000 events/second. This can and will be improved, but it is already more than sufficient for many systems.

Technical choices

I chose this stack because of my experience and familiarity with it.

Other notes

To regenerate TOC we use markdown-toc:

$ npm install -g markdown-toc
$ markdown-toc -i README.md


Licence

  • Copyright 2017 Apt Elements
  • I think GNU AGPLv3 is what I would like, but it's not yet clear to me. I would like:
    • Any code modifications to be open sourced and preferably integrated.
    • Application to be usable as part of say a CloudFormation stack without having to reveal anything outside of the application.
    • Application to be usable commercially without restriction.