Lightweight HTTP message queue using EventSource with file persistence.
Based on ideas from https://plus.google.com/103489630517643950426/posts/RFhSAGMP4Em, and implemented in Scala. A similar, active project in C++ is ssehub.

The goal is to enable low-friction event sourcing.
## What it does
Receives events via POST, retains them, and publishes them to all current subscribers.
See/comment on the Google Docs diagram here
## What to use it for
Event sourcing. You push events, and when your clients restart they can read back either their input or their own output. You then don't need a complex relational database: all your state is reproducibly held in memory, and you can rebuild that state by replaying the events.
You get both batch and reactive modes, through access to a finite historical file and an infinite update stream.
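As a sketch of the rebuild step, the fold below replays a TSV history in the id/event/data shape served for past events and derives a running balance. The sample records and the "signed amount" interpretation of the data field are illustrative assumptions, not part of the server; in practice the input would come from a `curl` against a channel.

```shell
# Illustrative only: replay a channel's TSV history (id, event, data)
# and fold it into state. Here "state" is a balance and data is a signed amount.
printf '2017-04-05T11:40:10.793Z\tdeposit\t100\n2017-04-05T11:41:00.000Z\twithdraw\t-30\n' \
  | awk -F'\t' '{ balance += $3 } END { print balance }'
```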
## How to use it
Use an EventSource client library that supports recovery, for example:

- eventsource for Node.js.
- Alpakka's Server-sent Events Connector.
- If other good libraries with recovery exist, make a PR.
- Or just plain curl if you want to access your data directly, as shown below.
## (typical) Event Sourcing usage
`event stream = historical events + live events`
You can also load the historical data in batch for performance reasons, but that is outside the scope of this document.
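One way to stitch the two together is a sketch like the one below, assuming the server honours the standard `Last-Event-ID` header (see the resuming note in the posting section): take the newest ID from a history snapshot, then subscribe from that point. The snapshot is simulated here with printf; in practice it would come from `curl http://localhost:9000/a-channel`.

```shell
# Extract the newest event ID from a TSV history snapshot (id, event, data).
# A live subscription could then resume from it, e.g. (hypothetical):
#   curl -H 'Accept: text/event-stream' -H "Last-Event-ID: $last_id" http://localhost:9000/a-channel
last_id=$(printf '2017-04-05T11:40:10.793Z\ttype\tfirst\n2017-04-05T11:41:00.000Z\ttype\tsecond\n' \
  | tail -n 1 | cut -f1)
echo "$last_id"
```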
## Run the server
```
$ docker pull scalawilliam/eventsource-hub
$ mkdir -p events
$ docker run -v "$(pwd)/events:/opt/docker/events" -p 9000:9000 scalawilliam/eventsource-hub
```
## Query infinite stream of new events
Channel names must match a fixed pattern. Each distinct channel name maps to a different channel, and each channel is backed by a separate storage file.
```
$ curl -H 'Accept: text/event-stream' -i http://localhost:9000/a-channel
HTTP/1.1 200 OK
Content-Type: text/event-stream

event: <event>
id: <id>
data: <data>

...
```
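To consume payloads from such a stream programmatically, the `data:` lines can be filtered out of the SSE framing. The sketch below feeds a sample frame via printf; in practice the curl subscription above would be the producer.

```shell
# Strip SSE framing and keep only the payloads ("data: ..." lines).
printf 'event: type\nid: 2017-04-05T11:40:10.793Z\ndata: Some data\n\n' \
  | grep '^data: ' | cut -d' ' -f2-
```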
## Post a new event
```
$ echo Some data | curl -i -d @- 'http://localhost:9000/a-channel?event=type'
HTTP/1.1 201 Created
...

2017-04-05T11:40:10.793Z
```
This will yield an entry in the stream such as:
```
event: type
id: 2017-04-05T11:40:10.793Z
data: Some data
```
The query parameter `event` is optional.

The ID defaults to the current ISO 8601 timestamp in UTC. You may override it by specifying the `id` query parameter. It is used as the `Last-Event-ID` index, and we support resuming from the last event as per the SSE specification. IDs are not necessarily unique.
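The default ID format can be reproduced locally; a sketch using GNU date (the `%N` fractional-seconds specifier is a GNU extension, so BSD/macOS date behaves differently):

```shell
# Approximate the server's default event ID: ISO 8601, UTC, millisecond precision.
# Requires GNU date (%N is a GNU extension).
date -u +"%Y-%m-%dT%H:%M:%S.%3NZ"
```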
Behaviour is undefined if you send binary data, or anything more complex than a multi-line plain-text request body.
Events are ordered, and the application has a single writer per storage file.
## Retrieve past events
```
$ curl -i http://localhost:9000/a-channel
HTTP/1.1 200 OK
Content-Type: text/tab-separated-values

<id><TAB><event><TAB><data>
<id><TAB><event><TAB><data>
...
<id><TAB><event><TAB><data>
```
You can push NDJSON, TSV, CSV or any other plain text.
This makes it especially well suited to time-series data.
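For instance, a time-series producer might emit one NDJSON record per sample. In practice each record would be piped to something like `curl -d @- http://localhost:9000/readings` (the channel name and record shape here are hypothetical); the sketch below only prints the records so the shape is visible.

```shell
# Hypothetical producer: emit NDJSON records for a time series.
# In practice, pipe each record to: curl -d @- http://localhost:9000/readings
for i in 1 2 3; do
  printf '{"seq":%d,"value":%d}\n' "$i" "$((i * 10))"
done
```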
## Performance

As of commit dda4a0c, if we run:
```
$ echo $(seq 1 200) > sample-file.txt
$ ab -k -n 20000000 -c 100 -p sample-file.txt http://localhost:9000/target
```
And then start a listener:
```
$ curl -s -H 'Accept: text/event-stream' -i http://localhost:9000/target | grep data | pv -l -a > /dev/null
[2.02k/s]
```
That is roughly 2,000 events per second. This can and will be improved, but it is already more than sufficient for many systems.
I chose this stack because of my experience and familiarity with it.
- Scala and the Play Framework, because I'm experienced with them. See ActionFPS and Git Watch, which also use EventSource.
- Build tool: sbt, the default for Play, which also supports Docker packaging.
- Docker lets us easily distribute the application.
- Travis CI for automated building & publishing of Docker artifacts, because it's the de facto standard for open-source projects.
- This repository is based on https://github.com/ScalaWilliam/play-docker-hub-example.
- I'm taking an approach akin to bug-driven development: write a specification first, then file bugs until the code meets it.
To regenerate the TOC we use markdown-toc:
```
$ npm install -g markdown-toc
$ markdown-toc -i README.md
```
- Copyright 2017 Apt Elements
- I think GNU AGPLv3 is what I would like, but it's not clear to me. I would like:
  - Any code modifications to be open sourced and preferably contributed back.
  - The application to be usable as part of, say, a CloudFormation stack without having to reveal anything outside the application itself.
  - The application to be usable commercially without restriction.