Data streams management using RabbitMQ. OF ARCHAEOLOGICAL INTEREST ONLY


## What is this?

"RabbitMQ Streams" is our name for the open source project developed with the BBC to power the "BBC Feeds Hub". It is a data streams management system.

For background information, see

You can think of RabbitMQ Streams as a distributed, robust, scalable, secure, user-friendly and manageable version of Unix pipes.
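The Unix-pipes analogy can be made concrete with a toy shell pipeline (the text and file name here are purely illustrative): a source emits data, intermediate stages route and transform it, and a destination receives it. Streams plays the same roles, but distributed and managed.

```shell
# A source (echo), two pipeline stages (grep routes, sed transforms),
# and a destination (a file) -- the shape that Streams generalises.
echo "feed entry: rabbits ahead" \
  | grep "rabbits" \
  | sed 's/rabbits/streams/' \
  > /tmp/destination.txt
```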

The basic logical building blocks are Sources, Destinations, and Pipelines. Pipelines are composed of PipelineComponents, which can route (e.g., based on regexp matches on Atom feed entries), merge, and transform the data in arbitrary ways.

Data arrive at sources, and leave from destinations, via Gateways, which talk various protocols to the outside world.

Gateways as well as pipeline components (jointly referred to as Plugins) can currently be written in Java and Python, and require little boilerplate (see e.g. `regexp_replace.py`). Support for other languages can be added straightforwardly by creating a Harness; plugins are essentially just programs following a simple protocol, with the harness taking care of much of the detail.
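As an illustration only (this is not the real harness API; the function and its signature are hypothetical), the core of a regexp-replace component boils down to a small transformation over a message body, with the harness doing everything else:

```python
import re

# Hypothetical sketch, NOT the actual plugin protocol: in Streams, the
# harness receives messages over AMQP, hands each body to the plugin,
# and publishes the transformed result downstream.
def regexp_replace(body: str, pattern: str, replacement: str) -> str:
    """Replace every match of `pattern` in a message body."""
    return re.sub(pattern, replacement, body)
```

For example, `regexp_replace("<title>Rabbits</title>", "Rabbits", "Streams")` returns `"<title>Streams</title>"`.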

The message wiring and plugin processes are managed by an Erlang/OTP application called the Orchestrator. This is in a sense the core of Streams.

## Current state

No longer being developed. May be interesting to look at though.

## Getting started

The Makefile has a number of targets useful for development.

`make setup` installs build dependencies and sets up a development environment. This is geared towards apt-get, but it's fairly easy to do the equivalent with, e.g., MacPorts.

We also need RabbitMQ and CouchDB; `make setup` builds these from source in `build/opt`. `make install-dev-debs` will just install build dependencies without building RabbitMQ and CouchDB.

`make all` builds everything. This currently relies on a Maven repository for the Java harness and plugins, which we'll make available.

`make create-fresh-accounts` will install a minimal configuration and create a user in RabbitMQ for Streams to use.

Descriptions of the model items (sources, gateways, etc.) are kept in CouchDB. `sbin/import_config.py DIR` can be used to import whole configurations at once; e.g., `sbin/import_config.py examples/showandtell_demo`.

`make run` will start RabbitMQ and CouchDB from the local builds, start the development code, and tail the logs for you.

`make listen-orchestrator start-orchestrator-nox` starts just the Streams orchestrator and tails its log in an xterm.
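Putting the targets above together, a typical first run (assuming an apt-get system and the example configuration shipped in the repo) looks like:

```shell
make setup                                       # build deps + dev environment
make all                                         # build harnesses and plugins
make create-fresh-accounts                       # minimal config + RabbitMQ user
sbin/import_config.py examples/showandtell_demo  # load an example configuration
make run                                         # start everything, tail the logs
```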

There is a more detailed "Getting started" guide in `doc/getting_started_dev.org`.