A while ago I decided to build a small cluster of Raspberry Pi boards. I've since upgraded to Pi 2 boards, and this repository is used for versioning design notes, configuration files and sundry.
I wanted something challenging to do in terms of distributed processing, and lacked dedicated hardware to do it. There's a lot to be learned even from simple, unsophisticated solutions, and virtual machines can only do so much.
The cluster consists of five nodes: a master and four slaves. The master acts as a gateway, DHCP and NFS server, and the slave nodes get their IP addresses and /srv/jobs directory from it.
All slave nodes are completely identical except for hostname and MAC address, so there is no need to log in and configure things manually on each node.
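Concretely, the master's side of this setup fits in a handful of lines. Here's a hedged sketch of the relevant dnsmasq and NFS bits -- the interface name, address range and MAC addresses below are placeholders, not the actual values in this repo's configuration files:

```
# /etc/dnsmasq.conf (sketch -- interface, range and MACs are assumptions)
interface=eth0
dhcp-range=192.168.1.50,192.168.1.100,12h
# Pin each slave to a fixed address by MAC so hostnames stay stable
dhcp-host=b8:27:eb:00:00:01,node1,192.168.1.51

# /etc/exports (sketch): share the jobs directory with the slaves
/srv/jobs 192.168.1.0/24(rw,sync,no_subtree_check)
```

Since all slaves boot from identical images, the per-node differences live entirely in lines like the dhcp-host one above.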
Here are a few more shots of the original version, with the 5-port PSU and the old Model B boards:
In retrospect I probably ought to have gone for longer USB cables and moved all of the cabling to the USB side (since it leaves the SD card slot clear), but I also need to be able to see the activity lights, and the Pi isn't exactly designed for easy stacking.
A larger cluster is certainly feasible, but 5 boards is as much as I can power with the PSU I have.
This is a partial list of the stuff I'm using (Amazon UK affiliate links):
- 5x Raspberry Pi 2 Model B, which replaced the Raspberry Pi Model B boards (duh!)
- 7x Class 10 Micro SD Cards (2 master cards, 5 for production), which replaced the Class 10 SD Cards
- 1x TP-Link 5-port Ethernet switch and some ancient cables I had lying around (need to build new stripped down ones)
- 5x 6 inch micro USB cables
- 1x 5 port USB PSU
- 1x ancient Bondi Blue iMac USB keyboard
- 1x Custom-printed rack case (see SCAD files)
As a base OS, I'm currently using the official Ubuntu 16.04 image for the Pi 2, which works much better than Raspbian for my purposes (nevertheless, the configuration files in this repo should work on both systems).
It's a bit ironic to do some kinds of processing on merely 5GB of aggregated RAM, but I'm interested in the algorithms themselves and don't plan on doing something silly like tackling the next Netflix Prize with this -- besides, running things on low-end hardware is often the only way to do proper optimization.
List of packages involved so far:
- etcd, which I'm now using to store (and distribute) configurations across nodes
- Docker, which ships with Ubuntu 14.04 and makes it a lot easier to build and tear down environments. Currently trying to get 1.7 to build so I can use swarm and other niceties.
- OpenVSwitch, which I'm using for playing around with network topologies
- Jupyter, which provides me with a nice web front-end and basic Python parallel computing.
- Spark, which has mostly replaced Disco for map/reduce jobs.
- Dash, a real-time status dashboard (rewritten in Go, available under the dashboard folder here, and still being worked on)
- A custom daemon that sends out a JSON-formatted multicast packet with system load, CPU usage and RAM statistics (written in raw C)
- ElasticSearch, which I'm using for storing metrics.
- Oracle JDK 8
- leiningen (which fetches Hazelcast and other dependencies for me, via this library)
- Nightcode as a development environment (LightTable doesn't run on ARM, and a lot of my hobby coding these days is actually done on an ODROID)
- distcc, for building binaries slightly faster
- dnsmasq, for DHCP and DNS service
Here's what the cluster dashboard looks like:
But isn't the Raspberry Pi slow?
Well spotted, young person. It was, and the Pi 2, despite being a marked improvement, isn't exactly a supercomputer. But it's also cheap, and beggars can't be choosers.
Nevertheless, the current configuration provides me with 20 ARMv7 cores clocked at 1GHz, and that's nothing to sneeze at.
But I'm open to sponsorship, so that I can upgrade this to at least twice as many boards...