SwarmZilla 3000 Collaborative Project

Swarm3k - Friday, 28th October 2016 3.00PM UTC

We now have 3,195 nodes and counting!

Special Thanks

DigitalOcean

DigitalOcean gave us $250 in credits to provision nodes for this project.

Sematext

Sematext joins us as our official monitoring and logging provider.

Demonware

Demonware joins us with their 1,000 nodes.

Contribution Proposal

  1. Please create a pull request saying you'd like to contribute nodes. Please also include your full name, Twitter handle, and your company name.
  2. The date for the experiment will be announced once we have more than 3,000 nodes.
  3. Please note that the node specification for this run is 1 GB of RAM with 1 vCore. We're sorry, but 512 MB will not be enough for our testing this time.

What are the minimum requirements for a node?

  1. The minimum hardware spec is 1 GB of RAM and 1 vCore.
  2. For security reasons, we do not provision or operate your nodes directly. An engineer on your side will be responsible for provisioning nodes and joining the cluster using the provided token.
  3. The token will be provided in this room during the run: gitter.im/swarmzilla/swarm3k
  4. Last time, we ran the Swarm2K cluster for around 8 hours. You can expect this Swarm3K run to last around 8 hours too.
  5. Each node must have a public IPv4 address on its network interface. Floating IPs cannot be used to join this public cluster.
  6. Docker 1.12.3
  7. TCP port 2377 for cluster management
  8. TCP and UDP port 7946 for communication among nodes
  9. TCP and UDP port 4789 for the overlay network
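As a rough sketch, preparing a node for the requirements above might look like the following. This assumes Ubuntu with ufw; adapt the firewall commands to your distribution. The manager address and token below are placeholders, not real values — the actual token will only be shared in the Gitter room during the run.

```shell
# Open the Swarm ports listed above (ufw shown as an example).
ufw allow 2377/tcp   # cluster management
ufw allow 7946/tcp   # node-to-node communication
ufw allow 7946/udp
ufw allow 4789/tcp   # overlay network
ufw allow 4789/udp

# Join the cluster as a worker node.
# SWMTKN-1-xxxx and MANAGER_IP are placeholders for the real token
# and manager address provided during the run.
docker swarm join --token SWMTKN-1-xxxx MANAGER_IP:2377
```
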
| Name | Company | Nodes Expected to Contribute |
| --- | --- | --- |
| @chanwit | Suranaree University | 100 |
| @FlorianHeigl | my own boss | 10 |
| @jmaitrehenry | PetalMD | 50 |
| @everett_toews | Rackspace | 100 |
| @InetCloudArch | Internet Thailand | 500 |
| @squeaky_pl | n/a | 10 |
| @neverlock | Neverlock | 10 |
| @demonware | Demonware | 1000 |
| @sujaypillai | Jabil | 50 |
| @pilgrimstack | OVH | 500 |
| @ajeetsraina | Collabnix | 10 |
| @AorJoa | Aiyara Cluster | 10 |
| @f_soppelsa | Personal | 20 |
| @GroupSprint3r | SPRINT3r | 630 |
| @toughIQ | Personal | 30 |
| @mrnonaki | N/A | 10 |
| @zinuzoid | HotelQuickly | 200 |
| @_EthanHunt_ | N/A | 25 |
| @packethost | Packet | 100 |
| @ContainerizeT | ContainerizeThis: The Conference | 10 |
| @_pascalandy | FirePress | 10 |
| @lucjuggery | TRAXxs | 10 |
| @alexellisuk | Personal | 10 |
| @svega | Huli | 10 |
| @BretFisher | Myself :) | 20 |
| @voodootikigod | Emerging Technology Advisors | 100 |
| @AlexPostID | Personal | 20 |
| @gianarb | ThumpFlow | 50 |
| @Rucknar | Personal | 10 |
| @lherrerabenitez | Personal | 10 |
| @abhisak | Nipa Technology | 100 |
| @djalal | NexwayGroup | 30 |

Beginner's Guide

If you're an individual and this is your first time joining SwarmZilla, we encourage you to contribute no more than 50 nodes. Provisioning more than 50 nodes makes things complex.

If you're joining us for the second time (welcome back Heroes!!), feel free to contribute any number of nodes.

Goals

This is the 2nd collaborative project to form a huge Docker cluster in a distributed way, targeting 3,000 nodes.

We understand Docker, and we also understand that the community needs long-term versions of Docker. This test will be done on the stable 1.12.3 version of Docker to gather feedback and, of course, to help make the next stable versions even more stable!

  • Networking: We will create a large /20 subnet and try to assign as many IP addresses as possible to containers distributed across the nodes. We expect around 3,000 IP addresses to be assigned, with the workload application still working fine.
  • Routing Mesh: We will test the Routing Mesh feature of Docker 1.12.3.
  • For the routing mesh tests, the workload will be WordPress applications. We are still designing this.
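The networking test above could be sketched roughly as follows. The network name, subnet, and replica count are illustrative assumptions, not the final test plan.

```shell
# Create an attachable /20 overlay network; a /20 provides roughly
# 4,094 usable addresses, enough for ~3,000 containers.
# The name "swarm3k-net" and subnet are hypothetical.
docker network create --driver overlay --subnet 10.0.0.0/20 swarm3k-net

# Run the WordPress workload as a service on that network.
# Publishing port 80 exercises the routing mesh: the port is
# reachable on every node, regardless of where tasks run.
docker service create --name wordpress --network swarm3k-net \
  --replicas 3000 --publish 80:80 wordpress
```
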

Public results

All experimental results will be made publicly available for everyone to analyze, blog about, or even use as input for further development of your own commercial projects. Please feel free to use them. If you'd like to refer to the published data set, just link back to this project page.