a-h/ansible-mongodb-cluster

## Deploying a MongoDB cluster with Ansible

  • Tested with Ansible 1.9.4
  • Expects CentOS/RHEL 7 hosts

### Data Replication

*(Figure: a replica set with an active master and passive slaves replicating from it.)*

Data redundancy is achieved in MongoDB via replica sets. As the figure above shows, a single replica set consists of a replication master (active) and several replication slaves (passive). All database operations such as inserts, updates, and deletes happen on the replication master, which replicates the data to the slave nodes. `mongod` is the process responsible for all database activity as well as for replication. The recommended minimum is three slave servers.
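The playbooks in this repository automate replica set setup, but for illustration the underlying step is a single `rs.initiate()` call from the mongo shell. A minimal sketch, using the host names from this README (the set name matches the `mongo_replication` set shown in the verification output below):

```js
// Run once against the intended master, e.g.: mongo --host mongo1 --port 27017
// Illustrative only -- the Ansible playbook performs the equivalent step.
rs.initiate({
  _id: "mongo_replication",          // replica set name
  members: [
    { _id: 0, host: "mongo1:27017" }, // initial master
    { _id: 1, host: "mongo2:27017" }, // slave
    { _id: 2, host: "mongo3:27017" }  // slave
  ]
})
```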

### Deploying MongoDB with Ansible

#### Prerequisites

Edit the group_vars/all file to set the variables described below.

  • Use the provided Vagrant file with VirtualBox to create servers to host the cluster and edit your hosts file to include your new servers, e.g.:

      10.0.0.101      mongo1
      10.0.0.102      mongo2
      10.0.0.103      mongo3
      10.0.0.104      mongo4
      10.0.0.105      mongo5
    
  • If you decide to use some other virtual machines, update the name of the ethernet adaptor (iface variable) in the /group_vars/all file and ensure that ports 22 and 27017 are accessible.

    enp0s8 # the interface to be used for all communication.

  • The default directory for storing data is /data; change it if required and make sure it has sufficient space (10G is recommended).

  • The secret file at /roles/mongod_replicaset/secret should be replaced with a custom secret generated by the following command (see https://docs.mongodb.org/manual/tutorial/generate-key-file/):

    openssl rand -base64 741 > secret
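Putting the prerequisites together, group_vars/all should contain settings along these lines. This is an illustrative sketch: `iface` is named in this README, but the other variable names are assumptions, so check the file shipped with the repository for the exact keys:

```yaml
# group_vars/all -- cluster-wide settings (variable names other than
# iface are illustrative; verify against the provided file)
iface: enp0s8                  # interface used for all cluster communication
mongod_port: 27017             # port each mongod listens on (matches the inventory)
mongod_datadir_prefix: /data   # data directory; ensure ~10G free
```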

#### Deployment Example

The inventory file looks as follows:

[mongo_servers]
mongo1 mongod_port=27017
mongo2 mongod_port=27017
mongo3 mongod_port=27017
mongo4 mongod_port=27017
mongo5 mongod_port=27017

[mongod_primary]
mongo1 mongod_port=27017

[mongod_slaves]
mongo2 mongod_port=27017
mongo3 mongod_port=27017
mongo4 mongod_port=27017

[mongod_arbiters]
mongo5

Build the site with the following command:

ansible-playbook -i hosts 01_create_cluster.yml -u root -k

### Verifying the Deployment

Once configuration and deployment have completed, we can check replica set availability by connecting to the primary node:

    mongo --host mongo1 --port 27017

Once connected, issue the following commands to query the status of the replica set; you should see output similar to this:

mongo_replication:PRIMARY> use admin
switched to db admin
mongo_replication:PRIMARY> db.auth("admin", "123456")
1
mongo_replication:PRIMARY> rs.status()
{
	"set" : "mongo_replication",
	"date" : ISODate("2015-10-21T19:43:54.932Z"),
	"myState" : 1,
	"members" : [
		{
			"_id" : 0,
			"name" : "mongo1:27017",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 100,
			"optime" : Timestamp(1445456545, 1),
			"optimeDate" : ISODate("2015-10-21T19:42:25Z"),
			"electionTime" : Timestamp(1445456537, 2),
			"electionDate" : ISODate("2015-10-21T19:42:17Z"),
			"configVersion" : 5,
			"self" : true
		},
		{
			"_id" : 1,
			"name" : "mongo2:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 96,
			"optime" : Timestamp(1445456545, 1),
			"optimeDate" : ISODate("2015-10-21T19:42:25Z"),
			"lastHeartbeat" : ISODate("2015-10-21T19:43:53.716Z"),
			"lastHeartbeatRecv" : ISODate("2015-10-21T19:43:54.473Z"),
			"pingMs" : 1,
			"lastHeartbeatMessage" : "could not find member to sync from",
			"configVersion" : 5
		},
		{
			"_id" : 2,
			"name" : "mongo3:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 94,
			"optime" : Timestamp(1445456545, 1),
			"optimeDate" : ISODate("2015-10-21T19:42:25Z"),
			"lastHeartbeat" : ISODate("2015-10-21T19:43:53.732Z"),
			"lastHeartbeatRecv" : ISODate("2015-10-21T19:43:54.906Z"),
			"pingMs" : 2,
			"lastHeartbeatMessage" : "could not find member to sync from",
			"configVersion" : 5
		},
		{
			"_id" : 3,
			"name" : "mongo4:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 92,
			"optime" : Timestamp(1445456545, 1),
			"optimeDate" : ISODate("2015-10-21T19:42:25Z"),
			"lastHeartbeat" : ISODate("2015-10-21T19:43:53.732Z"),
			"lastHeartbeatRecv" : ISODate("2015-10-21T19:43:53.184Z"),
			"pingMs" : 3,
			"syncingTo" : "mongo3:27017",
			"configVersion" : 5
		},
		{
			"_id" : 4,
			"name" : "mongo5:27017",
			"health" : 1,
			"state" : 7,
			"stateStr" : "ARBITER",
			"uptime" : 89,
			"lastHeartbeat" : ISODate("2015-10-21T19:43:53.732Z"),
			"lastHeartbeatRecv" : ISODate("2015-10-21T19:43:53.840Z"),
			"pingMs" : 0,
			"configVersion" : 5
		}
	],
	"ok" : 1
}
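The same status information can be checked from a script. A minimal sketch that counts healthy members in saved `rs.status()` output; the `mongo` invocation in the comment reuses the credentials from the example above, and the abridged sample data stands in for a live capture:

```shell
#!/bin/sh
# In practice, capture the status first, e.g.:
#   mongo --host mongo1 --port 27017 -u admin -p 123456 \
#     --authenticationDatabase admin --quiet \
#     --eval 'printjson(rs.status())' > rs_status.txt
# Abridged sample output stands in for a live capture here:
cat > rs_status.txt <<'EOF'
{ "members" : [
  { "_id" : 0, "health" : 1, "stateStr" : "PRIMARY" },
  { "_id" : 1, "health" : 1, "stateStr" : "SECONDARY" },
  { "_id" : 2, "health" : 1, "stateStr" : "SECONDARY" },
  { "_id" : 3, "health" : 1, "stateStr" : "SECONDARY" },
  { "_id" : 4, "health" : 1, "stateStr" : "ARBITER" } ] }
EOF
# Count members reporting health: 1 versus the total member count.
healthy=$(grep -c '"health" : 1' rs_status.txt)
total=$(grep -c '"_id" :' rs_status.txt)
echo "$healthy/$total members healthy"
```

With all five nodes up this prints `5/5 members healthy`; a lower first number points at the unhealthy member entries in the status output.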

### Scaling the Cluster


To add a new node to the existing MongoDB Cluster, modify the inventory file (/hosts) to add a new server into the mongo_servers section and either the mongod_slaves or mongod_arbiters section.
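For example, to add a hypothetical sixth node `mongo6` as a new slave, the relevant inventory sections would become:

```ini
[mongo_servers]
mongo1 mongod_port=27017
# ... existing entries mongo2-mongo5 ...
mongo6 mongod_port=27017

[mongod_slaves]
mongo2 mongod_port=27017
mongo3 mongod_port=27017
mongo4 mongod_port=27017
mongo6 mongod_port=27017
```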

ansible-playbook -i hosts 02_update_cluster.yml -u root -k

A new server can be created by updating the Vagrant file and running `vagrant up`; remember to update your /etc/hosts file to include the new server.

### Verification

The newly added node can be easily verified by checking the replication status and seeing the data being copied to the newly added node.

### Serverspec

Verify the servers using Serverspec via ansible_spec:

  $ gem install ansible_spec
  $ rake -T
  rake serverspec:common
  rake serverspec:mongod
  $ rake serverspec:mongod

### About

Creating a MongoDB cluster using Ansible. Based on Ansible team examples, but without the additional complexity of sharding.
