
Commit

Merge pull request #35 from Metaswitch/etcd
[Reviewer: Andy] DO NOT MERGE - Docs for automatic clustering and config sharing
eleanor-merry committed Jun 3, 2015
2 parents 02d88fa + 964f26f commit c5a528b
Showing 24 changed files with 1,162 additions and 575 deletions.
17 changes: 17 additions & 0 deletions docs/Automatic_Clustering_Config_Sharing.md
@@ -0,0 +1,17 @@
# Clearwater Automatic Clustering and Configuration Sharing

Clearwater has a feature that allows nodes in a deployment to automatically form the correct clusters and share configuration with each other. This makes deployments much easier to manage. For example:

* It is easy to add new nodes to an existing deployment. The new nodes will automatically join the correct clusters according to their node type, without any loss of service. The nodes will also learn the majority of their config from the nodes already in the deployment.
* Similarly, removing nodes from a deployment is straightforward. The leaving nodes will leave their clusters without impacting service.
* It makes it much easier to modify configuration that is shared across all nodes in the deployment.

This feature uses [etcd](https://github.com/coreos/etcd) as a decentralized data store, a `clearwater-cluster-manager` service to handle automatic clustering, and a `clearwater-config-manager` service to handle configuration sharing.
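For illustration (assuming the standard `etcdctl` v2 tooling is on the node's path; the exact tooling shipped with `clearwater-etcd` may differ), you can inspect the underlying store directly:

    # List the members of the etcd cluster backing this deployment
    etcdctl member list

    # Check that the cluster is healthy
    etcdctl cluster-health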

### Is my Deployment Using Automatic Clustering and Configuration Sharing?

To tell if your deployment is using this feature, log onto one of the nodes in your deployment and run `dpkg --list | grep clearwater-etcd`. If this does not give any output, the feature is not in use.
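For example, on a node that has the feature installed, the check produces output along these lines (the version and description shown are illustrative):

    $ dpkg --list | grep clearwater-etcd
    ii  clearwater-etcd  1.0  all  <package description>

A node without the feature prints nothing.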

### Migrating to Automatic Clustering and Configuration Sharing

Deployments that are not using the feature may be migrated so they start using it. To perform this migration, follow these [instructions](Migrating_To_etcd).
28 changes: 28 additions & 0 deletions docs/Backups.md
@@ -210,3 +210,31 @@ This will:
- Run through all the lines on ellis without an owner and make sure
there is no orphaned data in homestead and homer, i.e. deleting the
simservs, IFC and digest for those lines.

## Shared Configuration

In addition to the data stored in ellis, homer, homestead and memento, a Clearwater deployment also has shared configuration that is [automatically shared between nodes](Automatic_Clustering_Config_Sharing.md). This is stored in a distributed database, and mirrored to files on the disk of each node.

### Backing Up

To backup the shared configuration:

* If you are in the middle of [modifying shared config](Modifying_Clearwater_settings.md), complete the process to apply the config change to all nodes.
* Log onto one of the sprout nodes in the deployment.
* Copy the following files to somewhere else for safekeeping (e.g. another directory on the node, or another node entirely); a sketch of this step follows the file list.

/etc/clearwater/shared_config
/etc/clearwater/bgcf.json
/etc/clearwater/enum.json
/etc/clearwater/s-cscf.json
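For example, a minimal sketch of the copy step (the backup directory name is illustrative; any safe location will do):

    mkdir -p ~/clearwater-config-backup
    cp /etc/clearwater/shared_config \
       /etc/clearwater/bgcf.json \
       /etc/clearwater/enum.json \
       /etc/clearwater/s-cscf.json \
       ~/clearwater-config-backup/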

### Restoring Configuration

To restore a previous backup, copy the four files listed above to `/etc/clearwater` on one of your sprout nodes. Then run the following commands on that node:

/usr/share/clearwater/clearwater-config-manager/scripts/upload_shared_config
/usr/share/clearwater/clearwater-config-manager/scripts/upload_bgcf_json
/usr/share/clearwater/clearwater-config-manager/scripts/upload_enum_json
/usr/share/clearwater/clearwater-config-manager/scripts/upload_scscf_json

Now log onto each node in turn and run `/usr/share/clearwater/clearwater-config-manager/scripts/apply_shared_config` to make the node download and act on the restored config.
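For example, a minimal sketch assuming SSH access from an administration machine (the hostnames are placeholders for your deployment's nodes):

    for node in sprout-1.example.com bono-1.example.com homer-1.example.com; do
        ssh "$node" sudo /usr/share/clearwater/clearwater-config-manager/scripts/apply_shared_config
    done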
12 changes: 2 additions & 10 deletions docs/CDF_Integration.md
@@ -33,26 +33,18 @@ Ralf implements the behavior specified in [RFC3588](http://www.ietf.org/rfc/rfc3588.txt)

### Configuring the billing realm

-To point Ralf at the billing DIAMETER realm, add the following line to `/etc/clearwater/config` on each Ralf node:
+To point Ralf at the billing DIAMETER realm, add the following line to `/etc/clearwater/shared_config` and follow [this process](Modifying_Clearwater_settings) to apply the change:

billing_realm=<DIAMETER billing realm>

-Then restart Ralf to pick up the change:
-
-    sudo service ralf stop (allowing monit to restart Ralf)

### Selecting a specific CDF in the realm

_Note:_ Bono only has support for selecting CDF identities based on static configuration of a single identity. Other P-CSCFs may have support for load-balancing or enabling backup CDF identities.

-If you have a CDF set up to receive Rf billing messages from your deployment, you will need to modify the `/etc/clearwater/config` file on your Bono node to contain the following line:
+If you have a CDF set up to receive Rf billing messages from your deployment, you will need to modify the `/etc/clearwater/shared_config` file and follow [this process](Modifying_Clearwater_settings) to apply the change:

cdf_identity=<CDF DIAMETER Identity>

-Once you have done this, run the following command to cause Bono to pick up the changes.
-
-    sudo service bono quiesce (allowing monit to restart Bono)

## Restrictions

The very first release of Ralf, from the Counter-Strike release of Project Clearwater, does not generate Rf billing messages since the related changes to Sprout and Bono (to report billable events) were not enabled. This version was released to allow systems integrators to get a head start on spinning up and configuring Ralf nodes rather than having to wait for the next release.
