This repository has been archived by the owner on Feb 27, 2020. It is now read-only.

Commit

Merge pull request #59 from Metaswitch/more_link_fixes
[Reviewer: Graeme] Fix broken links until 'mkdocs build' runs cleanly
rkd-msw committed May 29, 2015
2 parents a3b7017 + e88b060 commit 02d88fa
Showing 2 changed files with 7 additions and 7 deletions.
2 changes: 1 addition & 1 deletion docs/Clearwater_DNS_Usage.md
@@ -7,7 +7,7 @@ This document describes
* Clearwater's DNS strategy and requirements
* how to configure [AWS Route 53](http://aws.amazon.com/route53/) and [BIND](https://www.isc.org/downloads/bind/) to meet these.

- DNS is also used as part of the [ENUM](http://tools.ietf.org/rfc/rfc6116.txt) system for mapping E.164 numbers to SIP URIs. This isn't discussed in this document - instead see the separate [ENUM](enum.md) document.
+ DNS is also used as part of the [ENUM](http://tools.ietf.org/rfc/rfc6116.txt) system for mapping E.164 numbers to SIP URIs. This isn't discussed in this document - instead see the separate [ENUM](ENUM.md) document.

*If you are installing an All-in-One Clearwater node, you do not need any DNS records and can ignore the rest of this page.*

12 changes: 6 additions & 6 deletions docs/Clearwater_Elastic_Scaling.md
@@ -1,6 +1,6 @@
The core Clearwater nodes have the ability to elastically scale; in other words, you can grow and shrink your deployment on demand, without disrupting calls or losing data.

- This page explains how to use this elastic scaling function when using a deployment created through the [automated](Automated Install.md) or [manual](Manual Install.md) install processes. Note that, although the instructions differ between the automated and manual processes, the underlying operations that will be performed on your deployment are the same - the automated process simply uses chef to drive this rather than issuing the commands manually.
+ This page explains how to use this elastic scaling function when using a deployment created through the [automated](Automated_Install.md) or [manual](Manual_Install.md) install processes. Note that, although the instructions differ between the automated and manual processes, the underlying operations that will be performed on your deployment are the same - the automated process simply uses chef to drive this rather than issuing the commands manually.

## Before scaling your deployment

@@ -20,15 +20,15 @@ Where the `<n>` values are how many nodes of each type you need. Once this comm

If you're scaling up your manual deployment, follow the process below.

- 1. Spin up new nodes, following the [standard install process](Manual Install.md).
+ 1. Spin up new nodes, following the [standard install process](Manual_Install.md).
2. On Sprout, Memento and Ralf nodes, update `/etc/clearwater/cluster_settings` to contain both a list of the old nodes (`servers=...`) and a (longer) list of the new nodes (`new_servers=...`) and then run `service <process> reload` to re-read this file.
3. On new Memento, Homestead and Homer nodes, follow the [instructions on the Cassandra website](http://www.datastax.com/documentation/cassandra/1.2/cassandra/operations/ops_add_node_to_cluster_t.html) to join the new nodes to the existing cluster.
4. On Sprout and Ralf nodes, update `/etc/chronos/chronos.conf` to contain a list of all the nodes (see [here](https://github.com/Metaswitch/chronos/blob/dev/doc/clustering.md) for details of how to do this) and then run `service chronos reload` to re-read this file.
5. On Sprout, Memento and Ralf nodes, run `service astaire reload` to start resynchronization.
6. On Sprout and Ralf nodes, run `service chronos resync` to start resynchronization of Chronos timers.
7. Update DNS to contain the new nodes.
- 8. On Sprout, Memento and Ralf nodes, wait until Astaire has resynchronized, either by running `service astaire wait-sync` or by polling over [SNMP](Clearwater SNMP Statistics.md).
- 9. On Sprout and Ralf nodes, wait until Chronos has resynchronized, either by running `service chronos wait-sync` or by polling over [SNMP](Clearwater SNMP Statistics.md).
+ 8. On Sprout, Memento and Ralf nodes, wait until Astaire has resynchronized, either by running `service astaire wait-sync` or by polling over [SNMP](Clearwater_SNMP_Statistics.md).
+ 9. On Sprout and Ralf nodes, wait until Chronos has resynchronized, either by running `service chronos wait-sync` or by polling over [SNMP](Clearwater_SNMP_Statistics.md).
10. On all nodes, update /etc/clearwater/cluster_settings to just contain the new list of nodes (`servers=...`) and then run `service <process> reload` to re-read this file.
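Step 2 of the scale-up process above can be illustrated with a sketch of what `/etc/clearwater/cluster_settings` might look like mid-scale-up. The IP addresses and the memcached port are assumptions for illustration, not values from this commit:

```ini
# /etc/clearwater/cluster_settings during scale-up (illustrative only)
# servers= lists the old cluster; new_servers= lists the target cluster,
# here the same two nodes plus one newly spun-up node.
servers=10.0.0.1:11211,10.0.0.2:11211
new_servers=10.0.0.1:11211,10.0.0.2:11211,10.0.0.3:11211
```

Once resynchronization completes (step 10), the file would be collapsed back to a single `servers=` line containing the new node list.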

If you're scaling down your manual deployment, follow the process below.
@@ -39,8 +39,8 @@ If you're scaling down your manual deployment, follow the following process.
4. On Sprout and Ralf nodes, update `/etc/chronos/chronos.conf` to mark the nodes that are being scaled down as leaving (see [here](https://github.com/Metaswitch/chronos/blob/dev/doc/clustering.md) for details of how to do this) and then run `service chronos reload` to re-read this file.
5. On Sprout, Memento and Ralf nodes, run `service astaire reload` to start resynchronization.
6. On the Sprout and Ralf nodes that are staying in the Chronos cluster, run `service chronos resync` to start resynchronization of Chronos timers.
- 7. On Sprout, Memento and Ralf nodes, wait until Astaire has resynchronized, either by running `service astaire wait-sync` or by polling over [SNMP](Clearwater SNMP Statistics.md).
- 8. On Sprout and Ralf nodes, wait until Chronos has resynchronized, either by running `service chronos wait-sync` or by polling over [SNMP](Clearwater SNMP Statistics.md).
+ 7. On Sprout, Memento and Ralf nodes, wait until Astaire has resynchronized, either by running `service astaire wait-sync` or by polling over [SNMP](Clearwater_SNMP_Statistics.md).
+ 8. On Sprout and Ralf nodes, wait until Chronos has resynchronized, either by running `service chronos wait-sync` or by polling over [SNMP](Clearwater_SNMP_Statistics.md).
9. On Sprout, Memento and Ralf nodes, update /etc/clearwater/cluster_settings to just contain the new list of nodes (`servers=...`) and then run `service <process> reload` to re-read this file.
10. On the Sprout and Ralf nodes that are staying in the cluster, update `/etc/chronos/chronos.conf` so that it only contains entries for the staying nodes in the cluster and then run `service chronos reload` to re-read this file.
11. On the nodes that are about to be turned down, run `monit unmonitor -g <process> && service <process> quiesce|stop` to start the main process quiescing.
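Step 4 of the scale-down process marks departing nodes as leaving in `/etc/chronos/chronos.conf`. A hypothetical sketch of the relevant section is below; the exact key names come from the Chronos clustering documentation linked above, and the addresses are invented for illustration, not taken from this commit:

```ini
# /etc/chronos/chronos.conf cluster section during scale-down (illustrative)
[cluster]
localhost = 10.0.0.1
# Nodes staying in the cluster
node = 10.0.0.1
node = 10.0.0.2
# Node being scaled down, marked so its timers are redistributed
leaving = 10.0.0.3
```

After `service chronos resync` and `service chronos wait-sync` complete, the `leaving` entry is removed (step 10) and the departing node can be quiesced.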
