This repository has been archived by the owner on Feb 27, 2020. It is now read-only.

Merge pull request #38 from Metaswitch/add_md_suffix
[Reviewer: Alex] Add .md suffix to internal links
rkd-msw committed May 11, 2015
2 parents 18e2db9 + 8b0c0b3 · commit c395825
Showing 39 changed files with 145 additions and 145 deletions.
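The change itself is mechanical: each relative internal link such as `[all-in-one image](All_in_one_Images)` gains a `.md` suffix, while external URLs and links that already carry the suffix are left alone. As a rough illustration — the commit doesn't record how the edits were made, so this script, its regex, and the non-recursive `docs/*.md` glob are all assumptions — a sweep like this could be done with:

```python
import re
from pathlib import Path

# Match [text](target) where the target is relative (no http/https scheme),
# has no fragment, and does not already end in ".md".
LINK = re.compile(r'\[([^\]]+)\]\((?!https?://)([^)#]+?)(?<!\.md)\)')

for path in Path("docs").glob("*.md"):
    text = path.read_text()
    fixed = LINK.sub(r'[\1](\2.md)', text)  # [text](Page) -> [text](Page.md)
    if fixed != text:
        path.write_text(fixed)
```

Note that the pattern deliberately skips absolute URLs, which matches the diff below: `http://` references are untouched.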
6 changes: 3 additions & 3 deletions docs/All_in_one_EC2_AMI_Installation.md
@@ -1,12 +1,12 @@
# All-in-one EC2 AMI Installation

-This pages describes how to launch and run an [all-in-one image](All_in_one_Images) in Amazon's EC2 environment.
+This pages describes how to launch and run an [all-in-one image](All_in_one_Images.md) in Amazon's EC2 environment.

## Launch Process

Project Clearwater's all-in-one node is already available as a pre-built AMI, which can be found in the Community AMIs list on the US East region of EC2. Launching this follows exactly the same process as for other EC2 AMIs.

-Before you launch the node, you will need an EC2 keypair, and a security group configured to provide access to the [required ports](Clearwater_IP_Port_Usage).
+Before you launch the node, you will need an EC2 keypair, and a security group configured to provide access to the [required ports](Clearwater_IP_Port_Usage.md).

To launch the node

@@ -22,4 +22,4 @@ On the last page, press "Launch", and wait for the node to be started by EC2.

Once the node has launched, you can SSH to it using the keypair you supplied at launch time, and username `ubuntu`.

-You can then try [making your first call](Making_your_first_call) and [running the live tests](Running_the_live_tests) - for these you will need the signup key, which is `secret`. You will probably want to change this to a more secure value - see ["Modifying Clearwater settings"](Modifying_Clearwater_settings) for how to do this.
+You can then try [making your first call](Making_your_first_call.md) and [running the live tests](Running_the_live_tests.md) - for these you will need the signup key, which is `secret`. You will probably want to change this to a more secure value - see ["Modifying Clearwater settings"](Modifying_Clearwater_settings.md) for how to do this.
6 changes: 3 additions & 3 deletions docs/All_in_one_Images.md
@@ -1,6 +1,6 @@
# All-in-one Images

-While Clearwater is designed to be massively horizontally scalable, it is also possible to install all Clearwater components on a single node. This makes installation much simpler, and is useful for familiarizing yourself with Clearwater before moving up to a larger-scale deployment using one of the [other installation methods](Installation_Instructions).
+While Clearwater is designed to be massively horizontally scalable, it is also possible to install all Clearwater components on a single node. This makes installation much simpler, and is useful for familiarizing yourself with Clearwater before moving up to a larger-scale deployment using one of the [other installation methods](Installation_Instructions.md).

This page describes the all-in-one images, their capabilities and restrictions and the installation options available.

@@ -31,8 +31,8 @@ The key restrictions of all-in-one images are

All-in-one images can be installed on EC2 or on your own virtualization platform, as long as it supports [OVF (Open Virtualization Format)](http://dmtf.org/standards/ovf).

-* To install on EC2, follow the [all-in-one EC2 AMI installation instructions](All_in_one_EC2_AMI_Installation).
-* To install on your own virtualization platform, follow the [all-in-one OVF installation instructions](All_in_one_OVF_Installation).
+* To install on EC2, follow the [all-in-one EC2 AMI installation instructions](All_in_one_EC2_AMI_Installation.md).
+* To install on your own virtualization platform, follow the [all-in-one OVF installation instructions](All_in_one_OVF_Installation.md).

## Manual Build

6 changes: 3 additions & 3 deletions docs/All_in_one_OVF_Installation.md
@@ -1,6 +1,6 @@
# All-in-one OVF Installation

-This pages describes how to install an [all-in-one image](All_in_one_Images) on your own virtualization platform using [OVF (Open Virtualization Format)](http://dmtf.org/standards/ovf).
+This pages describes how to install an [all-in-one image](All_in_one_Images.md) on your own virtualization platform using [OVF (Open Virtualization Format)](http://dmtf.org/standards/ovf).

## Supported Platforms

@@ -33,7 +33,7 @@ If you attach to the console, you should see an Ubuntu loading screen and then b
The OVF provides 3 network services.

* SSH - username is `ubuntu` and password is `cw-aio`
-* HTTP to ellis for subscriber management - sign-up code is `secret`. You will probably want to change this to a more secure value - see ["Modifying Clearwater settings"](Modifying_Clearwater_settings) for how to do this.
+* HTTP to ellis for subscriber management - sign-up code is `secret`. You will probably want to change this to a more secure value - see ["Modifying Clearwater settings"](Modifying_Clearwater_settings.md) for how to do this.
* SIP to bono for call signaling - credentials are provisioned through ellis.

How these network services are exposed can vary depending on the capabilities of the platform.
@@ -44,4 +44,4 @@ How these network services are exposed can vary depending on the capabilities of

* VMware ESXi runs the host as normal on the network, so you can connect to it directly. To find out its IP address, log in over the console and type `hostname -I`. To access ellis, just point your browser at this IP address. To register over SIP, you'll need to configure an outbound proxy for this IP address.

-Once you've successfully connected to ellis, try [making your first call](Making_your_first_call) - just remember to configure the SIP outbound proxy as discussed above.
+Once you've successfully connected to ellis, try [making your first call](Making_your_first_call.md) - just remember to configure the SIP outbound proxy as discussed above.
2 changes: 1 addition & 1 deletion docs/Application_Server_Guide.md
@@ -1,7 +1,7 @@
Application Server Guide
========================

-You can add new call features or functionality to calls by adding an application server. Clearwater supports application servers through the standard IMS interface ISC. This article explains the features and limitations of this support. See [Configuring an Application Server](Configuring_an_Application_Server) for details of how to configure Clearwater to use this function, or [Plivo](Plivo) for an open-source application server/framework to work with.
+You can add new call features or functionality to calls by adding an application server. Clearwater supports application servers through the standard IMS interface ISC. This article explains the features and limitations of this support. See [Configuring an Application Server](Configuring_an_Application_Server.md) for details of how to configure Clearwater to use this function, or [Plivo](Plivo.md) for an open-source application server/framework to work with.

What is an application server?
==============================
16 changes: 8 additions & 8 deletions docs/Automated_Install.md
@@ -1,8 +1,8 @@
# Automated Install Instructions

-These instructions will take you through preparing for an automated install of Clearwater using Chef. For a high level look at the install process, and a discussion of the various install methods, see [Installation Instructions](Installation_Instructions). The automated install is the suggested method for installing a large-scale deployment of Clearwater. It can also be used to install an all-in-one node.
+These instructions will take you through preparing for an automated install of Clearwater using Chef. For a high level look at the install process, and a discussion of the various install methods, see [Installation Instructions](Installation_Instructions.md). The automated install is the suggested method for installing a large-scale deployment of Clearwater. It can also be used to install an all-in-one node.

-The automated install is only supported for deployments running in Amazon's EC2 cloud, where DNS is being provided by Amazon's Route53 service. If your proposed deployment doesn't meet these requirements, you should use the [Manual Install](Manual_Install) instructions instead.
+The automated install is only supported for deployments running in Amazon's EC2 cloud, where DNS is being provided by Amazon's Route53 service. If your proposed deployment doesn't meet these requirements, you should use the [Manual Install](Manual_Install.md) instructions instead.

## The Install Process

@@ -15,16 +15,16 @@ Once the first phase has been completed, multiple deployments (of various sizes)

The first phase:

-* [Installing a Chef server](Installing_a_Chef_server)
+* [Installing a Chef server](Installing_a_Chef_server.md)
- This server will track the created Clearwater nodes and allow the client access to them.
-* [Configuring a Chef client](Installing_a_Chef_client)
+* [Configuring a Chef client](Installing_a_Chef_client.md)
- This machine will be the one on which deployments will be defined and managed.

The second phase:

-* [Creating a deployment environment](Creating_a_deployment_environment)
+* [Creating a deployment environment](Creating_a_deployment_environment.md)
- The automated install supports the existence and management of multiple deployments simultaneously, each deployment lives in an environment to keep them separate.
-* [Creating the deployment](Creating_a_deployment_with_Chef)
+* [Creating the deployment](Creating_a_deployment_with_Chef.md)
- Actually provisioning the servers, installing the Clearwater software and configuring DNS.

## Next steps
@@ -33,5 +33,5 @@ Once you've followed the instructions above, your Clearwater deployment is ready

To test the deployment, you can try making some real calls, or run the provided live test framework.

-* [Making your first call](Making_your_first_call)
-* [Running the live test framework](Running_the_live_tests)
+* [Making your first call](Making_your_first_call.md)
+* [Running the live test framework](Running_the_live_tests.md)
2 changes: 1 addition & 1 deletion docs/Backups.md
@@ -13,7 +13,7 @@ This document describes
* the periodic automated local backup behavior
* how to restore from a backup.

-Note that if your Clearwater deployment is [integrated with an external HSS](External_HSS_Integration), the HSS is the master of ellis and homestead's data, and those nodes do not need to be backed up. However, homer's data still needs to be backed up.
+Note that if your Clearwater deployment is [integrated with an external HSS](External_HSS_Integration.md), the HSS is the master of ellis and homestead's data, and those nodes do not need to be backed up. However, homer's data still needs to be backed up.

## Listing Backups

2 changes: 1 addition & 1 deletion docs/CDF_Integration.md
@@ -20,7 +20,7 @@ This section discusses how to enable Rf billing to a given CDF.

Before connecting your deployment to a CDF, you must

-* [install Clearwater](Installation_Instructions)
+* [install Clearwater](Installation_Instructions.md)
* install an external CDF - details for this will vary depending on which CDF you are using.
* ensure your CDF's firewall allows incoming connections from the nodes in the Ralf cluster on the DIAMETER port (default 3868).

2 changes: 1 addition & 1 deletion docs/Cacti.md
@@ -16,7 +16,7 @@ This document describes how to

### Setting up a Cacti node

-Assuming you've followed the [Automated Chef install](Automated_Install),
+Assuming you've followed the [Automated Chef install](Automated_Install.md),
here are the steps to create and configure a Cacti node:

1. use knife box create to create a Cacti node - `knife box create -E
4 changes: 2 additions & 2 deletions docs/Clearwater_DNS_Usage.md
@@ -7,7 +7,7 @@ This document describes
* Clearwater's DNS strategy and requirements
* how to configure [AWS Route 53](http://aws.amazon.com/route53/) and [BIND](https://www.isc.org/downloads/bind/) to meet these.

-DNS is also used as part of the [ENUM](http://tools.ietf.org/rfc/rfc6116.txt) system for mapping E.164 numbers to SIP URIs. This isn't discussed in this document - instead see the separate [ENUM](enum) document.
+DNS is also used as part of the [ENUM](http://tools.ietf.org/rfc/rfc6116.txt) system for mapping E.164 numbers to SIP URIs. This isn't discussed in this document - instead see the separate [ENUM](enum.md) document.

*If you are installing an All-in-One Clearwater node, you do not need any DNS records and can ignore the rest of this page.*

@@ -122,7 +122,7 @@ The UEs need to know the identity of the DNS server too. In a testing environme

### AWS Route 53

-Clearwater's [automated install](Automated_Install) automatically configures AWS Route 53. There is no need to follow the following instructions if you are using the automated install.
+Clearwater's [automated install](Automated_Install.md) automatically configures AWS Route 53. There is no need to follow the following instructions if you are using the automated install.

The official [AWS Route 53 documentation](http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Welcome.html) is a good reference, and most of the following steps are links into it.

12 changes: 6 additions & 6 deletions docs/Clearwater_Elastic_Scaling.md
@@ -1,6 +1,6 @@
The core Clearwater nodes have the ability to elastically scale; in other words, you can grow and shrink your deployment on demand, without disrupting calls or losing data.

-This page explains how to use this elastic scaling function when using a deployment created through the [automated](Automated Install) or [manual](Manual Install) install processes. Note that, although the instructions differ between the automated and manual processes, the underlying operations that will be performed on your deployment are the same - the automated process simply uses chef to drive this rather than issuing the commands manually.
+This page explains how to use this elastic scaling function when using a deployment created through the [automated](Automated Install.md) or [manual](Manual Install.md) install processes. Note that, although the instructions differ between the automated and manual processes, the underlying operations that will be performed on your deployment are the same - the automated process simply uses chef to drive this rather than issuing the commands manually.

## Before scaling your deployment

@@ -20,15 +20,15 @@ Where the `<n>` values are how many nodes of each type you need. Once this comm

If you're scaling up your manual deployment, follow the following process.

-1. Spin up new nodes, following the [standard install process](Manual Install).
+1. Spin up new nodes, following the [standard install process](Manual Install.md).
2. On Sprout, Memento and Ralf nodes, update `/etc/clearwater/cluster_settings` to contain both a list of the old nodes (`servers=...`) and a (longer) list of the new nodes (`new_servers=...`) and then run `service <process> reload` to re-read this file.
3. On new Memento, Homestead and Homer nodes, follow the [instructions on the Cassandra website](http://www.datastax.com/documentation/cassandra/1.2/cassandra/operations/ops_add_node_to_cluster_t.html) to join the new nodes to the existing cluster.
4. On Sprout and Ralf nodes, update `/etc/chronos/chronos.conf` to contain a list of all the nodes (see [here](https://github.com/Metaswitch/chronos/blob/dev/doc/clustering.md) for details of how to do this) and then run `service chronos reload` to re-read this file.
5. On Sprout, Memento and Ralf nodes, run `service astaire reload` to start resynchronization.
6. On Sprout and Ralf nodes, run `service chronos resync` to start resynchronization of Chronos timers.
7. Update DNS to contain the new nodes.
-8. On Sprout, Memento and Ralf nodes, wait until Astaire has resynchronized, either by running `service astaire wait-sync` or by polling over [SNMP](Clearwater SNMP Statistics).
-9. On Sprout and Ralf nodes, wait until Chronos has resynchronized, either by running `service chronos wait-sync` or by polling over [SNMP](Clearwater SNMP Statistics).
+8. On Sprout, Memento and Ralf nodes, wait until Astaire has resynchronized, either by running `service astaire wait-sync` or by polling over [SNMP](Clearwater SNMP Statistics.md).
+9. On Sprout and Ralf nodes, wait until Chronos has resynchronized, either by running `service chronos wait-sync` or by polling over [SNMP](Clearwater SNMP Statistics.md).
10. On all nodes, update /etc/clearwater/cluster_settings to just contain the new list of nodes (`servers=...`) and then run `service <process> reload` to re-read this file.

If you're scaling down your manual deployment, follow the following process.
@@ -39,8 +39,8 @@ If you're scaling down your manual deployment, follow the following process.
4. On Sprout and Ralf nodes, update `/etc/chronos/chronos.conf` to mark the nodes that are being scaled down as leaving (see [here](https://github.com/Metaswitch/chronos/blob/dev/doc/clustering.md) for details of how to do this) and then run `service chronos reload` to re-read this file.
5. On Sprout, Memento and Ralf nodes, run `service astaire reload` to start resynchronization.
6. On the Sprout and Ralf nodes that are staying in the Chronos cluster, run `service chronos resync` to start resynchronization of Chronos timers.
-7. On Sprout, Memento and Ralf nodes, wait until Astaire has resynchronized, either by running `service astaire wait-sync` or by polling over [SNMP](Clearwater SNMP Statistics).
-8. On Sprout and Ralf nodes, wait until Chronos has resynchronized, either by running `service chronos wait-sync` or by polling over [SNMP](Clearwater SNMP Statistics).
+7. On Sprout, Memento and Ralf nodes, wait until Astaire has resynchronized, either by running `service astaire wait-sync` or by polling over [SNMP](Clearwater SNMP Statistics.md).
+8. On Sprout and Ralf nodes, wait until Chronos has resynchronized, either by running `service chronos wait-sync` or by polling over [SNMP](Clearwater SNMP Statistics.md).
9. On Sprout, Memento and Ralf nodes, update /etc/clearwater/cluster_settings to just contain the new list of nodes (`servers=...`) and then run `service <process> reload` to re-read this file.
10. On the Sprout and Ralf nodes that are staying in the cluster, update `/etc/chronos/chronos.conf` so that it only contains entries for the staying nodes in the cluster and then run `service chronos reload` to re-read this file.
11. On the nodes that are about to be turned down, run `monit unmonitor <process> && service <process> quiesce|stop` to start the main process quiescing.
2 changes: 1 addition & 1 deletion docs/Clearwater_Ruby_Coding_Guidelines.md
@@ -108,4 +108,4 @@ Strongly based on https://github.com/chneukirchen/styleguide/ with some local ch
* Do not program defensively.
* Keep the code simple.
* Be consistent.
-* Use common sense.
+* Use common sense.
4 changes: 2 additions & 2 deletions docs/Clearwater_Tour.md
@@ -65,7 +65,7 @@ We have now registered for the new line.
Connecting an Android phone
---------------------------

-See [here](Configuring_the_native_Android_SIP_client) for instructions.
+See [here](Configuring_the_native_Android_SIP_client.md) for instructions.

Making calls
------------
@@ -109,7 +109,7 @@ A brief note on supported dialing formats:
WebRTC support
--------------

-See [WebRTC support in Clearwater](WebRTC_support_in_Clearwater) for
+See [WebRTC support in Clearwater](WebRTC_support_in_Clearwater.md) for
how to use a browser instead of a SIP phone as a client.

VoLTE call services
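After a sweep like this, it is worth checking that no suffix-less internal links remain anywhere in the docs. A hypothetical verification snippet, reusing the same (assumed) link pattern as the sketch above:

```python
import re
from pathlib import Path

# Report any remaining relative links that still lack the ".md" suffix.
LINK = re.compile(r'\[([^\]]+)\]\((?!https?://)([^)#]+?)(?<!\.md)\)')

for path in Path("docs").glob("*.md"):
    for m in LINK.finditer(path.read_text()):
        print(f"{path}: [{m.group(1)}]({m.group(2)})")
```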
