This repository has been archived by the owner on Feb 27, 2020. It is now read-only.

Merge pull request #285 from Metaswitch/bpa_configaccess
Bpa configaccess
BennettAllen1 committed Nov 3, 2017
2 parents afab55b + 73fecad commit dbad202
Showing 14 changed files with 45 additions and 54 deletions.
15 changes: 3 additions & 12 deletions docs/Backups.md
@@ -259,17 +259,8 @@ To backup the shared configuration:

To restore a previous backup, follow these steps:

- * Copy all the files listed above except `shared_config` to `/etc/clearwater` on one of your sprout nodes.
- * Run the following commands on that node:
-
-   `sudo cw-upload_bgcf_json`
-   `sudo cw-upload_enum_json`
-   `sudo cw-upload_scscf_json`
-   `sudo cw-upload_shared_ifcs_xml`
-   `sudo cw-upload_fallback_ifcs_xml`
-
- * Run `cw-config download shared_config` to download a copy of the current `shared_config`.
- * Copy the backed up version of `shared_config` over the top of the downloaded copy.
- * Run `cw-config upload shared_config` to push the config to all the nodes in the cluster.
+ * Run `cw-config download {config type}` to download a copy of the current `{config type}`.
+ * Copy the backed up version of `{config type}` over the top of the downloaded copy.
+ * Run `cw-config upload {config type}` to push the config to all the nodes in the cluster.

See [Modifying Clearwater settings](Modifying_Clearwater_settings.md) for more details on this.
4 changes: 2 additions & 2 deletions docs/CDF_Integration.md
@@ -33,15 +33,15 @@ Ralf implements the behavior specified in [RFC3588](http://www.ietf.org/rfc/rfc3

### Configuring the billing realm

- To point Ralf at the billing DIAMETER realm, add the following line to `/etc/clearwater/shared_config` and follow [this process](Modifying_Clearwater_settings.md) to apply the change:
+ To point Ralf at the billing DIAMETER realm, add the following line to `shared_config` and follow [this process](Modifying_Clearwater_settings.md) to apply the change:

billing_realm=<DIAMETER billing realm>

### Selecting a specific CDF in the realm

_Note:_ Bono only has support for selecting CDF identities based on static configuration of a single identity. Other P-CSCFs may have support for load-balancing or enabling backup CDF identities.

- If you have a CDF set up to receive Rf billing messages from your deployment, you will need to modify the `/etc/clearwater/shared_config` file and follow [this process](Modifying_Clearwater_settings.md) to apply the change:
+ If you have a CDF set up to receive Rf billing messages from your deployment, you will need to modify the `shared_config` file and follow [this process](Modifying_Clearwater_settings.md) to apply the change:

cdf_identity=<CDF DIAMETER Identity>

8 changes: 4 additions & 4 deletions docs/Clearwater_Configuration_Options_Reference.md
@@ -56,7 +56,7 @@ This section describes settings that are common across the entire deployment.

### Core options

- This section describes options for the basic configuration of a Clearwater deployment - such as the hostnames of the six node types and external services such as email servers or the Home Subscriber Server. These options should be set in the `/etc/clearwater/shared_config` file (in the format `name=value`, e.g. `home_domain=example.com`).
+ This section describes options for the basic configuration of a Clearwater deployment - such as the hostnames of the six node types and external services such as email servers or the Home Subscriber Server. These options should be set in a local copy (in the format `name=value`, e.g. `home_domain=example.com`) by running `cw-config download shared_config`, editing the downloaded copy, and then running `cw-config upload shared_config` when finished.

* `home_domain` - this is the main SIP domain of the deployment, and determines which SIP URIs Clearwater will treat as local. It will usually be a hostname resolving to all the P-CSCFs (e.g. the Bono nodes). Other domains can be specified through additional_home_domains, but Clearwater will treat this one as the default (for example, when handling `tel:` URIs).
* `sprout_hostname` - a hostname that resolves by DNS round-robin to the signaling interface of all Sprout nodes in the cluster.
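The `name=value` format mentioned above is plain key-value text. As a rough illustration only (this parser is not part of Clearwater, and its handling of comments and blank lines is an assumption), it can be read like this:

```python
def parse_shared_config(text):
    """Parse simple name=value lines, skipping blank lines and '#' comments."""
    options = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.partition("=")
        options[name.strip()] = value.strip()
    return options

sample = """# Core options
home_domain=example.com
sprout_hostname=sprout.example.com
"""
print(parse_shared_config(sample)["home_domain"])  # example.com
```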
@@ -122,7 +122,7 @@ As a concrete example, below are the S-CSCF options and the default values.

### Advanced options

- This section describes optional configuration options, particularly for ensuring conformance with other IMS devices such as HSSes, ENUM servers, application servers with strict requirements on Record-Route headers, and non-Clearwater I-CSCFs. These options should be set in the `/etc/clearwater/shared_config` file (in the format `name=value`, e.g. `icscf=5052`).
+ This section describes optional configuration options, particularly for ensuring conformance with other IMS devices such as HSSes, ENUM servers, application servers with strict requirements on Record-Route headers, and non-Clearwater I-CSCFs. These options should be set in a local copy (in the format `name=value`, e.g. `icscf=5052`) by running `cw-config download shared_config`, editing the downloaded copy, and then running `cw-config upload shared_config` when finished.

* `homestead_provisioning_port` - the HTTP port the homestead provisioning interface on Dime listens on. Defaults to 8889. Not needed when using an external HSS.
* `sas_server` - the IP address or hostname of your Metaswitch Service Assurance Server for call logging and troubleshooting. Optional.
@@ -224,7 +224,7 @@ This section describes optional configuration options, particularly for ensuring

### Experimental options

- This section describes optional configuration options which may be useful, but are not heavily-used or well-tested by the main Clearwater development team. These options should be set in the `/etc/clearwater/shared_config` file (in the format `name=value`, e.g. `ralf_secure_listen_port=12345`).
+ This section describes optional configuration options which may be useful, but are not heavily used or well tested by the main Clearwater development team. These options should be set in a local copy (in the format `name=value`, e.g. `ralf_secure_listen_port=12345`) by running `cw-config download shared_config`, editing the downloaded copy, and then running `cw-config upload shared_config` when finished.

* `ralf_secure_listen_port` - this determines the port the ralf process on Dime listens on for TLS-secured Diameter connections.
* `hs_secure_listen_port` - this determines the port the homestead process on Dime listens on for TLS-secured Diameter connections.
@@ -250,7 +250,7 @@ This section describes settings that may vary between systems in the same deploy

## DNS Config

- This section describes the static DNS config which can be used to override DNS results. This is set in `/etc/clearwater/dns.json`. Currently, the only supported record type is CNAME and the only component which uses this is Chronos and the I-CSCF. The file has the format:
+ This section describes the static DNS config which can be used to override DNS results. These options should be set in a local copy by running `cw-config download dns_json`, editing the downloaded copy, and then running `cw-config upload dns_json` when finished. Currently, the only supported record type is CNAME, and the only components which use this are Chronos and the I-CSCF. The file has the format:

{
"hostnames": [
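Since the snippet above is truncated, here is a sketch of how such a CNAME override could be consumed. The `name`/`records`/`rrtype`/`target` field names and the example hostnames are assumptions, not confirmed by the excerpt:

```python
import json

# Hypothetical dns.json contents; field names below the "hostnames" key
# are assumptions based on the truncated snippet above.
dns_config = json.loads("""
{
  "hostnames": [
    {
      "name": "one.example.com",
      "records": [{"rrtype": "CNAME", "target": "two.example.com"}]
    }
  ]
}
""")

def cname_override(config, hostname):
    """Return the CNAME target configured for hostname, or None if no
    override exists (in which case normal DNS resolution would apply)."""
    for entry in config.get("hostnames", []):
        if entry["name"] != hostname:
            continue
        for record in entry.get("records", []):
            if record.get("rrtype") == "CNAME":
                return record["target"]
    return None
```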
4 changes: 2 additions & 2 deletions docs/Clearwater_DNS_Usage.md
@@ -29,7 +29,7 @@ By default, Clearwater routes all DNS requests through an instance of [dnsmasq](
> equal: it picks the one to use using an algorithm designed to avoid
> nameservers which aren't responding.
- If the `signaling_dns_server` option is set in `/etc/clearwater/shared_config` (which is mandatory when using [traffic separation](Multiple_Network_Support.md)), Clearwater will not use dnsmasq. Instead, resiliency is achieved by being able to specify up to three servers in a comma-separated list (e.g. `signaling_dns_server=1.2.3.4,10.0.0.1,192.168.1.1`), and Clearwater will fail over between them as follows:
+ If the `signaling_dns_server` option is set in `shared_config` (which is mandatory when using [traffic separation](Multiple_Network_Support.md)), Clearwater will not use dnsmasq. Instead, resiliency is achieved by specifying up to three servers in a comma-separated list (e.g. `signaling_dns_server=1.2.3.4,10.0.0.1,192.168.1.1`), and Clearwater will fail over between them as follows:

* It will always query the first server in the list first
* If this returns SERVFAIL or times out (which happens after a randomised 500ms-1000ms period), it will resend the query to the second server
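The failover behaviour described above can be sketched as a simple ordered loop. This is an illustration of the stated rules only, not Clearwater's resolver code, and it omits the randomised timer:

```python
SERVFAIL, TIMEOUT = "SERVFAIL", "TIMEOUT"

def resolve(servers, query_fn):
    """Query each configured server in order (the first is always tried
    first), failing over on SERVFAIL or timeout. Sketch only; the real
    implementation waits a randomised 500ms-1000ms before failing over."""
    for server in servers[:3]:  # up to three servers may be configured
        result = query_fn(server)
        if result not in (SERVFAIL, TIMEOUT):
            return result
    return None  # every server failed

# The server list comes from the comma-separated option value:
servers = "1.2.3.4,10.0.0.1,192.168.1.1".split(",")
```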
@@ -131,7 +131,7 @@ Bono needs to be able to contact the Sprout nodes in each site, so it needs to h
* `scscf.sprout.siteB.<zone>` (NAPTR, optional) - specifies transport requirements for accessing Sprout - service `SIP+D2T` maps to `_sip._tcp.scscf.sprout.siteB.<zone>`
* `_sip._tcp.scscf.sprout.siteB.<zone>` (SRV) - cluster SRV record for Sprout, resolving to port 5054 for all of the per-node records in siteB


## Configuration

Clearwater can work with any DNS server that meets the [requirements above](#dns-server). However, most of our testing has been performed with
2 changes: 1 addition & 1 deletion docs/Clearwater_stress_testing.md
@@ -112,7 +112,7 @@ The stress test logs to `/var/log/clearwater-sip-stress/sipp.<index>.out`.

There is some extra configuration needed in this mode, so you should:

- * set the following properties in `/etc/clearwater/shared_config`:
+ * set the following properties in `shared_config`:
* (required) `home_domain` - the home domain of the deployment under test
* (optional) `bono_servers` - a list of bono servers in this deployment
* (optional) `stress_target` - the target host (defaults to the `node_idx`-th entry in `bono_servers` or, if there are no `bono_servers`, defaults to `home_domain`)
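The `stress_target` defaulting rule above can be sketched as follows. This is illustrative only; in particular, treating `node_idx` as 1-based is an assumption:

```python
def effective_stress_target(stress_target, bono_servers, node_idx, home_domain):
    """Apply the defaulting described above: an explicit stress_target wins,
    then the node_idx-th bono server, then home_domain. Sketch only;
    node_idx is assumed to be 1-based here."""
    if stress_target:
        return stress_target
    if bono_servers:
        return bono_servers[node_idx - 1]
    return home_domain
```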
2 changes: 1 addition & 1 deletion docs/Configuring_an_Application_Server.md
@@ -135,7 +135,7 @@ If you want to create a SIP-over-UDP deployment, it will be necessary for all of

To enable SIP-over-UDP, you will need to set the following configuration options.

- In `/etc/clearwater/shared_config` set or update the fields:
+ In `shared_config`, set or update the fields:

scscf_uri="sip:scscf.<sprout_hostname>;transport=udp"
disable_tcp_switch=Y
4 changes: 2 additions & 2 deletions docs/ENUM.md
@@ -394,12 +394,12 @@ you can instead change the suffix, e.g. to .e164.arpa.ngv.example.com, by

## ENUM and Sprout

- To enable ENUM lookups on Sprout, edit `/etc/clearwater/shared_config` and add the following configuration to use either an ENUM server (recommended) or an ENUM file:
+ To enable ENUM lookups on Sprout, edit `shared_config` using `cw-config` and add the following configuration to use either an ENUM server (recommended) or an ENUM file:

enum_server=<IP addresses of enum servers>
enum_file=<location of enum file>

- If you use the ENUM file, enter the ENUM rules in the JSON format (shown above). If you are using the enhanced node management framework provided by `clearwater-etcd`, and you use `/etc/clearwater/enum.json` as your ENUM filename, you can automatically synchronize changes across your deployment by running `sudo cw-upload_enum_json` after creating or updating the file. In this case, other Sprout nodes will automatically download and use the uploaded ENUM rules.
+ If you use the ENUM file, enter the ENUM rules in the JSON format (shown above). If you are using the enhanced node management framework provided by `clearwater-etcd`, and you use `/etc/clearwater/enum.json` as your ENUM filename, you can make changes across your deployment by running `cw-config download enum_json`, updating the local copy as specified during download, and then running `cw-config upload enum_json`. In this case, other Sprout nodes will automatically download and use the uploaded ENUM rules.

It's possible to configure Sprout with secondary and tertiary ENUM servers, by providing a comma-separated list (e.g. `enum_server=1.2.3.4,10.0.0.1,192.168.1.1`). If this is done:

2 changes: 1 addition & 1 deletion docs/External_HSS_Integration.md
@@ -44,7 +44,7 @@ Do not configure any Clearwater subscribers via Ellis!

### Enabling external HSS support on an existing deployment

- To enable external HSS support, you will need to modify the contents of `/etc/clearwater/shared_config` so that the block that reads
+ To enable external HSS support, you will need to modify the contents of `shared_config` so that the block that reads

# HSS configuration
hss_hostname=0.0.0.0
12 changes: 6 additions & 6 deletions docs/Handling_Multiple_Failed_Nodes.md
@@ -36,7 +36,7 @@ The next step is to ensure that the configuration files on each node are correct

#### Any of the master nodes - Shared configuration

- The shared configuration is at `/etc/clearwater/shared_config`. Verify that this is correct, then copy this file onto every other master node. Please see the [configuration options reference](http://clearwater.readthedocs.io/en/latest/Clearwater_Configuration_Options_Reference.html) for more details on how to set the configuration values.
+ The shared configuration is at `/etc/clearwater/shared_config`. Verify that this is correct, then copy this file onto every other master node using `cw-config`. Please see the [configuration options reference](http://clearwater.readthedocs.io/en/latest/Clearwater_Configuration_Options_Reference.html) for more details on how to set the configuration values.

#### Vellum - Chronos configuration

@@ -68,16 +68,16 @@ If the Cassandra cluster isn't healthy, you must fix this up before continuing,

Check the JSON configuration files on all Sprout nodes in the affected site:

- * Verify that the `/etc/clearwater/enum.json` file is correct, fixing it up if it's not.
- * Verify that the `/etc/clearwater/s-cscf.json` file is correct, fixing it up if it's not.
- * Verify that the `/etc/clearwater/bgcf.json` file is correct, fixing it up if it's not.
+ * Verify that the `/etc/clearwater/enum.json` file is correct, fixing it up using `cw-config` if it's not.
+ * Verify that the `/etc/clearwater/s-cscf.json` file is correct, fixing it up using `cw-config` if it's not.
+ * Verify that the `/etc/clearwater/bgcf.json` file is correct, fixing it up using `cw-config` if it's not.

### Sprout - XML configuration

Check the XML configuration files on all Sprout nodes in the affected site:

- * Verify that the `/etc/clearwater/shared_ifcs.xml` file is correct, fixing it up if it's not.
- * Verify that the `/etc/clearwater/fallback_ifcs.xml` file is correct, fixing it up if it's not.
+ * Verify that the `/etc/clearwater/shared_ifcs.xml` file is correct, fixing it up using `cw-config` if it's not.
+ * Verify that the `/etc/clearwater/fallback_ifcs.xml` file is correct, fixing it up using `cw-config` if it's not.

Running one of the commands `sudo cw-validate_{shared|fallback}_ifcs_xml` will check if the specified file is syntactically correct.
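The syntactic check those commands perform amounts, at minimum, to verifying the file parses as XML. A rough stand-in sketch (the actual validators may apply stricter, schema-aware rules):

```python
import xml.etree.ElementTree as ET

def is_well_formed(xml_text):
    """Return True if xml_text parses as XML. A rough stand-in for the
    syntax check done by cw-validate_{shared|fallback}_ifcs_xml, which
    may be stricter than plain well-formedness."""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False
```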

4 changes: 2 additions & 2 deletions docs/IBCF.md
@@ -46,7 +46,7 @@ Refer to the [ENUM guide](ENUM.md) for more about how to configure ENUM.

## BGCF Configuration

- BGCF (Border Gateway Control Function) configuration is stored in the `bgcf.json` file in `/etc/clearwater` on each Sprout node (both I- and S-CSCF). The `bgcf.json` file stores two types of mappings.
+ BGCF (Border Gateway Control Function) configuration is stored in the `bgcf.json` file in `/etc/clearwater` on each Sprout node (both I- and S-CSCF). The `bgcf.json` file stores two types of mappings. To edit `bgcf.json`, first download a local copy by running `cw-config download bgcf_json`.

- The first maps from SIP trunk IP addresses and/or domain names to IBCF SIP URIs
- The second maps from a telephone number (contained in the `rn` parameter of a Tel URI, the `rn` parameter in a SIP URI, a TEL URI or the user part of a SIP URI with a user=phone parameter) to an IBCF SIP URI using prefix matching on the number.
@@ -78,4 +78,4 @@ There can be only one route set for any given SIP trunk IP address or domain nam

A default route set can be configured by having an entry where the domain is set to `"*"`. This will be used by the BGCF if it is trying to route based on the domain and there's no explicit match for the domain in the configuration, or if it is trying to route based on a telephone number and there's no explicit match for the number in the configuration.
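The selection logic described above can be sketched as follows. This is an illustration only: it does not reproduce `bgcf.json`'s real schema, the route values are hypothetical, and longest-prefix-wins for number matching is an assumption:

```python
def select_route_set(routes, number=None, domain=None):
    """Pick a route set: exact match on the trunk domain, or prefix match
    on the telephone number, falling back to the "*" default entry.
    Sketch only; longest-prefix-wins is an assumption."""
    if domain is not None and domain in routes:
        return routes[domain]
    if number is not None:
        matches = [p for p in routes if p != "*" and number.startswith(p)]
        if matches:
            return routes[max(matches, key=len)]  # longest prefix wins
    return routes.get("*")  # default route set, if configured
```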

- After making a change to this file you should run `sudo cw-upload_bgcf_json` to ensure the change is synchronized to other Sprout nodes on your system (including nodes added in the future).
+ After making a change to this file you should run `cw-config upload bgcf_json` to ensure the change is synchronized to other Sprout nodes on your system (including nodes added in the future).
