
Jenkins checking in autogenerated rST files
AthenaNebula Jenkins committed Sep 7, 2017
1 parent 4974703 commit 5906e0e
Showing 10 changed files with 78 additions and 47 deletions.
33 changes: 17 additions & 16 deletions autogenerated_rst_docs/Clearwater_Architecture.rst
@@ -82,8 +82,8 @@
authentication credentials and user profile information. It can either
master the data (in which case it exposes a web services provisioning
interface) or can pull the data from an IMS compliant HSS over the Cx
interface. The Homestead nodes themselves are stateless - the mastered /
cached subscriber data is all stored on Vellum (Cassandra for the
mastered data, and Astaire/Memcached for the cached data).

In the IMS architecture, the HSS mirror function is considered to be
part of the I-CSCF and S-CSCF components, so in Clearwater I-CSCF and
@@ -106,23 +106,24 @@
As described above, Vellum is used to maintain all long-lived state in
the deployment. It does this by running a number of cloud-optimized,
distributed storage clusters:

- `Cassandra <http://cassandra.apache.org/>`__. Cassandra is used by
  Homestead to store authentication credentials and profile information
  when an HSS is not in use, and is used by Homer to store MMTEL service
  settings. Vellum exposes Cassandra's Thrift API.
- `etcd <https://github.com/coreos/etcd>`__. etcd is used by Vellum
  itself to share clustering information between Vellum nodes and by
  other nodes in the deployment for shared configuration.
- `Chronos <https://github.com/Metaswitch/chronos>`__. Chronos is a
  distributed, redundant, reliable timer service developed by
  Clearwater. It is used by Sprout and Ralf nodes to enable timers to be
  run (e.g. for SIP Registration expiry) without pinning operations to a
  specific node (one node can set the timer and another act on it when
  it pops). Chronos is accessed via an HTTP API.
- `Memcached <https://memcached.org/>`__ /
  `Astaire <https://github.com/Metaswitch/astaire>`__. Vellum also runs
  a Memcached cluster fronted by Astaire. Astaire is a service developed
  by Clearwater that enables more rapid scale up and scale down of
  memcached clusters. This cluster is used by Sprout for storing
  registration state, by Ralf for storing session state, and by
  Homestead for storing cached subscriber data.
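For a quick feel for how these stores show up on a running Vellum node,
the sketch below is one way to sanity-check them. It is illustrative
only: it assumes the standard Clearwater monit packaging and that
Cassandra's ``nodetool`` is available on the node, neither of which is
stated in this section.

::

# Illustrative checks on a Vellum node (assumptions noted above).
sudo monit summary   # monit supervises the Vellum processes (Cassandra,
                     # etcd, Chronos, Astaire, Memcached)
nodetool status      # Cassandra's own view of ring membership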

Homer (XDMS)
~~~~~~~~~~~~
@@ -99,10 +99,11 @@
settings, you should destroy and recreate the node instead.
must be set explicitly on nodes that colocate function.
- ``remote_cassandra_seeds`` - this is used to connect the Cassandra
cluster in your second site to the Cassandra cluster in your first
site; this is only necessary in a geographically redundant deployment
which is using at least one of Homestead-Prov, Homer or Memento. It
should be set to an IP address of a Vellum node in your first site, and
it should only be set on the first Vellum node in your second site (see
the example after this list).
- ``scscf_node_uri`` - this can be optionally set, and only applies to
nodes running an S-CSCF. If it is configured, it almost certainly
needs configuring on each S-CSCF node in the deployment.
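The following minimal sketch illustrates the ``remote_cassandra_seeds``
option described above. It assumes the option is set in
``/etc/clearwater/local_config`` alongside the other node-specific
settings, and the IP address is purely a placeholder.

::

# Hypothetical fragment for the *first* Vellum node in the second site
# of a GR deployment that uses Homestead-Prov, Homer or Memento.
# 10.1.0.10 stands in for the address of a Vellum node in the first site.
remote_cassandra_seeds=10.1.0.10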
@@ -226,6 +227,12 @@
file (in the format ``name=value``, e.g. ``home_domain=example.com``).
a non-GR deployment, only one domain is provided (and the site name
is optional). For a GR deployment, each domain is identified by the
site name, and one of the domains must relate to the local site.
- ``homestead_impu_store`` - this is the location of Homestead's IMPU
store (see the example after this list). It has the format
``<site_name>=<domain>[:<port>][,<site_name>=<domain>[:<port>]]``. In
a non-GR deployment, only one domain is provided (and the site name
is optional). For a GR deployment, each domain is identified by the
site name, and one of the domains must relate to the local site.
- ``memento_auth_store`` - this is the location of Memento's
authorization vector store. It just has the format
``<domain>[:port]``. If not present, defaults to the loopback IP.
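To make the store-location formats concrete, here is a hedged sketch of
how ``homestead_impu_store`` and ``memento_auth_store`` might be set;
the domains and site names are placeholders rather than recommended
values.

::

# Non-GR deployment: a single domain, site name omitted.
homestead_impu_store=vellum.example.com

# GR deployment: one <site_name>=<domain> pair per site, one of which
# must be the local site.
homestead_impu_store="siteA=vellum-siteA.example.com,siteB=vellum-siteB.example.com"

# memento_auth_store takes just <domain>[:port] and defaults to the
# loopback IP if omitted.
memento_auth_store=vellum.example.com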
@@ -634,6 +641,9 @@
e.g. ``icscf=5052``).
but increases the volume of data sent to SAS.
- ``dns_timeout`` - The time in milliseconds that Clearwater will wait
for a response from the DNS server (defaults to 200 milliseconds).
- ``homestead_cache_threads`` - The number of threads used by Homestead
for accessing its subscriber data cache. Defaults to 50 times the number
of CPU cores.
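A purely illustrative fragment showing these options in ``name=value``
form (the values are arbitrary examples, not recommendations):

::

# Wait up to 500ms for DNS responses instead of the 200ms default.
dns_timeout=500

# Override the default of 50 times the CPU core count.
homestead_cache_threads=64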

Experimental options
~~~~~~~~~~~~~~~~~~~~
@@ -713,7 +723,7 @@
the format ``name=value``, e.g. ``log_level=5``).
When this is set to 'Y', it simply accepts all REGISTERs - obviously
this is very insecure and should not be used in production.
- ``num_http_threads`` (homestead) - determines the number of HTTP
worker threads that will be used to process requests. Defaults to 4
times the number of CPU cores on the system.
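As a sketch only, overriding this on a Homestead node might look like
the fragment below; the assumption that such per-node overrides live in
``/etc/clearwater/user_settings`` is an assumption, not something stated
in this section.

::

# Hypothetical per-node override on a Homestead node.
num_http_threads=8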

DNS Config
9 changes: 6 additions & 3 deletions autogenerated_rst_docs/Configuring_GR_deployments.rst
@@ -61,9 +61,12 @@
created, or it's ``site1``.
- Update the Chronos configuration on your Vellum nodes on your
first site to add the GR configuration file - instructions
`here <http://clearwater.readthedocs.io/en/latest/Manual_Install.html#chronos-configuration>`__.
- If you are using any of Homestead-Prov, Homer or Memento:

  - Update Cassandra's strategy by running
    ``cw-update_cassandra_strategy`` on any Vellum node in your entire
    deployment (see the example after this list).

- At this point, your first and second sites are replicating data
between themselves, but no external traffic is going to your
second site.
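For illustration, the Cassandra strategy update above is a single
command; the sketch below assumes it is run with ``sudo``, as the other
``cw-*`` tools in these documents are.

::

# Only needed if Homestead-Prov, Homer or Memento is deployed.
# Run once, on any one Vellum node anywhere in the deployment.
sudo cw-update_cassandra_strategy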
2 changes: 1 addition & 1 deletion autogenerated_rst_docs/External_HSS_Integration.rst
@@ -26,7 +26,7 @@
mastered in Vellum's Cassandra database.

When Clearwater is deployed with an external HSS, HSS data is queried
from the external HSS via its Cx/Diameter interface and is then cached
in Memcached on Vellum.

Clearwater uses the following Cx message types.

13 changes: 7 additions & 6 deletions autogenerated_rst_docs/Geographic_redundancy.rst
@@ -27,9 +27,9 @@
Vellum.

Vellum has 3 databases, which support Geographic Redundancy differently:

- The Homestead-Prov, Homer and Memento databases are backed by
Cassandra, which is aware of local and remote peers, so these are a
single cluster split across the two geographic regions.
- Chronos is aware of local peers and the remote cluster, and handles
replicating timers across the two sites itself.
- There is one memcached cluster per geographic region. Although
@@ -42,9 +42,10 @@
Sprout nodes use the local Vellum cluster for Chronos and both local and
remote Vellum clusters for memcached (via Astaire). If the Sprout node
includes Memento, then it also uses the local Vellum cluster for
Cassandra. Dime nodes use the local Vellum cluster for Chronos and both
local and remote Vellum clusters for memcached (via Astaire). If
Homestead-Prov is in use, then it also uses the local Vellum cluster for
Cassandra.

Communications between nodes in different sites should be secure - for
example, if it is going over the public internet rather than a private
3 changes: 2 additions & 1 deletion autogenerated_rst_docs/Handling_Failed_Nodes.rst
@@ -72,5 +72,6 @@
session for each node that has failed.

- ``sudo cw-mark_node_failed "vellum" "memcached" <failed node IP>``
- ``sudo cw-mark_node_failed "vellum" "chronos" <failed node IP>``
If you are using any of Homestead-Prov, Homer or Memento, also run:

- ``sudo cw-mark_node_failed "vellum" "cassandra" <failed node IP>``
9 changes: 7 additions & 2 deletions autogenerated_rst_docs/Handling_Multiple_Failed_Nodes.rst
@@ -112,8 +112,8 @@
Vellum - Memcached configuration
Vellum - Cassandra configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you are using any of Homestead-Prov, Homer or Memento, check that the
Cassandra cluster is healthy by running the following on a Vellum node:

::

@@ -210,6 +210,11 @@
Run these commands on one Vellum node in the affected site:

/usr/share/clearwater/clearwater-cluster-manager/scripts/load_from_chronos_cluster vellum
/usr/share/clearwater/clearwater-cluster-manager/scripts/load_from_memcached_cluster vellum

If you are using any of Homestead-Prov, Homer or Memento, also run:

::

/usr/share/clearwater/clearwater-cluster-manager/scripts/load_from_cassandra_cluster vellum

Verify the cluster state is correct in etcd by running sudo
8 changes: 6 additions & 2 deletions autogenerated_rst_docs/Handling_Site_Failure.rst
@@ -19,13 +19,17 @@
deployment is available
Recovery
~~~~~~~~

If you are using any of Homestead-Prov, Homer or Memento, to recover
from this situation all you need to do is remove the failed Vellum
nodes from the Cassandra cluster.

::

* From any Vellum node in the remaining site, run `cw-remove_site_from_cassandra <site ID - the name of the failed site>`
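For instance, if the failed site had been named ``siteB``, the
invocation might look like the sketch below; the site name is a
placeholder and the use of ``sudo`` is an assumption based on the other
``cw-*`` tools.

::

sudo cw-remove_site_from_cassandra siteB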

If you are not using any of Homestead-Prov, Homer or Memento, you do not
need to do anything to recover the single remaining site.

You should now have a working single-site cluster, which can continue to
run as a single site, or be safely paired with a new remote site
(details on how to set up a new remote site are
25 changes: 15 additions & 10 deletions autogenerated_rst_docs/Manual_Install.rst
@@ -151,12 +151,14 @@
deployment <Geographic_redundancy.html>`__, then:
- You should set ``local_site_name`` in
``/etc/clearwater/local_config``. The name you choose is arbitrary,
but must be the same for every node in the site. This name will also
be used in the ``remote_site_names``, ``sprout_registration_store``,
``homestead_impu_store`` and ``ralf_session_store`` configuration
options set in shared config (described below).
- If your deployment uses Homestead-Prov, Homer or Memento:

  - On the first Vellum node in the second site, you should set
    ``remote_cassandra_seeds`` to the IP address of a Vellum node in
    the first site (see the example below).
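As an illustrative sketch of the two points above, a
``/etc/clearwater/local_config`` fragment for a node in the second site
might look like this (the site name and IP address are placeholders):

::

# Every node in the second site:
local_site_name=siteB

# Only on the *first* Vellum node in the second site, and only if
# Homestead-Prov, Homer or Memento is in use; the IP is that of a
# Vellum node in the first site.
remote_cassandra_seeds=10.1.0.10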

Install Node-Specific Software
------------------------------
@@ -263,6 +265,7 @@
then you don't need to include it.
sprout_registration_store=vellum.<site_name>.<zone>
hs_hostname=hs.<site_name>.<zone>:8888
hs_provisioning_hostname=hs.<site_name>.<zone>:8889
homestead_impu_store=vellum.<zone>
ralf_hostname=ralf.<site_name>.<zone>:10888
ralf_session_store=vellum.<zone>
xdms_hostname=homer.<site_name>.<zone>:7888
@@ -322,8 +325,9 @@
deployment <Geographic_redundancy.html>`__, some of the options require
information about all sites to be specified. You need to set the
``remote_site_names`` configuration option to include the
``local_site_name`` of each site, replace the
``sprout_registration_store``, ``homestead_impu_store`` and
``ralf_session_store`` with the values as described in `Clearwater
Configuration Options
Reference <Clearwater_Configuration_Options_Reference.html>`__, and set
the ``sprout_chronos_callback_uri`` and ``ralf_chronos_callback_uri`` to
deployment-wide hostnames. For example, for sites named ``siteA`` and

@@ -332,8 +336,9 @@
::

remote_site_names=siteA,siteB
sprout_registration_store="siteA=vellum-siteA.<zone>,siteB=vellum-siteB.<zone>"
homestead_impu_store="siteA=vellum-siteA.<zone>,siteB=vellum-siteB.<zone>"
ralf_session_store="siteA=vellum-siteA.<zone>,siteB=vellum-siteB.<zone>"
sprout_chronos_callback_uri=sprout.<zone>
ralf_chronos_callback_uri=ralf.<zone>

3 changes: 2 additions & 1 deletion autogenerated_rst_docs/Troubleshooting_and_Recovery.rst
@@ -70,7 +70,8 @@
Vellum

Problems on Vellum may include:

- Failing to read or write to the Cassandra database (only relevant if
your deployment is using Homestead-Prov, Homer or Memento):

- Check that Cassandra is running (``sudo monit status``). If not,
check its ``/var/log/cassandra/*.log`` files.
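A minimal diagnostic sketch for the Cassandra case, assuming the
standard Clearwater monit packaging (exact log file names may vary):

::

sudo monit status           # is the Cassandra process running?
ls -l /var/log/cassandra/   # then inspect the most recent *.log files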
