This repository has been archived by the owner on Feb 27, 2020. It is now read-only.

Merge pull request #284 from Metaswitch/rogers
Update references due to astaire/rogers split
richardwhiuk committed Nov 1, 2017
2 parents fd0d594 + 9dc88df commit 637e414
Showing 4 changed files with 21 additions and 8 deletions.
19 changes: 16 additions & 3 deletions docs/Clearwater_Architecture.md
@@ -35,9 +35,22 @@ Dime nodes run Clearwater's Homestead and Ralf components.

#### Homestead (HSS Cache)

Homestead provides a web services interface to Sprout for retrieving authentication credentials and user profile information. It can either master the data (in which case it exposes a web services provisioning interface) or can pull the data from an IMS compliant HSS over the Cx interface. The Homestead nodes themselves are stateless - the mastered / cached subscriber data is all stored on Vellum (Cassandra for the mastered data, and Astaire/Memcached for the cached data).
Homestead provides a web services interface to Sprout for retrieving
authentication credentials and user profile information. It can either use a
local master of the data, provisioned by Homestead Prov, or it can pull the
data from an IMS compliant HSS over the Cx interface. The Homestead processes
themselves are stateless - the subscriber data is all stored on Vellum
(Cassandra for the mastered data, and Astaire/Rogers/Memcached for the cached
data).

In the IMS architecture, the HSS mirror function is considered to be part of the I-CSCF and S-CSCF components, so in Clearwater I-CSCF and S-CSCF function is implemented with a combination of Sprout and Dime clusters.
In the IMS architecture, the HSS mirror function is considered to be part of
the I-CSCF and S-CSCF components, so in Clearwater I-CSCF and S-CSCF function
is implemented with a combination of Sprout and Dime clusters.

#### Homestead Prov (Local Subscriber Store Provisioning API)

Homestead Prov exposes a web services provisioning interface that allows
subscriber data to be provisioned in Cassandra on Vellum.

#### Ralf (CTF)

@@ -49,7 +62,7 @@ As described above, Vellum is used to maintain all long-lived state in the deployment
- [Cassandra](http://cassandra.apache.org/). Cassandra is used by Homestead to store authentication credentials and profile information when an HSS is not in use, and is used by Homer to store MMTEL service settings. Vellum exposes Cassandra's Thrift API.
- [etcd](https://github.com/coreos/etcd). etcd is used by Vellum itself to share clustering information between Vellum nodes and by other nodes in the deployment for shared configuration.
- [Chronos](https://github.com/Metaswitch/chronos). Chronos is a distributed, redundant, reliable timer service developed by Clearwater. It is used by Sprout and Ralf nodes to enable timers to be run (e.g. for SIP Registration expiry) without pinning operations to a specific node (one node can set the timer and another act on it when it pops). Chronos is accessed via an HTTP API.
- [Memcached](https://memcached.org/) / [Astaire](https://github.com/Metaswitch/astaire). Vellum also runs a Memcached cluster fronted by Astaire. Astaire is a service developed by Clearwater that enabled more rapid scale up and scale down of memcached clusters. This cluster is used by Sprout for storing registration state, Ralf for storing session state and Homestead for storing cached subscriber data.
- [Memcached](https://memcached.org/) / [Astaire and Rogers](https://github.com/Metaswitch/astaire). Vellum also runs a Memcached cluster fronted by Rogers, with synchronization provided by Astaire. This cluster is used by Sprout for storing registration state, by Ralf for storing session state and by Homestead for storing cached subscriber data. Astaire is a service developed by Clearwater that enables more rapid scale up and scale down of memcached clusters. Rogers is a proxy that sits in front of a cluster of memcached instances to provide replication of data and topology hiding. Astaire and Rogers work together to ensure that all data is duplicated across multiple nodes, protecting against data loss during a memcached instance failure.
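The duplication that Astaire and Rogers provide can be illustrated with a minimal sketch. This is not the actual Rogers implementation - plain Python dicts stand in for memcached instances, and the class and method names are invented for illustration - but it shows the pattern the text describes: every write is copied to multiple instances, so a read still succeeds after one instance is lost.

```python
class ReplicatingProxy:
    """Illustrative stand-in for a replicating memcached proxy.

    Writes are duplicated to ``replication_factor`` instances; reads
    return the first surviving copy. Dicts model memcached instances.
    """

    def __init__(self, replicas, replication_factor=2):
        self.replicas = replicas
        self.replication_factor = replication_factor

    def _targets(self, key):
        # Pick replication_factor distinct instances for this key.
        start = hash(key) % len(self.replicas)
        return [self.replicas[(start + i) % len(self.replicas)]
                for i in range(self.replication_factor)]

    def set(self, key, value):
        # Duplicate the write across all target instances.
        for replica in self._targets(key):
            replica[key] = value

    def get(self, key):
        # Return the first copy found; survives loss of one replica.
        for replica in self._targets(key):
            if key in replica:
                return replica[key]
        return None


replicas = [{}, {}, {}]
proxy = ReplicatingProxy(replicas)
proxy.set("reg:sip:alice@example.com", "registered")

# Simulate one memcached instance failing by wiping it.
failed = next(r for r in replicas if "reg:sip:alice@example.com" in r)
failed.clear()

# The data is still available from the surviving replica.
print(proxy.get("reg:sip:alice@example.com"))
```

The real system layers more on top of this (consistent hashing, topology hiding, and Astaire's resynchronization during scale up/down), but the duplication-then-fallback shape is the core of the data-loss protection described above.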

### Homer (XDMS)

2 changes: 1 addition & 1 deletion docs/Clearwater_Configuration_Options_Reference.md
@@ -20,7 +20,7 @@ You should follow [this process](Modifying_Clearwater_settings.md) when changing
* Homer - `sudo service homer stop`
* Ellis - `sudo service ellis stop`
* Memento - `sudo service memento stop`
* Vellum - `sudo service astaire stop`
* Vellum - `sudo service astaire stop && sudo service rogers stop`

## Local Config

2 changes: 1 addition & 1 deletion docs/Clearwater_IP_Port_Usage.md
@@ -163,7 +163,7 @@ They also need the following ports open to all Sprout and Dime nodes:

TCP/7253

* Astaire:
* Rogers:

TCP/11311

6 changes: 3 additions & 3 deletions docs/Geographic_redundancy.md
@@ -16,10 +16,10 @@ Vellum has 3 databases, which support Geographic Redundancy differently:

* The Homestead-Prov, Homer and Memento databases are backed by Cassandra, which is aware of local and remote peers, so these are a single cluster split across the two geographic regions.
* Chronos is aware of local peers and the remote cluster, and handles replicating timers across the two sites itself.
* There is one memcached cluster per geographic region. Although memcached itself does not support the concept of local and remote peers, Vellum runs Astaire as a memcached proxy which allows Sprout and Dime nodes to build geographic redundancy on top - writing to both local and remote clusters, and reading from the local but falling back to the remote.
* There is one memcached cluster per geographic region. Although memcached itself does not support the concept of local and remote peers, Vellum runs Rogers as a memcached proxy which allows Sprout and Dime nodes to build geographic redundancy on top - writing to both local and remote clusters, and reading from the local but falling back to the remote.
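The geographically redundant access pattern described in the last bullet - write to both sites, read from the local site and fall back to the remote one - can be sketched as follows. The names here are hypothetical and dicts stand in for the per-site memcached clusters reached via Rogers; this is a sketch of the pattern, not Clearwater's actual code.

```python
class GeoRedundantStore:
    """Illustrative sketch of the GR pattern Sprout/Dime build on Rogers.

    Writes go to both the local and remote site's cluster; reads try
    the local cluster first and fall back to the remote one.
    """

    def __init__(self, local_cluster, remote_cluster):
        self.local = local_cluster
        self.remote = remote_cluster

    def write(self, key, value):
        # Write to both sites so either site can serve the data alone.
        self.local[key] = value
        self.remote[key] = value

    def read(self, key):
        # Prefer the local site; fall back to the remote site, e.g.
        # after local data loss or a local cluster outage.
        if key in self.local:
            return self.local[key]
        return self.remote.get(key)


site_a, site_b = {}, {}
store = GeoRedundantStore(local_cluster=site_a, remote_cluster=site_b)
store.write("session:1234", {"state": "active"})

site_a.clear()  # simulate losing the local memcached cluster

# The read is served from the remote site instead.
print(store.read("session:1234"))
```

This is why a single-site failure does not lose registration or session state: every write already exists in the surviving site, at the cost of one cross-site write per update.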

Sprout nodes use the local Vellum cluster for Chronos and both local and remote Vellum clusters for memcached (via Astaire). If the Sprout node includes Memento, then it also uses the local Vellum cluster for Cassandra.
Dime nodes use the local Vellum cluster for Chronos and both local and remote Vellum clusters for memcached (via Astaire). If Homestead-Prov is in use, then it also uses the local Vellum cluster for Cassandra.
Sprout nodes use the local Vellum cluster for Chronos and both local and remote Vellum clusters for memcached (via Rogers). If the Sprout node includes Memento, then it also uses the local Vellum cluster for Cassandra.
Dime nodes use the local Vellum cluster for Chronos and both local and remote Vellum clusters for memcached (via Rogers). If Homestead-Prov is in use, then it also uses the local Vellum cluster for Cassandra.

Communications between nodes in different sites should be secured - for example, if they go over the public internet rather than a private connection between datacenters, they should be encrypted and authenticated with (something like) IPsec.

