Update mentions of Citus Cloud (#874)
* Move cloud section lower

Include small deprecation notice on all cloud pages

* Remove infrastructure providers claim

* Remove articles promoting Cloud

* Replace cloud with azure on the main page

* FAQ items

* Remove cloud from enterprise notes

Later we can selectively add Hyperscale to these warnings. Or, even
better, restructure the docs pages to separate community and enterprise
features.

* Misc hyperscale substitutions
jonels-msft committed Aug 26, 2019
1 parent 3dbd060 commit d1eceda
Showing 26 changed files with 61 additions and 458 deletions.
8 changes: 4 additions & 4 deletions admin_guide/cluster_management.rst
@@ -99,7 +99,7 @@ The new node is available for shards of new distributed tables. Existing shards
Rebalance Shards without Downtime
---------------------------------

-If you want to move existing shards to a newly added worker, Citus Enterprise and Citus Cloud provide a :ref:`rebalance_table_shards` function to make it easier. This function will move the shards of a given table to distribute them evenly among the workers.
+If you want to move existing shards to a newly added worker, Citus Enterprise provides a :ref:`rebalance_table_shards` function to make it easier. This function will move the shards of a given table to distribute them evenly among the workers.

.. code-block:: postgresql
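
    -- Illustrative sketch only: 'github_events' is a hypothetical table name.
    -- rebalance_table_shards() moves the shards of the given table until they
    -- are spread evenly across the active worker nodes.
    SELECT rebalance_table_shards('github_events');
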
@@ -239,13 +239,13 @@ Tenant Isolation

.. note::

-Tenant isolation is a feature of **Citus Enterprise Edition** and :ref:`Citus Cloud <cloud_overview>` only.
+Tenant isolation is a feature of **Citus Enterprise Edition** only.

Citus places table rows into worker shards based on the hashed value of the rows' distribution column. Multiple distribution column values often fall into the same shard. In the Citus multi-tenant use case this means that tenants often share shards.

However, sharing shards can cause resource contention when tenants differ drastically in size. This is a common situation for systems with a large number of tenants -- we have observed that the size of tenant data tends to follow a Zipfian distribution as the number of tenants increases. This means there are a few very large tenants and many smaller ones. To improve resource allocation and make guarantees of tenant QoS, it is worthwhile to move large tenants to dedicated nodes.

-Citus Enterprise Edition and :ref:`Citus Cloud <cloud_overview>` provide the tools to isolate a tenant on a specific node. This happens in two phases: 1) isolating the tenant's data to a new dedicated shard, then 2) moving the shard to the desired node. To understand the process it helps to know precisely how rows of data are assigned to shards.
+Citus Enterprise Edition provides the tools to isolate a tenant on a specific node. This happens in two phases: 1) isolating the tenant's data to a new dedicated shard, then 2) moving the shard to the desired node. To understand the process it helps to know precisely how rows of data are assigned to shards.

Every shard is marked in Citus metadata with the range of hashed values it contains (more info in the reference for :ref:`pg_dist_shard <pg_dist_shard>`). The Citus UDF :code:`isolate_tenant_to_new_shard(table_name, tenant_id)` moves a tenant into a dedicated shard in three steps:

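As a rough illustration of how these pieces fit together, assuming a hypothetical distributed table named ``events`` and a tenant value of 42 (the metadata columns are those described in :ref:`pg_dist_shard <pg_dist_shard>`):

.. code-block:: postgresql

    -- Inspect the hash ranges currently assigned to each shard of the table.
    SELECT shardid, shardminvalue, shardmaxvalue
      FROM pg_dist_shard
     WHERE logicalrelid = 'events'::regclass;

    -- Phase 1: split tenant 42 out into a dedicated shard. The new shard can
    -- then be moved to its own node in phase 2.
    SELECT isolate_tenant_to_new_shard('events', 42);
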
@@ -312,7 +312,7 @@ Viewing Query Statistics

.. note::

-The citus_stat_statements view is a feature of **Citus Enterprise Edition** and :ref:`Citus Cloud <cloud_overview>` only.
+The citus_stat_statements view is a feature of **Citus Enterprise Edition** only.

When administering a Citus cluster it's useful to know what queries users are running, which nodes are involved, and which execution method Citus is using for each query. Citus records query statistics in a metadata view called :ref:`citus_stat_statements <citus_stat_statements>`, named analogously to Postgres' `pg_stat_statements <https://www.postgresql.org/docs/current/static/pgstatstatements.html>`_. Whereas pg_stat_statements stores info about query duration and I/O, citus_stat_statements stores info about Citus execution methods and shard partition keys (when applicable).
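
For instance, a quick look at the busiest statements and the tenants they touch might be sketched as follows (``partition_key`` and ``calls`` are columns listed in the :ref:`citus_stat_statements <citus_stat_statements>` reference):

.. code-block:: postgresql

    -- Most frequently executed statements, with the shard partition key
    -- (tenant) each one targeted, where applicable.
    SELECT query, partition_key, calls
      FROM citus_stat_statements
     ORDER BY calls DESC
     LIMIT 10;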

2 changes: 1 addition & 1 deletion arch/mx.rst
@@ -3,7 +3,7 @@
Citus MX
========

-Citus MX is a new version of Citus that adds the ability to use hash-distributed tables from any node in a Citus cluster, which allows you to scale out your query throughput by opening many connections across all the nodes. This is particularly useful for performing small reads and writes at a very high rate in a way that scales horizontally. Citus MX is currently available in Citus Enterprise Edition and on `Citus Cloud <https://www.citusdata.com/product/cloud>`_.
+Citus MX is a new version of Citus that adds the ability to use hash-distributed tables from any node in a Citus cluster, which allows you to scale out your query throughput by opening many connections across all the nodes. This is particularly useful for performing small reads and writes at a very high rate in a way that scales horizontally. Citus MX is currently available in Citus Enterprise Edition.

In the Citus MX architecture, all nodes are PostgreSQL servers running the Citus extension. One node acts as the coordinator and the others as data nodes; each node also has a hot standby that automatically takes over in case of failure. The coordinator is the authoritative source of metadata for the cluster, and data nodes store the actual data in shards. Distributed tables can only be created, altered, or dropped via the coordinator, but can be queried from any node. When making changes to a table (e.g. adding a column), the metadata for the distributed tables is propagated to the workers using PostgreSQL’s built-in 2PC mechanism and distributed locks. This ensures that the metadata is always consistent, so every node can run distributed queries reliably.
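
A minimal sketch of that workflow, assuming an MX cluster is already running and using illustrative table and column names:

.. code-block:: postgresql

    -- On the coordinator: create the table and distribute it. DDL must go
    -- through the coordinator.
    CREATE TABLE events (tenant_id bigint, payload jsonb);
    SELECT create_distributed_table('events', 'tenant_id');

    -- On any node: the distributed table can be read and written directly,
    -- spreading the query load across the whole cluster.
    INSERT INTO events VALUES (42, '{"action": "click"}');
    SELECT count(*) FROM events WHERE tenant_id = 42;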

107 changes: 0 additions & 107 deletions articles/heroku_addon.rst

This file was deleted.

2 changes: 0 additions & 2 deletions articles/index.rst
@@ -4,10 +4,8 @@ Related Articles
.. toctree::
:maxdepth: 1

-heroku_addon.rst
efficient_rollup.rst
hll_count_distinct.rst
scale_on_aws.rst
parallel_indexing.rst
aggregation.rst
outer_joins.rst
