diff --git a/Jenkinsfile b/Jenkinsfile index 5086bc077..9ff199aa7 100644 --- a/Jenkinsfile +++ b/Jenkinsfile @@ -31,7 +31,6 @@ pipeline { steps { container('python-312') { sh ''' -git fetch --unshallow python3 -m venv . . bin/activate pip3 install -r requirements.txt diff --git a/source/_includes/_admincli/docs-tables.rst b/source/_includes/_admincli/docs-tables.rst new file mode 100644 index 000000000..a8b3983d2 --- /dev/null +++ b/source/_includes/_admincli/docs-tables.rst @@ -0,0 +1,19 @@ +The following table shows the keys and their default values to +configure the maximum file size of documents that |docs| can manage +and open. + +.. _docs-sizeopt: + +.. card:: File sizes + + The following values can be modified via the |mesh| interface (see + Section :ref:`mesh-gui`) or via the CLI, using the commands + presented in the :ref:`previous section `. + + .. csv-table:: + :header: "Key name", "Default value" + :widths: 70, 30 + + "carbonio-docs-connector/max-file-size-in-mb/document", "50" + "carbonio-docs-connector/max-file-size-in-mb/presentation", "100" + "carbonio-docs-connector/max-file-size-in-mb/spreadsheet", "10" diff --git a/source/_includes/_admincli/files-tables.rst b/source/_includes/_admincli/files-tables.rst new file mode 100644 index 000000000..9d1385cf6 --- /dev/null +++ b/source/_includes/_admincli/files-tables.rst @@ -0,0 +1,25 @@ +Using the |mesh| :ref:`kv interface `, it is possible to +change a few |file| parameters, according to the following table. + +.. csv-table:: + :header: "Key name", "Default value" + :widths: 70, 30 + + "carbonio-files/max-number-of-versions", "30" + "carbonio-files/max-uploadable-size-in-mb", "50" + "carbonio-files/max-downloadable-size-in-mb", "unset" + +#. The maximum number of versions stored for each supported file type + (text and word processor documents, spreadsheets, presentations). + You can raise the default value of **30**, but keep in mind that + storing more versions requires more storage space. + +#. 
The maximum size of a document, in megabytes, that can be + uploaded. The default value is **50** megabytes, as reported in + the table above; larger files can not be uploaded. + +#. The maximum downloadable size of a document is by default **not + set**, meaning that every file can be downloaded. If a limit + (in megabytes) is set, trying to download a file larger than the + limit fails, and a message showing the current size limit is + displayed. diff --git a/source/_includes/_admincli/kv.rst b/source/_includes/_admincli/kv.rst new file mode 100644 index 000000000..4c8688a91 --- /dev/null +++ b/source/_includes/_admincli/kv.rst @@ -0,0 +1,27 @@ +Values can be changed from any Node by using the |mesh| kv +interface: you can access it with the :command:`consul` command from +the CLI. + +* To verify the current value of any key, use the command + + .. code:: console + + # consul kv get -token="$CONSUL_TOKEN_PATH" "$KEY" + +* To modify one of the values reported in the tables below, use the + command + + .. code:: console + + # consul kv put -token="$CONSUL_TOKEN_PATH" "$KEY" "$VALUE" + + Changes to these values are picked up immediately by the system, + without the need to restart any service. + +In the commands, ``$CONSUL_TOKEN_PATH`` is the |mesh| bootstrap token +stored on the **Directory Service server**, while ``$KEY`` and +``$VALUE`` are the *key name* and the *new value*, respectively, as +written in the tables. + +.. hint:: The |mesh| bootstrap token can be retrieved using the + procedure described in section :ref:`mesh-token`. 
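The get and put commands above combine naturally into a check-then-set pattern. The following sketch assumes it runs on a node where :command:`consul` is available and ``CONSUL_TOKEN_PATH`` is set as described above; the key name is taken from the tables, while the new value and the ``command -v`` guard are only illustrative.

```shell
# Sketch: change a kv value only when it differs from the desired one.
# KEY comes from the configuration tables; NEW_VALUE is an arbitrary example.
KEY="carbonio-files/max-uploadable-size-in-mb"
NEW_VALUE="100"

if command -v consul >/dev/null 2>&1; then
    current=$(consul kv get -token="$CONSUL_TOKEN_PATH" "$KEY")
    if [ "$current" != "$NEW_VALUE" ]; then
        # The new value is picked up immediately; no service restart is needed
        consul kv put -token="$CONSUL_TOKEN_PATH" "$KEY" "$NEW_VALUE"
    fi
else
    echo "consul not found: run this on a Carbonio node"
fi
```

The guard only makes the sketch safe to run on a machine without :command:`consul`; on a real node the two consul calls do all the work.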
diff --git a/source/_includes/_admincli/mesh/agentnoudp.rst b/source/_includes/_admincli/mesh/agentnoudp.rst new file mode 100644 index 000000000..838ad1f43 --- /dev/null +++ b/source/_includes/_admincli/mesh/agentnoudp.rst @@ -0,0 +1,31 @@ +There are situations in which a service-discover agent fails to +connect to the service-discover server or to other agents using +**UDP**, but is successful via **TCP**. When this happens, in the +:file:`syslog` log file, warning messages like the following ones are +recorded:: + + Mar 9 20:08:29 proxy01 service-discoverd[3189618]: 2025-03-09T20:08:29.578+0100 [WARN] agent.client.memberlist.lan: memberlist: Was able to connect to srv1.example.com over TCP but UDP probes failed, network may be misconfigured + + Mar 9 20:08:30 proxy01 service-discoverd[3189618]: 2025-03-09T20:08:30.579+0100 [WARN] agent.client.memberlist.lan: memberlist: Was able to connect to agent-mbox01.example.com over TCP but UDP probes failed, network may be misconfigured + + Mar 9 20:08:31 proxy01 service-discoverd[3189618]: 2025-03-09T20:08:31.580+0100 [WARN] agent.client.memberlist.lan: memberlist: Was able to connect to agent-files01.example.com over TCP but UDP probes failed, network may be misconfigured + +These messages show that **proxy01** (the agent that initiates the +communication) can not reach the agents of ``srv1``, +``agent-mbox01``, and ``agent-files01`` via UDP. Since the messages are marked as +``[WARN]``, i.e., warnings, the agents can still communicate via TCP +and service-discover keeps working, as shown also in the :ref:`mesh-gui`; +nevertheless, they are a symptom of a communication problem within the network. + +Possible reasons for the problem are: + +* A blocked (destination) UDP Port 8301 between the source agent, i.e., + the agent starting the communication, and the destination agents or + server. 
+ +* An unwanted SNAT rule of the agent's source IP address: if the UDP + connection is masqueraded with an IP address that is unknown to the + |mesh| destination agent or server, the connection fails. + +In both cases, to fix the problem it is necessary to inspect the +firewall rules and find the misconfigured or offending rule. diff --git a/source/_includes/_ts/mesh.rst b/source/_includes/_admincli/mesh/commands.rst similarity index 67% rename from source/_includes/_ts/mesh.rst rename to source/_includes/_admincli/mesh/commands.rst index 4b87436fc..08f451291 100644 --- a/source/_includes/_ts/mesh.rst +++ b/source/_includes/_admincli/mesh/commands.rst @@ -1,56 +1,8 @@ -|mesh| is one of the main components of |product|, and is based on -HashiCorp's `Consul `_. This -page is meant to provide some of the most used CLI commands to inspect -and fix any issues that may arise with the use of Consul. -It is possible to interact with Consul on any node of a cluster but -remember that the :command:`consul` operates by default on the current -node. To operate on a different node, you need to explicitly specify -it, for example this command show all services running on node with -#ID *7ea9631e* +.. _consul-cluster-ops: - .. code:: console - - # consul catalog services -node 7ea9631e - -.. warning:: Some of the commands listed on this page can be used to - or modify significantly or remove a service or a node from Consul, - thus potentially disrupting |mesh|. These commands are marked with - an icon: :octicon:`alert-fill;1em;sd-text-danger` Use them with - care! - -.. _ts-token: - -Retrieve Token -============== - -Whenever you want to use Consul, the first operation is to retrieve -the *bootstrap-token*, to allow connection and interaction with the -service. - -.. code:: console - - # service-discover bootstrap-token - -.. hint:: You need to provide the cluster credential password, which - is stored in :file:`/var/lib/service-discover/password`. 
- -Export the token, which is a string similar to *e5a4966f-a83e-689d-618d-08a0fe7e695b* - -.. code:: console - - # export CONSUL_HTTP_TOKEN=e5a4966f-a83e-689d-618d-08a0fe7e695b - -You can automate the export process by using the following one-liner - -.. code:: console - - # export CONSUL_HTTP_TOKEN=$(gpg -qdo - /etc/zextras/service-discover/cluster-credentials.tar.gpg | tar xOf - consul-acl-secret.json | jq .SecretID -r) - -.. _ts-consul-cluster: - -Common Cluster Operations -========================= +Cluster Commands +================ The following commands are used to inspect a cluster: @@ -77,10 +29,10 @@ The following commands are used to inspect a cluster: # consul force-leave agent1-example-com -.. _ts-consul-services: +.. _consul-services-ops: -Common Service Operations -========================= +Service Commands +================ These commands allow to retrieve a list of services registered to a Consul cluster and to manipulate them. @@ -136,7 +88,7 @@ Consul cluster and to manipulate them. the case for |product|), simply delete the file and reload the agent on all nodes. -.. _ts-consul-other: +.. _consul-other-ops: Other Commands ============== diff --git a/source/_includes/_admincli/mesh/findleader.rst b/source/_includes/_admincli/mesh/findleader.rst new file mode 100644 index 000000000..c2d5a3c5b --- /dev/null +++ b/source/_includes/_admincli/mesh/findleader.rst @@ -0,0 +1,19 @@ +To find which |mesh| node is currently the *leader node*, first get the +|mesh| token. + +.. include:: /_includes/_admincli/mesh/gettoken.rst + +Query the |mesh| service to retrieve the state of all its Nodes. The +*leader node* has the attribute *State* set to **leader**. + +.. code:: console + + # consul operator raft list-peers + +The output of the command will be similar to the following. 
In this +case, the leader node is **srv2-example-com**:: + + Node ID Address State Voter RaftProtocol + srv1-example-com 10092f88-53cc-6938-08d3-48d112b5b25e 10.174.166.116:8300 follower true 3 + srv2-example-com 04033e5a-5597-20ca-81ef-5cdad4f24581 10.174.166.117:8300 leader true 3 + srv3-example-com 0d325666-f792-2258-a351-f74c01249fb3 10.174.166.118:8300 follower true 3 diff --git a/source/_includes/_admincli/mesh/gettoken.rst b/source/_includes/_admincli/mesh/gettoken.rst new file mode 100644 index 000000000..7ed1800ae --- /dev/null +++ b/source/_includes/_admincli/mesh/gettoken.rst @@ -0,0 +1,26 @@ +The token is encrypted and stored in the file +:file:`/etc/zextras/service-discover/cluster-credentials.tar.gpg` and +can be retrieved with this command, which prints the token to the CLI + +.. code:: console + + # gpg -qdo - /etc/zextras/service-discover/cluster-credentials.tar.gpg | tar xOf - consul-acl-secret.json | jq .SecretID -r + +For simplicity, you can store the token in an environment variable as follows + +.. code:: console + + # export CONSUL_HTTP_TOKEN=$(gpg -qdo - /etc/zextras/service-discover/cluster-credentials.tar.gpg | tar xOf - consul-acl-secret.json | jq .SecretID -r) + +You can then check the token with the command + +.. code:: console + + # echo $CONSUL_HTTP_TOKEN + +The token will remain in the environment until you exit the CLI session, but +you can explicitly delete it using the command + +.. code:: console + + # unset CONSUL_HTTP_TOKEN diff --git a/source/_includes/_admincli/mesh/intro.rst b/source/_includes/_admincli/mesh/intro.rst new file mode 100644 index 000000000..a3898b8ca --- /dev/null +++ b/source/_includes/_admincli/mesh/intro.rst @@ -0,0 +1,21 @@ +|mesh| is one of the main components of |product|, and is based on +HashiCorp's `Consul `_. This +section provides some of the most used CLI commands to inspect +and fix any issues that may arise with the use of Consul. 
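As an example of combining these CLI commands with standard shell tools, the ``consul operator raft list-peers`` output shown earlier can be filtered down to the leader's node name; the sample data below is copied from that output, and on a live node you would pipe the real command into the same awk filter.

```shell
# Sample output of `consul operator raft list-peers`, as shown above
peers='Node              ID                                    Address              State     Voter  RaftProtocol
srv1-example-com  10092f88-53cc-6938-08d3-48d112b5b25e  10.174.166.116:8300  follower  true   3
srv2-example-com  04033e5a-5597-20ca-81ef-5cdad4f24581  10.174.166.117:8300  leader    true   3
srv3-example-com  0d325666-f792-2258-a351-f74c01249fb3  10.174.166.118:8300  follower  true   3'

# Column 4 is State: print column 1 (Node) of the row marked "leader"
leader=$(printf '%s\n' "$peers" | awk '$4 == "leader" { print $1 }')
echo "$leader"   # srv2-example-com
```

The same filter works unchanged when fed from the live command: `consul operator raft list-peers | awk '$4 == "leader" { print $1 }'`.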
+ +It is possible to interact with Consul on any node of a cluster, but +remember that :command:`consul` operates by default on the current +node. To operate on a different node, you need to specify it +explicitly; for example, this command shows all services running on +the node with ID *7ea9631e* + +.. code:: console + + # consul catalog services -node 7ea9631e + +.. warning:: Some of the commands in this section can be used to + significantly modify or remove a service or a node from Consul, + thus potentially disrupting |mesh|, so use them with care! These + commands are marked with an icon + :octicon:`alert-fill;1em;sd-text-danger` + diff --git a/source/_includes/_admincli/mesh/missingleader.rst b/source/_includes/_admincli/mesh/missingleader.rst new file mode 100644 index 000000000..df7368ad0 --- /dev/null +++ b/source/_includes/_admincli/mesh/missingleader.rst @@ -0,0 +1,90 @@ +When a |mesh| cluster fails and the election quorum is not met, you +may find a situation where no leader node exists and the following +error appears in the :file:`syslog` log file:: + + No cluster leader + + +In a case like this, it is possible to forcefully elect a node as the +new leader and restore the cluster's functionality by following this +procedure. + +First, choose one of the |mesh| cluster's nodes that you want to be +the new leader. We call this node **newleader** in the remainder of +this procedure. + +On all |mesh| nodes, except for *newleader*, stop the +:command:`service-discover` service + +.. code:: console + + # systemctl stop service-discover.service + + +On *newleader*, make a backup of the :file:`peers.json` file: + +.. code:: console + + # cp /var/lib/service-discover/data/raft/peers.json /root/peers.json.bak + +Then, retrieve the ``id`` of the consul server + +.. 
code:: console + + # cat /var/lib/service-discover/data/node-id + +The output will be a string like:: + + 61f22310-97de-0965-4958-321840df66b6 + + +Use this string to create a new +:file:`/var/lib/service-discover/data/raft/peers.json` with the +following content:: + + { + "id": "", + "address": "`). .. tab-item:: Ubuntu 24.04 :sync: ubu24 .. code:: console zextras$ zmcontrol status .. hint:: To see the status of a single service, use the new systemd commands that replace the :command:`zmcontrol` commands (see :ref:`systemd-targets`). .. tab-item:: RHEL 9 :sync: rhel9 .. code:: console zextras$ zmcontrol status .. hint:: To see the status of a single service, use the new systemd commands that replace the :command:`zmcontrol` commands (see :ref:`systemd-targets`). If any service appears in the output as not running, start it. diff --git a/source/_includes/_upgrade/first-part-cb.rst b/source/_includes/_upgrade/first-part-cb.rst deleted file mode 100644 index 4beb8502a..000000000 --- a/source/_includes/_upgrade/first-part-cb.rst +++ /dev/null @@ -1,47 +0,0 @@ -Remember to start the upgrade from the Node featuring the Directory -Server, then all the other Nodes in the same order of installation. - -.. grid:: 1 1 1 2 - :gutter: 3 - - .. grid-item-card:: Step 1. Clean package list - :columns: 12 12 6 6 - - Clean cached package list, metadata, and information. - - .. tab-set:: - - .. tab-item:: Ubuntu - :sync: ubuntu - - .. code:: console - - # apt clean - - .. tab-item:: RHEL - :sync: rhel - - .. code:: console - - # dnf clean all - - .. grid-item-card:: Step 2. Upgrade Node - :columns: 12 12 6 6 - - Update package list. - - .. tab-set:: - - .. tab-item:: Ubuntu - :sync: ubuntu - - .. code:: console - - # apt update - - .. tab-item:: RHEL - :sync: rhel - - .. 
code:: console - - # dnf check-update diff --git a/source/_includes/_upgrade/first-part-ce.rst b/source/_includes/_upgrade/first-part-ce.rst deleted file mode 100644 index 6900e1bc7..000000000 --- a/source/_includes/_upgrade/first-part-ce.rst +++ /dev/null @@ -1,48 +0,0 @@ -If you are on a Multi-Server, remember to start from the Node -featuring the Directory Server Component, then all the other Nodes in the same -order of installation. - -.. grid:: 1 1 1 2 - :gutter: 3 - - .. grid-item-card:: Step 1. Clean package list - :columns: 12 12 6 6 - - Clean cached package list, metadata, and information. - - .. tab-set:: - - .. tab-item:: Ubuntu - :sync: ubuntu - - .. code:: console - - # apt clean - - .. tab-item:: RHEL - :sync: rhel - - .. code:: console - - # dnf clean all - - .. grid-item-card:: Step 2. Upgrade Node - :columns: 12 12 6 6 - - Update package list. - - .. tab-set:: - - .. tab-item:: Ubuntu - :sync: ubuntu - - .. code:: console - - # apt update - - .. tab-item:: RHEL - :sync: rhel - - .. code:: console - - # dnf check-update diff --git a/source/_includes/_upgrade/first-part.rst b/source/_includes/_upgrade/first-part.rst new file mode 100644 index 000000000..ac4e18336 --- /dev/null +++ b/source/_includes/_upgrade/first-part.rst @@ -0,0 +1,72 @@ +.. grid:: 1 1 1 2 + :gutter: 3 + + .. grid-item-card:: Step 1. Clean package list + :columns: 12 12 6 6 + + Clean cached package list, metadata, and information. + + .. tab-set:: + + .. tab-item:: Ubuntu 22.04 + :sync: ubu22 + + .. code:: console + + # apt clean + + .. tab-item:: RHEL 8 + :sync: rhel8 + + .. code:: console + + # dnf clean all + + .. tab-item:: Ubuntu 24.04 + :sync: ubu24 + + .. code:: console + + # apt clean + + .. tab-item:: RHEL 9 + :sync: rhel9 + + .. code:: console + + # dnf clean all + + .. grid-item-card:: Step 2. Update list of packages + :columns: 12 12 6 6 + + Update package list. + + .. tab-set:: + + .. tab-item:: Ubuntu 22.04 + :sync: ubu22 + + .. code:: console + + # apt update + + .. 
tab-item:: RHEL 8 + :sync: rhel8 + + .. code:: console + + # dnf check-update + + .. tab-item:: Ubuntu 24.04 + :sync: ubu24 + + .. code:: console + + # apt update + + .. tab-item:: RHEL 9 + :sync: rhel9 + + .. code:: console + + # dnf check-update diff --git a/source/_includes/_upgrade/issue-pg.rst b/source/_includes/_upgrade/issue-pg.rst index aad06ad8e..010079111 100644 --- a/source/_includes/_upgrade/issue-pg.rst +++ b/source/_includes/_upgrade/issue-pg.rst @@ -1,6 +1,4 @@ - -During the upgrade of PostgreSQL, an error might be raised in case the -existent databases have been created with older version of **libc**:: +You may encounter this error during the upgrade to PostgreSQL 16 if the existing databases were originally created with PostgreSQL 12 using an older version of the ``libc`` library. This can happen either directly during the PostgreSQL upgrade, or later, after a seemingly successful upgrade, when you upgrade the operating system from Ubuntu 20.04 to Ubuntu 22.04:: 2024-03-19 12:28:14.209 UTC [909825] HINT: Rebuild all objects in this database that use the default collation and run ALTER DATABASE activesync REFRESH COLLATION VERSION, or build PostgreSQL with the right library version. 2024-03-19 12:28:19.669 UTC [909915] WARNING: database "abq" has a collation version mismatch diff --git a/source/_includes/_upgrade/package-broker.rst b/source/_includes/_upgrade/package-broker.rst index 3684d208e..b959271c6 100644 --- a/source/_includes/_upgrade/package-broker.rst +++ b/source/_includes/_upgrade/package-broker.rst @@ -1,84 +1,132 @@ .. _broker-pkg: -.. card:: Installation of package :file:`carbonio-message-broker` +If you are upgrading from version **24.9** or older, make sure +that the :file:`carbonio-message-broker` package is installed on +the :ref:`component-mesh-install` Node. 
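The package checks described here can be scripted in a distribution-agnostic way; this is only a sketch, and ``pkg_installed`` is our own helper name, not a Carbonio command.

```shell
# Sketch: report whether a package is installed, on either distribution family.
pkg_installed() {
    if command -v dpkg >/dev/null 2>&1; then
        dpkg -s "$1" >/dev/null 2>&1      # Ubuntu 22.04 / 24.04
    else
        rpm -q "$1" >/dev/null 2>&1       # RHEL 8 / 9
    fi
}

if pkg_installed carbonio-message-broker; then
    echo "carbonio-message-broker is installed"
else
    echo "carbonio-message-broker is NOT installed"
fi
```

Running the same helper on both the |wsc| Node and the Mesh & Directory Node tells you at a glance where the package currently lives.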
- If you are upgrading from **24.9** versions or older, make sure - that the :file:`carbonio-message-broker` package is installed on - the :ref:`component-mesh-install` Node. +This package was previously required only by the +:ref:`component-wsc-install` Component, on whose Node it was installed, +but it is now used by the whole |product|. - This situation was previously required by the - :ref:`component-wsc-install` Component, where it was installed, but - now is used by the whole |product|. +If you already installed |WSC|, remove the +:file:`carbonio-message-broker` package from the |wsc| Node, then +install it on the **Mesh & Directory Node**. - In case you already installed |WSC|, remove the - :file:`carbonio-message-broker` from the |wsc| Node, then - install it on the **Mesh & Directory Node**. +If you never installed |wsc|, make sure you install this package +on the **Mesh & Directory Node**. - If you never installed |wsc|, make sure you install this package - on the **Mesh & Directory Node**. +To verify whether the package is installed, execute the following +command on the |wsc| Node (if installed) and on the Mesh & Directory +Node. - To verify if the package is installed, execute the following - command on the |wsc| (if installed) and the Mesh & Directory - Node. +.. tab-set:: - .. tab-set:: + .. tab-item:: Ubuntu 22.04 + :sync: ubu22 - .. tab-item:: Ubuntu - :sync: ubuntu + .. code:: console - .. code:: console + # dpkg -l carbonio-message-broker - # dpkg -l carbonio-message-broker + The output should be (version may vary):: - The output should be (version may vary):: + ii carbonio-message-broker 0.2.0-1jammy amd64 Carbonio message broker - ii carbonio-message-broker 0.2.0-1jammy amd64 Carbonio message broker + .. tab-item:: RHEL 8 + :sync: rhel8 - .. tab-item:: RHEL - :sync: rhel + .. code:: console - .. 
code:: console - # rpm -q carbonio-message-broker + # rpm -q carbonio-message-broker - The output should be (version may vary):: + The output should be (version may vary):: - carbonio-message-broker-0.2.0-1.el8.x86_64 + carbonio-message-broker-0.2.0-1.el8.x86_64 - If the package is installed on the |wsc| Node (if you have it - installed), remove it. + .. tab-item:: Ubuntu 24.04 + :sync: ubu24 - .. tab-set:: + .. code:: console - .. tab-item:: Ubuntu - :sync: ubuntu + # dpkg -l carbonio-message-broker - .. code:: console + The output should be (version may vary):: - # apt remove carbonio-message-broker + ii carbonio-message-broker 0.2.0-1noble amd64 Carbonio message broker - .. tab-item:: RHEL - :sync: rhel + .. tab-item:: RHEL 9 + :sync: rhel9 - .. code:: console + .. code:: console - # dnf remove carbonio-message-broker + # rpm -q carbonio-message-broker - If the package is **not** installed on the Mesh & Directory - Node, install it manually: + The output should be (version may vary):: - .. tab-set:: + carbonio-message-broker-0.2.0-1.el9.x86_64 +If the package is installed on the |wsc| Node (if you have it +installed), remove it. - .. tab-item:: Ubuntu - :sync: ubuntu +.. tab-set:: - .. code:: console + .. tab-item:: Ubuntu 22.04 + :sync: ubu22 - # apt remove carbonio-message-broker + .. code:: console - .. tab-item:: RHEL - :sync: rhel + # apt remove carbonio-message-broker - .. code:: console + .. tab-item:: RHEL 8 + :sync: rhel8 - # dnf remove carbonio-message-broker + .. code:: console + + # dnf remove carbonio-message-broker + + .. tab-item:: Ubuntu 24.04 + :sync: ubu24 + + .. code:: console + + # apt remove carbonio-message-broker + + .. tab-item:: RHEL 9 + :sync: rhel9 + + .. code:: console + + # dnf remove carbonio-message-broker + +If the package is **not** installed on the Mesh & Directory +Node, install it manually: + +.. tab-set:: + + .. tab-item:: Ubuntu 22.04 + :sync: ubu22 + + .. code:: console + + # apt install carbonio-message-broker + + .. 
tab-item:: RHEL 8 + :sync: rhel8 + + .. code:: console + + # dnf install carbonio-message-broker + + .. tab-item:: Ubuntu 24.04 + :sync: ubu24 + + .. code:: console + + # apt install carbonio-message-broker + + .. tab-item:: RHEL 9 + :sync: rhel9 + + .. code:: console + + # dnf install carbonio-message-broker diff --git a/source/_includes/_upgrade/package-catalog.rst b/source/_includes/_upgrade/package-catalog.rst index 4f3aae0d5..38595bba0 100644 --- a/source/_includes/_upgrade/package-catalog.rst +++ b/source/_includes/_upgrade/package-catalog.rst @@ -3,15 +3,29 @@ On the Node featuring the **Proxy** Component, install package .. tab-set:: - .. tab-item:: Ubuntu - :sync: ubuntu + .. tab-item:: Ubuntu 22.04 + :sync: ubu22 .. code:: console # apt install carbonio-catalog - .. tab-item:: RHEL - :sync: rhel + .. tab-item:: RHEL 8 + :sync: rhel8 + + .. code:: console + + # dnf install carbonio-catalog + + .. tab-item:: Ubuntu 24.04 + :sync: ubu24 + + .. code:: console + + # apt install carbonio-catalog + + .. tab-item:: RHEL 9 + :sync: rhel9 .. code:: console diff --git a/source/_includes/_upgrade/package-dispatcher.rst b/source/_includes/_upgrade/package-dispatcher.rst new file mode 100644 index 000000000..e4957f7f8 --- /dev/null +++ b/source/_includes/_upgrade/package-dispatcher.rst @@ -0,0 +1,34 @@ + +.. tab-set:: + + .. tab-item:: Ubuntu 22.04 + :sync: ubu22 + + .. code:: console + + # apt remove carbonio-message-dispatcher + # apt install carbonio-message-dispatcher-ce + + .. tab-item:: RHEL 8 + :sync: rhel8 + + .. code:: console + + # dnf remove carbonio-message-dispatcher + # dnf install carbonio-message-dispatcher-ce + + .. tab-item:: Ubuntu 24.04 + :sync: ubu24 + + .. code:: console + + # apt remove carbonio-message-dispatcher + # apt install carbonio-message-dispatcher-ce + + .. tab-item:: RHEL 9 + :sync: rhel9 + + .. 
code:: console + + # dnf remove carbonio-message-dispatcher + # dnf install carbonio-message-dispatcher-ce diff --git a/source/_includes/_upgrade/package-storages.rst b/source/_includes/_upgrade/package-storages.rst index de009cacc..cec54f3fd 100644 --- a/source/_includes/_upgrade/package-storages.rst +++ b/source/_includes/_upgrade/package-storages.rst @@ -3,15 +3,29 @@ package ``carbonio-storages`` by executing command .. tab-set:: - .. tab-item:: Ubuntu - :sync: ubuntu + .. tab-item:: Ubuntu 22.04 + :sync: ubu22 .. code:: console # apt install carbonio-storages - .. tab-item:: RHEL - :sync: rhel + .. tab-item:: RHEL 8 + :sync: rhel8 + + .. code:: console + + # dnf install carbonio-storages + + .. tab-item:: Ubuntu 24.04 + :sync: ubu24 + + .. code:: console + + # apt install carbonio-storages + + .. tab-item:: RHEL 9 + :sync: rhel9 .. code:: console diff --git a/source/_includes/_upgrade/package-um.rst b/source/_includes/_upgrade/package-um.rst index ed3ad0536..f1284a9a7 100644 --- a/source/_includes/_upgrade/package-um.rst +++ b/source/_includes/_upgrade/package-um.rst @@ -4,37 +4,64 @@ install it on the *Mesh & Directory* Node, execute as the |ru| .. tab-set:: - .. tab-item:: Ubuntu - :sync: ubuntu + .. tab-item:: Ubuntu 22.04 + :sync: ubu22 .. code:: console # apt install carbonio-user-management - .. tab-item:: RHEL - :sync: rhel + .. tab-item:: RHEL 8 + :sync: rhel8 .. code:: console # dnf install carbonio-user-management -While the user management features works even if the package is -installed with both Components, we suggest that you remove it from the Node -featuring the Proxy Component: + .. tab-item:: Ubuntu 24.04 + :sync: ubu24 + + .. code:: console + + # apt install carbonio-user-management + + .. tab-item:: RHEL 9 + :sync: rhel9 + + .. 
code:: console + + # dnf install carbonio-user-management + +While the user management works even if the package is installed with +both Components, we suggest that you remove it from the Node featuring +the Proxy Component: .. tab-set:: - .. tab-item:: Ubuntu - :sync: ubuntu + .. tab-item:: Ubuntu 22.04 + :sync: ubu22 .. code:: console # apt remove carbonio-user-management - .. tab-item:: RHEL - :sync: rhel + .. tab-item:: RHEL 8 + :sync: rhel8 .. code:: console # dnf remove carbonio-user-management + .. tab-item:: Ubuntu 24.04 + :sync: ubu24 + + .. code:: console + + # apt remove carbonio-user-management + + .. tab-item:: RHEL 9 + :sync: rhel9 + + .. code:: console + + # dnf remove carbonio-user-management diff --git a/source/_includes/_upgrade/second-part-cb.rst b/source/_includes/_upgrade/second-part-cb.rst index 2dab090d9..dd34ff8fd 100644 --- a/source/_includes/_upgrade/second-part-cb.rst +++ b/source/_includes/_upgrade/second-part-cb.rst @@ -8,15 +8,29 @@ .. tab-set:: - .. tab-item:: Ubuntu - :sync: ubuntu + .. tab-item:: Ubuntu 22.04 + :sync: ubu22 .. code:: console # apt upgrade - .. tab-item:: RHEL - :sync: rhel + .. tab-item:: RHEL 8 + :sync: rhel8 + + .. code:: console + + # dnf upgrade --best --allowerasing + + .. tab-item:: Ubuntu 24.04 + :sync: ubu24 + + .. code:: console + + # apt upgrade + + .. tab-item:: RHEL 9 + :sync: rhel9 .. code:: console @@ -31,31 +45,46 @@ .. tab-set:: - .. tab-item:: Ubuntu - :sync: ubuntu + .. tab-item:: Ubuntu 22.04 + :sync: ubu22 + + .. code:: console + + # apt autoremove + + .. tab-item:: RHEL 8 + :sync: rhel8 + + .. code:: console + + # dnf autoremove + + .. tab-item:: Ubuntu 24.04 + :sync: ubu24 .. code:: console # apt autoremove - .. tab-item:: RHEL - :sync: rhel + .. tab-item:: RHEL 9 + :sync: rhel9 .. code:: console # dnf autoremove .. grid-item-card:: Step 6. Register upgraded packages to |mesh| - :columns: 12 12 6 6 + :columns: 6 6 6 6 .. code:: console # pending-setups -a .. grid-item-card:: Step 7. 
Reboot - :columns: 12 12 6 6 + :columns: 6 6 6 6 - Once the upgrade has completed successfully, run command: + Once the upgrade has completed successfully, make sure you + restart all services by running command: .. code:: console diff --git a/source/_includes/_upgrade/second-part-ce.rst b/source/_includes/_upgrade/second-part-ce.rst index 8c8927409..8878dc930 100644 --- a/source/_includes/_upgrade/second-part-ce.rst +++ b/source/_includes/_upgrade/second-part-ce.rst @@ -4,26 +4,40 @@ .. grid-item-card:: Step 4. Upgrade Node :columns: 12 12 12 12 - Update package list and install upgrades. + Install upgrades. .. tab-set:: - .. tab-item:: Ubuntu - :sync: ubuntu + .. tab-item:: Ubuntu 22.04 + :sync: ubu22 .. code:: console # apt upgrade - .. tab-item:: RHEL - :sync: rhel + .. tab-item:: RHEL 8 + :sync: rhel8 .. code:: console # dnf upgrade --best --allowerasing - + + .. tab-item:: Ubuntu 24.04 + :sync: ubu24 + + .. code:: console + + # apt upgrade + + .. tab-item:: RHEL 9 + :sync: rhel9 + + .. code:: console + + # dnf upgrade --best --allowerasing + .. grid-item-card:: Step 5. (Optional) Remove unused packages - :columns: 12 12 6 6 + :columns: 12 12 12 12 After the latest packages have been installed, you can remove unused packages still installed on your system. If unsure, skip @@ -31,22 +45,36 @@ .. tab-set:: - .. tab-item:: Ubuntu - :sync: ubuntu + .. tab-item:: Ubuntu 22.04 + :sync: ubu22 .. code:: console # apt autoremove - .. tab-item:: RHEL - :sync: rhel + .. tab-item:: RHEL 8 + :sync: rhel8 + + .. code:: console + + # dnf autoremove + + .. tab-item:: Ubuntu 24.04 + :sync: ubu24 + + .. code:: console + + # apt autoremove + + .. tab-item:: RHEL 9 + :sync: rhel9 .. code:: console # dnf autoremove .. grid-item-card:: Step 6. Register upgraded packages to |mesh| - :columns: 12 12 6 6 + :columns: 12 12 12 12 .. 
code:: console diff --git a/source/carbonio-ce/admincli/carboniodocs.rst b/source/carbonio-ce/admincli/carboniodocs.rst new file mode 100644 index 000000000..e262d659c --- /dev/null +++ b/source/carbonio-ce/admincli/carboniodocs.rst @@ -0,0 +1,31 @@ +.. _docs-file: + +|docs| and Files +================ + +This page contains a few tables that list the *Key names* and the +*default values* of some |docs| and |file| configuration options that +you can modify. These tables are also useful in case you want to +revert some values to their defaults after an unsatisfactory +change. + +.. _modify-kv: + +How to Modify Values +-------------------- + +.. include:: /_includes/_admincli/kv.rst + +.. _docs-opt: + +Docs Configuration Tables +------------------------- + +.. include:: /_includes/_admincli/docs-tables.rst + +.. _files-opt: + +Files Configuration Tables +-------------------------- + +.. include:: /_includes/_admincli/files-tables.rst diff --git a/source/carbonio-ce/admincli/mesh/commands.rst b/source/carbonio-ce/admincli/mesh/commands.rst new file mode 100644 index 000000000..9b29fb178 --- /dev/null +++ b/source/carbonio-ce/admincli/mesh/commands.rst @@ -0,0 +1,27 @@ +.. _mesh-ops: + +=================== + Common Operations +=================== + +This section shows a few important commands used when working with +|mesh| clusters and services. + +Whenever executing a command using the |mesh| interface, +:command:`consul`, the **bootstrap token** is required. Refer to +Section :ref:`mesh-token` to learn how to obtain it and how to deal +with it. + +.. index:: bootstrap token; retrieve +.. index:: Carbonio Mesh bootstrap token + +.. _mesh-token: + +Retrieve Bootstrap Token +======================== + +.. include:: /_includes/_admincli/mesh/gettoken.rst + +.. commands + +.. 
include:: /_includes/_admincli/mesh/commands.rst diff --git a/source/carbonio-ce/admincli/mesh/credentials.rst b/source/carbonio-ce/admincli/mesh/credentials.rst index d0e6f4a94..c07e6a749 100644 --- a/source/carbonio-ce/admincli/mesh/credentials.rst +++ b/source/carbonio-ce/admincli/mesh/credentials.rst @@ -70,7 +70,8 @@ On a Multi-Server, before starting the procedure it is necessary to identify the **Leader Node**, on which to carry out some preliminary tasks, then wipe the old secret, generate the new one, and finally set up the other nodes by copying the credentials on the remaining nodes -and restart the service. +and restart the service. Instructions to find the leader node can be found in Section +:ref:`mesh-find-leader`. Find Leader Node's IP Address ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/source/carbonio-ce/admincli/mesh/externalservices.rst b/source/carbonio-ce/admincli/mesh/externalservices.rst deleted file mode 120000 index a9ac985b1..000000000 --- a/source/carbonio-ce/admincli/mesh/externalservices.rst +++ /dev/null @@ -1 +0,0 @@ -../../../carbonio/admincli/mesh/externalservices.rst \ No newline at end of file diff --git a/source/carbonio-ce/admincli/mesh/leadernode.rst b/source/carbonio-ce/admincli/mesh/leadernode.rst new file mode 100644 index 000000000..83bff9107 --- /dev/null +++ b/source/carbonio-ce/admincli/mesh/leadernode.rst @@ -0,0 +1,18 @@ +========================== + Managing the Leader Node +========================== + +This section collects some useful how-tos that help in the management +of the Leader Node. + +.. _mesh-find-leader: + +Find the Leader Node +==================== + +.. include:: /_includes/_admincli/mesh/findleader.rst + +Missing Leader Node +=================== + +.. 
include:: /_includes/_admincli/mesh/missingleader.rst diff --git a/source/carbonio-ce/admincli/mesh/rejoin-cluster.rst b/source/carbonio-ce/admincli/mesh/rejoin-cluster.rst new file mode 100644 index 000000000..95775e781 --- /dev/null +++ b/source/carbonio-ce/admincli/mesh/rejoin-cluster.rst @@ -0,0 +1,46 @@ +======================= + Rejoin |mesh| Cluster +======================= + +When a member of a |mesh| cluster stays offline for longer than +the value of ``server_rejoin_age_max``, it will be unable to rejoin the +cluster and the following error appears in syslog:: + + refusing to rejoin cluster because server has been offline for more than the configured server_rejoin_age_max + +.. hint:: You can use the command :command:`journalctl -u + service-discover.service -f` to check the log and see new log + messages as they are produced, in real time. + +A typical scenario for this error is when the Node, on which the +member is installed, is restored from an old snapshot. + +A viable solution is quite easy and requires a few commands from +the CLI. + +First, delete the following file. + +.. code:: console + + # rm /var/lib/service-discover/data/server_metadata.json + +Then restart the ``service-discover`` daemon. + +.. code:: console + + # systemctl restart service-discover.service + +If the command is successful, this message appears in syslog:: + + Join cluster completed. + +To make sure that the |mesh| agent is synchronised with the other +members, issue the following commands. + +.. code:: console + + # consul members + +.. code:: console + + # consul catalog services diff --git a/source/carbonio-ce/admincli/toc.rst b/source/carbonio-ce/admincli/toc.rst index 71dcb72b7..969045c4e 100644 --- a/source/carbonio-ce/admincli/toc.rst +++ b/source/carbonio-ce/admincli/toc.rst @@ -15,6 +15,7 @@ mandatory. 
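The rejoin procedure above succeeds or fails with one of two distinct syslog messages, which can be told apart with a trivial grep. The sketch below is self-contained: the log excerpts are hard-coded sample strings, whereas on a real node they would come from `journalctl -u service-discover.service`.

```shell
# Self-contained sketch: classify captured syslog excerpts from the
# rejoin procedure. The excerpts are hard-coded sample strings.
refused='refusing to rejoin cluster because server has been offline for more than the configured server_rejoin_age_max'
joined='Join cluster completed.'

classify() {
    if printf '%s\n' "$1" | grep -q 'server_rejoin_age_max'; then
        echo 'still refusing to rejoin'
    elif printf '%s\n' "$1" | grep -q 'Join cluster completed'; then
        echo 'rejoin ok'
    else
        echo 'no rejoin message found'
    fi
}

classify "$refused"   # -> still refusing to rejoin
classify "$joined"    # -> rejoin ok
```

If the second message never appears after the restart, the deleted `server_metadata.json` step above is the first thing to re-check.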
:maxdepth: 1 ldap + carboniodocs management advancedadmin mesh diff --git a/source/carbonio-ce/install/components.rst b/source/carbonio-ce/architecture/components.rst similarity index 89% rename from source/carbonio-ce/install/components.rst rename to source/carbonio-ce/architecture/components.rst index aa7c244ac..6f2828766 100644 --- a/source/carbonio-ce/install/components.rst +++ b/source/carbonio-ce/architecture/components.rst @@ -34,17 +34,16 @@ consists of one or more packages. The Components of |product| =========================== -We can group the available |product| Components into 3 macro categories: - -* **Infrastructure Components**, which are mandatory in any |product| installation +This is the list of Components that make up a |product| installation. -* **Service Components** broaden the functionality provided by |product| +When installing the Components, be careful that: -This is the list of Components that make up a |product| installation. +* Postgres, Grafana, Zookeeper, and Kafka are third-party software that + are installed from their respective official repositories -.. note:: Postgres, Grafana, Zookeper, and Kafka are third-party - software that are installed from their respective official - repositories. +* On each Node, you need to install the package + ``service-discover-agent``, except on the Node on which + ``service-discover-server`` is installed .. grid:: 1 1 2 2 :gutter: 3 @@ -79,7 +78,6 @@ This is the list of Components that make up a |product| installation. * carbonio-tasks-db * carbonio-message-dispatcher-db * carbonio-ws-collaboration-db - * service-discover-agent .. grid-item-card:: Proxy :columns: 6 @@ -96,8 +94,6 @@ This is the list of Components that make up a |product| installation. * carbonio-tasks-ui * carbonio-ws-collaboration-ui * carbonio-files-public-folder-ui - * carbonio-search-ui - * service-discover-agent + * carbonio-catalog .. 
grid-item-card:: MTA AV/AS @@ -110,7 +106,6 @@ This is the list of Components that make up a |product| installation. Packages: * carbonio-mta - * service-discover-agent .. grid-item-card:: Mailstore & Provisioning :columns: 6 @@ -123,7 +118,6 @@ This is the list of Components that make up a |product| installation. * carbonio-appserver * carbonio-storages-ce - * service-discover-agent .. grid-item-card:: Files :columns: 6 @@ -135,7 +129,6 @@ This is the list of Components that make up a |product| installation. Packages: * carbonio-files-ce - * service-discover-agent .. grid-item-card:: Docs & Editor :columns: 6 @@ -148,7 +141,6 @@ This is the list of Components that make up a |product| installation. * carbonio-docs-connector-ce * carbonio-docs-editor - * service-discover-agent .. grid-item-card:: Preview :columns: 6 @@ -160,7 +152,6 @@ This is the list of Components that make up a |product| installation. Packages: * carbonio-preview-ce - * service-discover-agent .. grid-item-card:: Tasks :columns: 6 @@ -172,7 +163,6 @@ This is the list of Components that make up a |product| installation. Packages: * carbonio-tasks-ce - * service-discover-agent .. grid-item-card:: |wsc| :columns: 6 @@ -185,7 +175,6 @@ This is the list of Components that make up a |product| installation. * carbonio-message-dispatcher-ce * carbonio-ws-collaboration-ce - * service-discover-agent .. grid-item-card:: Video Server :columns: 6 @@ -197,7 +186,6 @@ This is the list of Components that make up a |product| installation. Packages: * carbonio-videoserver-ce - * service-discover-agent .. grid-item-card:: Monitoring :columns: 6 @@ -210,8 +198,6 @@ This is the list of Components that make up a |product| installation. * carbonio-prometheus * grafana - * service-discover-agent - .. 
_multiserver-installation: diff --git a/source/carbonio-ce/install/components/component-chats.rst b/source/carbonio-ce/architecture/components/component-chats.rst similarity index 100% rename from source/carbonio-ce/install/components/component-chats.rst rename to source/carbonio-ce/architecture/components/component-chats.rst diff --git a/source/carbonio-ce/install/components/component-db.rst b/source/carbonio-ce/architecture/components/component-db.rst similarity index 100% rename from source/carbonio-ce/install/components/component-db.rst rename to source/carbonio-ce/architecture/components/component-db.rst diff --git a/source/carbonio-ce/install/components/component-docs.rst b/source/carbonio-ce/architecture/components/component-docs.rst similarity index 100% rename from source/carbonio-ce/install/components/component-docs.rst rename to source/carbonio-ce/architecture/components/component-docs.rst diff --git a/source/carbonio-ce/install/components/component-files.rst b/source/carbonio-ce/architecture/components/component-files.rst similarity index 100% rename from source/carbonio-ce/install/components/component-files.rst rename to source/carbonio-ce/architecture/components/component-files.rst diff --git a/source/carbonio-ce/install/components/component-mailstore-provisioning.rst b/source/carbonio-ce/architecture/components/component-mailstore-provisioning.rst similarity index 100% rename from source/carbonio-ce/install/components/component-mailstore-provisioning.rst rename to source/carbonio-ce/architecture/components/component-mailstore-provisioning.rst diff --git a/source/carbonio-ce/install/components/component-mesh-ds.rst b/source/carbonio-ce/architecture/components/component-mesh-ds.rst similarity index 100% rename from source/carbonio-ce/install/components/component-mesh-ds.rst rename to source/carbonio-ce/architecture/components/component-mesh-ds.rst diff --git a/source/carbonio-ce/install/components/component-monit.rst 
b/source/carbonio-ce/architecture/components/component-monit.rst similarity index 100% rename from source/carbonio-ce/install/components/component-monit.rst rename to source/carbonio-ce/architecture/components/component-monit.rst diff --git a/source/carbonio-ce/install/components/component-mta.rst b/source/carbonio-ce/architecture/components/component-mta.rst similarity index 100% rename from source/carbonio-ce/install/components/component-mta.rst rename to source/carbonio-ce/architecture/components/component-mta.rst diff --git a/source/carbonio-ce/install/components/component-preview.rst b/source/carbonio-ce/architecture/components/component-preview.rst similarity index 66% rename from source/carbonio-ce/install/components/component-preview.rst rename to source/carbonio-ce/architecture/components/component-preview.rst index 3200f9ab2..910c7fc0a 100644 --- a/source/carbonio-ce/install/components/component-preview.rst +++ b/source/carbonio-ce/architecture/components/component-preview.rst @@ -35,16 +35,3 @@ Pending setups -------------- .. include:: /_includes/_installation/pset.rst - -.. _conf-memcached: - -Configure Memcached -------------------- - -.. include:: /_includes/_installation/_components/memcached.rst - -.. 
note:: In case you have multiple Mailstore & Provisioning or Proxy Nodes, - add all of them as a comma-separated list, for example:: - - nginx_lookup_server_full_path_urls = https://172.16.0.13:7072,https://172.16.0.23:7072 - memcached_server_full_path_urls = 172.16.0.12:11211,172.16.0.22:11211 diff --git a/source/carbonio-ce/install/components/component-proxy.rst b/source/carbonio-ce/architecture/components/component-proxy.rst similarity index 100% rename from source/carbonio-ce/install/components/component-proxy.rst rename to source/carbonio-ce/architecture/components/component-proxy.rst diff --git a/source/carbonio-ce/install/components/component-tasks.rst b/source/carbonio-ce/architecture/components/component-tasks.rst similarity index 100% rename from source/carbonio-ce/install/components/component-tasks.rst rename to source/carbonio-ce/architecture/components/component-tasks.rst diff --git a/source/carbonio-ce/install/components/component-vs.rst b/source/carbonio-ce/architecture/components/component-vs.rst similarity index 100% rename from source/carbonio-ce/install/components/component-vs.rst rename to source/carbonio-ce/architecture/components/component-vs.rst diff --git a/source/carbonio-ce/install/intro-systemd.rst b/source/carbonio-ce/architecture/intro-systemd.rst similarity index 100% rename from source/carbonio-ce/install/intro-systemd.rst rename to source/carbonio-ce/architecture/intro-systemd.rst diff --git a/source/carbonio-ce/install/systemd/adminguide.rst b/source/carbonio-ce/architecture/systemd/adminguide.rst similarity index 87% rename from source/carbonio-ce/install/systemd/adminguide.rst rename to source/carbonio-ce/architecture/systemd/adminguide.rst index de75dca21..765001f06 100644 --- a/source/carbonio-ce/install/systemd/adminguide.rst +++ b/source/carbonio-ce/architecture/systemd/adminguide.rst @@ -1,3 +1,5 @@ +.. 
_systemd-guide: + ``Systemd`` Usage Guide For Administrators ========================================== diff --git a/source/carbonio-ce/install/systemd/targets.rst b/source/carbonio-ce/architecture/systemd/targets.rst similarity index 84% rename from source/carbonio-ce/install/systemd/targets.rst rename to source/carbonio-ce/architecture/systemd/targets.rst index 159381dfa..cdba85e48 100644 --- a/source/carbonio-ce/install/systemd/targets.rst +++ b/source/carbonio-ce/architecture/systemd/targets.rst @@ -1,3 +1,5 @@ +.. _systemd-targets: + Carbonio ``Systemd`` Targets ============================ diff --git a/source/carbonio-ce/install/architecture.rst b/source/carbonio-ce/architecture/toc.rst similarity index 100% rename from source/carbonio-ce/install/architecture.rst rename to source/carbonio-ce/architecture/toc.rst diff --git a/source/carbonio-ce/conf.py b/source/carbonio-ce/conf.py index 9f8d34fc6..97b6be2ea 100644 --- a/source/carbonio-ce/conf.py +++ b/source/carbonio-ce/conf.py @@ -30,7 +30,7 @@ author = 'The Zextras Team' # The full version, including alpha/beta/rc tags -release = '25.6.0' +release = '25.9.0' version = release # -- General configuration --------------------------------------------------- diff --git a/source/carbonio-ce/index.rst b/source/carbonio-ce/index.rst index 26edd6e9f..dc8c8803c 100644 --- a/source/carbonio-ce/index.rst +++ b/source/carbonio-ce/index.rst @@ -48,14 +48,27 @@ The content is organised in multiple parts: upgrade/toc + .. grid-item-card:: Architecture + :columns: 12 12 6 6 + :class-title: sd-font-weight-bold sd-fs-4 + :link-type: doc + :link: architecture/toc + + Architecture of |product|, with description and installation of the Components + + .. toctree:: + :hidden: + + architecture/toc + .. 
grid-item-card:: Install :columns: 12 12 6 6 :class-title: sd-font-weight-bold sd-fs-4 :link-type: doc :link: install/toc - Information on |product| and its architecture, installation and - upgrade instructions, security tips + Information on |product| and its installation: requirements and some + ready-made installation scenarios .. toctree:: :hidden: diff --git a/source/carbonio-ce/install/requirements.rst b/source/carbonio-ce/install/requirements.rst index 7ed6a90fb..fa3ea3509 100644 --- a/source/carbonio-ce/install/requirements.rst +++ b/source/carbonio-ce/install/requirements.rst @@ -198,14 +198,6 @@ Furthermore, ports in Internal and External connections are grouped according to the Component that require them, so all ports listed in a table must be opened only on the Node on which the Component is installed. -.. card:: Outgoing Traffic - - Carbonio requires no specific ports to communicate with the - Internet (outgoing traffic), unless you want push notifications to - be sent to mobile devices. In this case, the Node installing the - Mailstore & Provisioning Component must be able to communicate with the - URL **https://notifications.zextras.com/firebase/** on port **443**. - .. _fw-external: External Connections @@ -286,7 +278,7 @@ corresponding Component is installed, for a proper communication among used by |mesh| for message broadcasting and membership management. -.. card:: Postgres Component +.. card:: Database Component .. 
csv-table:: :header: "Port", "Protocol", "Service" @@ -382,5 +374,5 @@ corresponding Component is installed, for a proper communication among :widths: 10 10 80 "prometheus", "TCP", "9090" - "prometheus SSH", "TCP", "9090" + "prometheus SSH", "TCP", "9999" diff --git a/source/carbonio-ce/install/scenarios/single-server-scenario.rst b/source/carbonio-ce/install/scenarios/single-server-scenario.rst index 1022fcc37..91fc26182 100644 --- a/source/carbonio-ce/install/scenarios/single-server-scenario.rst +++ b/source/carbonio-ce/install/scenarios/single-server-scenario.rst @@ -4,6 +4,18 @@ Single-Server Installation ============================ + +Architecture +============ + +The architecture of this scenario is depicted in the following diagram. + +.. _fig-single: + +.. figure:: /img/carbonio/scenario-single-server-CE.png + :width: 70% + :align: center + .. _single-install-auto: Automatic Script-based Installation @@ -165,6 +177,8 @@ Step 7: Setup |mesh| .. include:: /_includes/_installation/mesh.rst +.. include:: /_includes/_installation/pset.rst + .. _installation-step8: Step 8: Bootstrap |file| Databases @@ -174,7 +188,7 @@ Step 8: Bootstrap |file| Databases .. include:: /_includes/_installation/_steps/db-bootstrap-chats-ce.rst -.. include:: /_includes/_installation/_steps/chats-migration.rst +.. include:: /_includes/_installation/_steps/chats-migration-single-server-ce.rst .. include:: /_includes/_installation/complete.rst diff --git a/source/carbonio-ce/install/toc.rst b/source/carbonio-ce/install/toc.rst index d4855a601..6b9203e1c 100644 --- a/source/carbonio-ce/install/toc.rst +++ b/source/carbonio-ce/install/toc.rst @@ -14,9 +14,6 @@ section. .. 
toctree:: :maxdepth: 1 - architecture - intro-systemd requirements preliminary - components scenarios diff --git a/source/carbonio-ce/scripts/install_carbonio_ce_singleserver_rhel.sh b/source/carbonio-ce/scripts/install_carbonio_ce_singleserver_rhel.sh index c3a6f0c82..73d61f1f2 100644 --- a/source/carbonio-ce/scripts/install_carbonio_ce_singleserver_rhel.sh +++ b/source/carbonio-ce/scripts/install_carbonio_ce_singleserver_rhel.sh @@ -12,7 +12,7 @@ IP=$(hostname -i); echo "Carbonio will be installed on ${HOST}, using ${DOMAIN} as default domain and ${IP} as public IP" -echo "Selinux will be set to ENFORCE" +echo "Selinux will be set to PERMISSIVE" echo -e "SELINUX=permissive \nSELINUXTYPE=targeted \n" > /etc/selinux/config getenforce @@ -67,9 +67,9 @@ dnf install -y $PACKAGES pending-setups --execute-all PGPASSWORD=$POSTGRES_SECRET carbonio-message-dispatcher-db-bootstrap carbonio_adm 127.0.0.1 -PACKAGES="carbonio-message-dispatcher" +PACKAGES="carbonio-message-dispatcher-ce" dnf install -y $PACKAGES -PGPASSWORD=$POSTGRES_SECRET carbonio-message-dispatcher-migration carbonio_adm 127.0.0.1 20000 +PGPASSWORD=$POSTGRES_SECRET carbonio-message-dispatcher-migration carbonio_adm 127.0.0.1 dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm diff --git a/source/carbonio-ce/scripts/install_carbonio_ce_singleserver_ubuntu.sh b/source/carbonio-ce/scripts/install_carbonio_ce_singleserver_ubuntu.sh index 59835d690..6846a69aa 100644 --- a/source/carbonio-ce/scripts/install_carbonio_ce_singleserver_ubuntu.sh +++ b/source/carbonio-ce/scripts/install_carbonio_ce_singleserver_ubuntu.sh @@ -60,9 +60,9 @@ apt install -y $PACKAGES pending-setups --execute-all PGPASSWORD=$POSTGRES_SECRET carbonio-message-dispatcher-db-bootstrap carbonio_adm 127.0.0.1 -PACKAGES="carbonio-message-dispatcher" +PACKAGES="carbonio-message-dispatcher-ce" apt install -y $PACKAGES -PGPASSWORD=$POSTGRES_SECRET carbonio-message-dispatcher-migration carbonio_adm 127.0.0.1 20000 
+PGPASSWORD=$POSTGRES_SECRET carbonio-message-dispatcher-migration carbonio_adm 127.0.0.1 PACKAGES="carbonio-videoserver-ce" apt install -y $PACKAGES diff --git a/source/carbonio-ce/troubleshooting/mesh.rst b/source/carbonio-ce/troubleshooting/mesh.rst deleted file mode 100644 index 599e4793a..000000000 --- a/source/carbonio-ce/troubleshooting/mesh.rst +++ /dev/null @@ -1,8 +0,0 @@ - -.. _ts-mesh: - -======================== - |mesh| -======================== - -.. include:: /_includes/_ts/mesh.rst diff --git a/source/carbonio-ce/troubleshooting/toc.rst b/source/carbonio-ce/troubleshooting/toc.rst index 05ceb562c..5b7b727ec 100644 --- a/source/carbonio-ce/troubleshooting/toc.rst +++ b/source/carbonio-ce/troubleshooting/toc.rst @@ -28,19 +28,6 @@ of them. upgrade - .. grid-item-card:: |mesh| - :columns: 12 12 6 6 - :class-title: sd-font-weight-bold sd-fs-4 - :link-type: ref - :link: ts-mesh - - |mesh| problems - - .. toctree:: - :hidden: - - mesh - .. grid-item-card:: Directory Server :columns: 12 12 6 6 :class-title: sd-font-weight-bold sd-fs-4 diff --git a/source/carbonio-ce/upgrade/upgrade-older.rst b/source/carbonio-ce/upgrade/upgrade-older.rst index ab57a6589..63a53a55e 100644 --- a/source/carbonio-ce/upgrade/upgrade-older.rst +++ b/source/carbonio-ce/upgrade/upgrade-older.rst @@ -66,7 +66,11 @@ Upgrade |product| .. include:: /_includes/_upgrade/ds.rst -.. include:: /_includes/_upgrade/first-part-ce.rst +If you are on a Multi-Server, remember to start from the Node +featuring the Directory Server Component, then all the other Nodes in the same +order of installation. + +.. include:: /_includes/_upgrade/first-part.rst .. grid:: 1 1 1 2 :gutter: 3 @@ -99,11 +103,8 @@ Upgrade |product| package ``carbonio-message-dispatcher`` and install ``carbonio-message-dispatcher-ce``. - .. code:: console - - # apt remove carbonio-message-dispatcher - # apt install carbonio-message-dispatcher-ce - + .. include:: /_includes/_upgrade/package-dispatcher.rst + .. 
include:: /_includes/_upgrade/second-part-ce.rst Other Upgrades diff --git a/source/carbonio-ce/upgrade/upgrade.rst b/source/carbonio-ce/upgrade/upgrade.rst index d0bcb3c75..7b560ebb0 100644 --- a/source/carbonio-ce/upgrade/upgrade.rst +++ b/source/carbonio-ce/upgrade/upgrade.rst @@ -75,8 +75,11 @@ Upgrade Nodes .. include:: /_includes/_upgrade/ds.rst +If you are on a Multi-Server, remember to start from the Node +featuring the Directory Server Component, then all the other Nodes in the same +order of installation. -.. include:: /_includes/_upgrade/first-part-ce.rst +.. include:: /_includes/_upgrade/first-part.rst .. grid:: 1 1 1 2 :gutter: 3 diff --git a/source/carbonio/admincli/activereplica.rst b/source/carbonio/admincli/activereplica.rst index 1e48ec5e1..bbcee8304 100644 --- a/source/carbonio/admincli/activereplica.rst +++ b/source/carbonio/admincli/activereplica.rst @@ -1,8 +1,8 @@ .. _activereplica: -=============== - |carbonio| HA -=============== +================= + |carbonio| |ur| +================= The |product| architecture is mostly based on services that make nodes *stateless*, redundant, and clustered *by design*. The only @@ -10,20 +10,19 @@ The |product| architecture is mostly based on services that make nodes it plays in storing metadata, binary blobs, and connection cache. While this situation could represent a *single point of failure*, a -replica mechanism |product| can be added, that drastically increases +replication mechanism can be added to |product|, which drastically increases the availability of the Mailstore service. How it works ============ -**Active Replica** is the foundation of the |ha| mechanism described -above, which is an account-based, real-time replication mechanism that -allows |product| to keep multiple instances of a mailbox within -different Mailstores. 
+|ur| is the foundation of the mechanism described above, which is an +account-based, real-time replication mechanism that allows |product| +to keep multiple instances of a mailbox within different Mailstores. -The Replica part is in charge of encoding and transmitting all the +The |ur| part is in charge of encoding and transmitting all the transactions of the account to an :ref:`event-streaming queue -`. Once processed by the Replica, the events are +`. Once processed by the |ur|, the events are consumed by one agent, or even by multiple agents, in the destination Mailstore. *Active* means that the destination Mailstores are **active Nodes**, reducing the need for dedicated resources that store @@ -31,20 +30,20 @@ the passive node of the clusters. This also improves the overall performance of the promotion stage, since the service is already up and running. -Active Replica Requirements -=========================== +|ur| Requirements +================= There are **two requirements** to satisfy to be able to install the -Active Replica. +|ur|. -#. The |product| subscription must include the HA module. The HA is +#. The |product| subscription must include the |ur| module. The |ur| is licensed “for enabled accounts”. The license can be verified with command .. code:: console zextras$ carbonio core getLicenseInfo | grep -e ZxHA -e ha_basic -A2 - ZxHA + ZxHA quantity 1000 licensed true -- @@ -55,13 +54,12 @@ Active Replica. #. All the primary volumes of the mailbox **must be configured** as :ref:`Centralized Storage `. -Enabling Active Replica -======================= +Enabling |ur| +============= -To enable Active Replica you need to configure the endpoints of all -the streamer nodes, using either their IPs or FQDNs, which are -supposed to expose port **9092** reachable from each of the other -Mailstores. 
+To enable |ur| you need to configure the endpoints of all the streamer +nodes, using either their IPs or FQDNs, which are supposed to expose +port **9092** reachable from each of the other Mailstores. .. card:: Example @@ -87,52 +85,54 @@ To verify that the settings have been applied and the service operates correctly, you can use the commands presented in section :ref:`ar-ts` below. -Active Replica Usage -==================== +|ur| Usage +========== A number of CLI commands can be used to carry out routine operations -with the Active Replica: :ref:`initialise `, :ref:`monitor +with the |ur|: :ref:`initialise `, :ref:`monitor `, :ref:`promote `, and :ref:`delete ` a -Replica. +|ur|. Limitations of the Commands -~~~~~~~~~~~~~~~~~~~~~~~~~~~ +--------------------------- The command presented in this section **do not support**: * regular expressions in the account name: ``john.doe@example.com`` is supported, while ``john*@example.com`` or ``?ohn@example.com`` are not -* distribution lists +* distribution lists .. _ar-init: -Replica Initialisation +|ur| Initialisation ---------------------- To replicate a mailbox to another Mailstore you can use the :command:`setAccountDestination` command, which needs as parameters * the destination Mailstore's FDQN (e.g., *mailstore1.example.com*) -* the priority of the nodes. This information can be used in case the same - account has been replicated more than once, to identify the first to - be used. A lower value means a higher priority (e.g., a Replica with - value *10* has a higher priority than Replicas with values *11*, - *20*, or *100*) + +* the priority of the nodes. This information can be used in case the + same account has been replicated more than once, to identify the + first to be used. A lower value means a higher priority (e.g., a + |ur| with value *10* has a higher priority than |ur|\s with values + *11*, *20*, or *100*) + * the account to replicate. 
Multiple accounts are also available, either comma separated on the command line or from an input file, with one account per line. In the remainder, we call this file :file:`/tmp/accounts`, which consists of two lines: - + | john.doe@example.com | jane.doe@example.com Example of valid commands are: .. code:: console - - zextras$ carbonio ha setAccountDestination mailstore1.example.com 10 accounts user1@customer.tld,user2@customer.tld + + zextras$ carbonio ha setAccountDestination mailstore1.example.com 10 accounts user1@customer.tld,user2@customer.tld .. code:: console @@ -140,41 +140,41 @@ Example of valid commands are: zextras$ carbonio ha setAccountDestination mailstore1.example.com 10 input_file /tmp/accounts The Global Administrator will receive a notification as soon as the -replica initialisation is completed. +|ur| initialisation is completed. .. _ar-monit: -Replica Monitoring +|ur| Monitoring ------------------ -To monitor the status of a replica, you can use the +To monitor the status of a |ur|, you can use the :command:`getAccountStatus` command and refine the output by providing either of the following parameters: * ``mailHost``, to verify the status of all the replicated accounts active in the *source mailstore* - + * ``replicaServer``, to verify the status of all the accounts replicated on a *specific mailstore* - + * ``accounts``, to limit the list to a (comma separated) subset of *accounts* - + * ``domains``, to limit the list to all the replicated accounts of one ore more (comma separated) domains - + * ``accountStatus``, to list only accounts with active or paused replica on the *source Mailstore* - + * ``replicaStatus``, to list only accounts with available or unavailable replica on the *destination Mailstore* - + Without any parameter, the command will show the status of all the -accounts configured for the Replica. For each account, the output +accounts configured for the |ur|. For each account, the output reports: -.. code:: - +.. 
code:: + accountId eg. 9e94f5e0-8e0d-4f61-93aa-00747ac3dba6 accountName eg. user@demo.zextras.io accountMailHost eg. mbox1.demo.zextras.io @@ -186,7 +186,7 @@ reports: Then, for each replica: .. code:: - + replicas accountId eg. 9e94f5e0-8e0d-4f61-93aa-00747ac3dba6 itemId value of highest itemId in the local MariaDB (on the replica) @@ -199,11 +199,11 @@ Then, for each replica: .. _ar-promo: -Replica Promotion ------------------ +|ur| Promotion +-------------- -The architecture of Active Replica allows for a quick promotion of a -replica at any time. Indeed, since all the metadata are synchronously +The architecture of |ur| allows for a quick promotion of a replica +Node at any time. Indeed, since all the metadata are synchronously replicated in the event queue and the blobs are stored in the centralised volume, the Administrator can trigger the promotion even if the source Mailstore is offline (e.g., the Mailstore is in @@ -214,10 +214,10 @@ To promote an account, Administrators can use the :command:`promoteAccounts` command and refine the output by providing either of the following parameters: -* ``accounts``, to promote one or more (comma separated) accounts, - using the first replica (lowest priority) +* ``accounts``, to promote one or more (comma separated) accounts, + using the first |ur| (lowest priority) * ``input_file``, to promote accounts for a file (one per line), using - the first replica (lowest priority) + the first |ur| (lowest priority) * ``source_mail_host``, to promote all the accounts hosted by a specific Mailstore @@ -230,23 +230,23 @@ Example of valid commands are: zextras$ carbonio ha promoteAccounts accounts alice.doe@example.com,bob.doe@example.com * Promote accounts stored in a file - + .. code:: console zextras$ carbonio ha promoteAccounts input_file /tmp/accounts * Promote all accounts on a mailstore - + .. 
code:: console zextras$ carbonio ha promoteAccounts source_mail_host mbox1.example.com -Global Admin will receive a notification as soon as the replica promotion is completed. - +Global Admin will receive a notification as soon as the |ur| promotion is completed. + .. _ar-del: -Replica Deletion ----------------- +|ur| Deletion +------------- The Administrator can delete the replicated metadata anytime, using the :command:`removeAccountDestination` command, by providing either @@ -257,4 +257,3 @@ of the following parameters: * ``accounts``, also multiple (comma separated) accounts or an input file (with multiple accounts, one per line), to specify which account metadata must be deleted - diff --git a/source/carbonio/admincli/administration/changehostname.rst b/source/carbonio/admincli/administration/changehostname.rst index 9d6b0e94f..4d6f104b8 100644 --- a/source/carbonio/admincli/administration/changehostname.rst +++ b/source/carbonio/admincli/administration/changehostname.rst @@ -136,10 +136,10 @@ Tasks for the Directory Replica Component If your infrastructure features a **Directory Replica**, you need to carry out these tasks **on all Nodes**. Depending if you change -hostname on the *Mesh & Directory* Node only or also on the Replica, +hostname on the *Mesh & Directory* Node only or also on the Directory Replica, the commands to execute slightly differ. -#. If you change hostname **only the Mesh & Directory**, execute +#. If you change hostname **only on the Mesh & Directory**, execute .. code:: console @@ -167,10 +167,10 @@ In both cases, when you executed the commands, restart all services data integrity * ``zimbraLdapURL`` ensures that the system can still authenticate - users and access LDAP data even if one of the replicas is down. + users and access LDAP (|ds|) data even if one of the Directory + Replicas is down. - In a Single-Server |product| setup, these values are typically the - same. 
In a Multi-Server setup with Directory Replica, they will differ, with ``zimbraLdapURL`` listing all replicas and ``zimbraLdapMasterURL`` pointing only to the master. diff --git a/source/carbonio/admincli/administration/changeip.rst b/source/carbonio/admincli/administration/changeip.rst index 7d2eb7922..553f8f171 100644 --- a/source/carbonio/admincli/administration/changeip.rst +++ b/source/carbonio/admincli/administration/changeip.rst @@ -42,8 +42,8 @@ configuration, keeping in mind that: follow the instructions in both Sections, :ref:`ip-change-net` and :ref:`ip-change-mta` -In all cases, execute also the tasks listed in Sections -:ref:`ip-change-pv` and :ref:`ip-change-vs`. +In all cases, execute also the tasks listed in Section +:ref:`ip-change-vs`. .. _ip-change-net: @@ -118,38 +118,38 @@ Finally, Restart |product| zextras$ zmcontrol restart +.. + .. _ip-change-pv: -.. _ip-change-pv: + Modify Preview Component Configuration + -------------------------------------- -Modify Preview Component Configuration --------------------------------------- + Edit file :file:`/etc/carbonio/preview/config.ini` and replace the + values of variables **nginx_lookup_servers_full_path_urls** and + **memcached_server_full_path_urls** with the new IP address + (**192.168.10.50**) ones. -Edit file :file:`/etc/carbonio/preview/config.ini` and replace the -values of variables **nginx_lookup_servers_full_path_urls** and -**memcached_server_full_path_urls** with the new IP address -(**192.168.10.50**) ones. + .. code-block:: ini -.. code-block:: ini - - nginx_lookup_server_full_path_urls = https://192.168.10.50:7072 - memcached_server_full_path_urls = 192.168.10.50:11211 + nginx_lookup_server_full_path_urls = https://192.168.10.50:7072 + memcached_server_full_path_urls = 192.168.10.50:11211 -In case you have multiple Proxy Nodes, add the IP addresses of all -Proxy Nodes as a comma-separated list, for example (assuming -**192.168.10.51** is the second Proxy Node's IP). 
+ In case you have multiple Proxy Nodes, add the IP addresses of all + Proxy Nodes as a comma-separated list, for example (assuming + **192.168.10.51** is the second Proxy Node's IP). -.. note:: In case you have a Multi-Server infrastructure, replace the - 192.168.10.50 IP address in the snippets below with the correct IP - addresses, corresponding to the Proxy Node's IP address(es). + .. note:: In case you have a Multi-Server infrastructure, replace the + 192.168.10.50 IP address in the snippets below with the correct IP + addresses, corresponding to the Proxy Node's IP address(es). -.. code-block:: ini + .. code-block:: ini - nginx_lookup_server_full_path_urls = https://192.168.10.50:7072,https://192.168.10.51:7072 - memcached_server_full_path_urls = 192.168.10.50:11211,192.168.10.51:11211 + nginx_lookup_server_full_path_urls = https://192.168.10.50:7072,https://192.168.10.51:7072 + memcached_server_full_path_urls = 192.168.10.50:11211,192.168.10.51:11211 -.. seealso:: + .. seealso:: - More information in Section :ref:`conf-memcached` + More information in Section :ref:`conf-memcached` .. _ip-change-vs: diff --git a/source/carbonio/admincli/advancedadmin.rst b/source/carbonio/admincli/advancedadmin.rst index c7045c3cc..d8c26c587 100644 --- a/source/carbonio/admincli/advancedadmin.rst +++ b/source/carbonio/admincli/advancedadmin.rst @@ -312,3 +312,152 @@ command, as the ``postgres`` user This command removes dead tuples (rows) to reduce the space used and keep database performances at an optimal level. + +Trust Self-Signed Certificates +------------------------------ + +This guide explains how to configure |product| to trust either a +*self-signed certificate* or a certificate *signed by an internal +Certificate Authority (CA)* when connecting to a remote backend +endpoint (e.g., S3-compatible storage or LDAP databases) protected by +self-signed certificates. 
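Step 1 of the procedure below requires the CA certificate in PEM format but shows no conversion command. As a minimal sketch (all file names are placeholders, and ``openssl`` is assumed to be available), a DER-encoded ``.cer``/``.crt`` file can be converted like this:

```shell
# Placeholder example: generate a throwaway CA certificate, encode it
# as DER (the usual binary .cer/.crt encoding), then convert it back
# to the PEM format required by Step 1.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key \
    -out ca-orig.pem -days 1 -subj "/CN=Example Internal CA"
openssl x509 -in ca-orig.pem -outform der -out ca.cer  # PEM -> DER
openssl x509 -inform der -in ca.cer -out ca.pem        # DER -> PEM
# PEM is plain text; the first line is the certificate header
head -n 1 ca.pem   # -> -----BEGIN CERTIFICATE-----
```

In a real deployment you would start directly from the ``.cer``/``.crt`` file provided by your internal CA, so only the last conversion step is needed.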
+ +For these connections to be successful and to avoid warnings and +communication errors, it is mandatory to import the root or +intermediate CA into: + +#. The Operating System’s trust store, to allow system tools to trust + the certificate + +#. The Jetty keystore of |product|, to allow internal services, like + ``mailboxd``, to establish secure TLS connections without warnings + or failures + +To achieve these results, carry out this procedure on all Nodes that +should access the backend. For example, if the remote endpoint is a +Storage, carry out the procedure on **all Nodes** installing the +*Mailstore & Provisioning* Component. + +.. card:: Preliminaries + + Before carrying out the procedure, please pay attention to the + following points. + + - **Commands**. All commands must be executed as the |ru| + - **Certificate file extension**. Ensure the certificate file has + extension ``.crt`` on Ubuntu systems + - **Certificate file permissions**. The certificate file must be + readable by the |zu| + - **Services restart**. The last step of the procedure requires to + restart |carbonio| services, otherwise the new configuration + **will not** be used + +.. rubric:: Step 1. Obtain the CA Certificate + +Ensure your CA certificate is in PEM format (we will call it +``ca.pem``): if it is in a ``.crt`` or ``.cer`` format, convert it to +PEM format. + +.. rubric:: Step 2. Import the CA Certificate into the OS + +This step ensures that all OS-level tools and libraries (e.g., ``curl``, +``wget``, backup utilities) can trust the endpoint. + +.. tab-set:: + + .. tab-item:: Ubuntu 22.04 + :sync: ubu22 + + The file must have a ``.crt`` extension. + + .. code:: console + + # cp ca.pem /usr/local/share/ca-certificates/ca.crt + # update-ca-certificates + + .. tab-item:: Ubuntu 24.04 + :sync: ubu24 + + The file must have a ``.crt`` extension. + + .. code:: console + + # cp ca.pem /usr/local/share/ca-certificates/ca.crt + # update-ca-certificates + + .. 
tab-item:: RHEL 8 + :sync: rhel8 + + .. code:: console + + # cp ca.pem /etc/pki/ca-trust/source/anchors/ + # update-ca-trust + + .. tab-item:: RHEL 9 + :sync: rhel9 + + .. code:: console + + # cp ca.pem /etc/pki/ca-trust/source/anchors/ + # update-ca-trust + +.. rubric:: Step 3. Import the CA Certificate into |product| + +This step is mandatory to ensure that |product|’s internal Java-based +services (Jetty) trust the certificate. + +.. code:: console + + # chown zextras:zextras ca.pem + # /opt/zextras/bin/zmcertmgr addcacert ca.pem + +If successful, the output will confirm that the certificate was +added to the keystore. + +.. rubric:: Step 4. Restart the services. + +Restart |product| services to apply the changes. + +.. tab-set:: + + .. tab-item:: Ubuntu 22.04 + :sync: ubu22 + + As the |zu| + + .. code:: console + + zextras$ zmcontrol restart + + .. tab-item:: Ubuntu 24.04 + :sync: ubu24 + + As the |ru| + + .. code:: console + + # systemctl restart carbonio-directory-server.target + # systemctl restart carbonio-appserver.target + # systemctl restart carbonio-mta.target + # systemctl restart carbonio-proxy.target + + .. tab-item:: RHEL 8 + :sync: rhel8 + + As the |zu| + + .. code:: console + + zextras$ zmcontrol restart + + .. tab-item:: RHEL 9 + :sync: rhel9 + + As the |ru| + + .. 
code:: console + + # systemctl restart carbonio-directory-server.target + # systemctl restart carbonio-appserver.target + # systemctl restart carbonio-mta.target + # systemctl restart carbonio-proxy.target diff --git a/source/carbonio/admincli/backup/advancedbackup.rst b/source/carbonio/admincli/backup/advancedbackup.rst index 35b749190..e82332593 100644 --- a/source/carbonio/admincli/backup/advancedbackup.rst +++ b/source/carbonio/admincli/backup/advancedbackup.rst @@ -59,7 +59,7 @@ good practices we can suggest, including the following: problems as soon as they appear - Carefully plan your updates and migrations - + - Consider implementing redundancy to replicate the services provided by |product| diff --git a/source/carbonio/admincli/backup/backuptasks.rst b/source/carbonio/admincli/backup/backuptasks.rst index e100e711b..48cd39872 100644 --- a/source/carbonio/admincli/backup/backuptasks.rst +++ b/source/carbonio/admincli/backup/backuptasks.rst @@ -152,7 +152,7 @@ restore: - Node settings, i.e., the configuration of each Node - Global settings of |product| product -- Any customizations made to the software (Postfix, Jetty, etc…​) +- Any customizations made to the software (Postfix, Jetty, etc) For every item managed by |product|, every variation in its associated metadata is recorded and saved, allowing its restore at a @@ -211,13 +211,13 @@ is enabled in the |adminui|. .. warning:: If none of the two Scan Operations is active, no backup is created! -SmartScan runs at a fixed time—​that can be configured—​on a daily basis -and is not deferred. This implies that, if for any reason (like e.g., -the server is turned off, or |carbonio| is not running), SmartScan -does **not run**, it will **not run** until the next day. You may -however configure the Backup to run the SmartScan every time -|carbonio| is restarted (although this is discouraged), or you may -manually run SmartScan to compensate for the missing run. 
+SmartScan runs at a fixed time (that can be customised) ​on a daily +basis and is not deferred. This implies that, if for any reason (like +e.g., the server is turned off, or |carbonio| is not running), +SmartScan does **not run**, it will **not run** until the next +day. You may however configure the Backup to run the SmartScan every +time |carbonio| is restarted (although this is discouraged), or you +may manually run SmartScan to compensate for the missing run. .. note:: Make sure that SmartScan is always running whenever you want to make any backup or restore operations, otherwise they will not @@ -689,7 +689,7 @@ Thanks to the Realtime Scanner, it is possible to recover any item at any point in time. The Realtime Scanner reads all the events of the Mailstore & -Provisioning in almost real-time, then it replicates the same +Provisioning in almost real-time, then it repeats the same operations on its own data structure, creating items or updating their metadata. No information is ever overwritten in the backup, so every item has its own complete history. @@ -840,14 +840,7 @@ A *legal hold* is a functionality that allows to preserve and protect electronic data (for example e-mails and documents) for potential use in legal proceedings or investigations. -In the context of |product|, the legal hold is a mechanism that allows -to preserve an existent account in a state that can not be -modified. This means that, as soon as an account is put in a legal -hold state, nobody can access it and no change can be made to any -items, folders, documents, or metadata. Moreover, an infinite -retention time is set on the account that will override any other -retention time defined and a *Restore on New Account* can be carried -out by the Administrator for any need. +In the context of Carbonio, the Legal Hold is a mechanism that ensures the integrity and retention of backup data for a selected account. 
When an account is placed under Legal Hold, the account itself remains fully operational: users can continue accessing and modifying it as usual. However, all backup states are preserved indefinitely, overriding any standard retention policies. This guarantees that any data present in the backup, including items that may later be modified or deleted in the live account, remains available. Additionally, administrators can perform a Restore on New Account to recreate the mailbox in its entirety, including all messages, even those that were subsequently deleted by the user.

|product| makes available a set of CLI commands to manage the legal hold status of the accounts: :command:`carbonio backup legalHold {get

@@ -1076,9 +1069,9 @@ Backup on External Storage

 As described in section :ref:`backup-architecture`, |backup| is
 composed of metadata and blobs (compressed and deduplicated), saved by
-default on the same folder—or mounted volume—specified in the *Backup
-Path*. The real-time backup requires that the Backup Path be fast
-enough to avoid queuing operations and/or risk data loss.
+default on the same folder (or mounted volume) specified in the
+*Backup Path*. The real-time backup requires that the Backup Path be
+fast enough to avoid queuing operations and/or risk data loss.

 However, S3 buckets, NFS shares, and other storage mounted using Fuse
 can be very slow and might not be suited as storage mounted on the
diff --git a/source/carbonio/admincli/backup/restorestrategies.rst b/source/carbonio/admincli/backup/restorestrategies.rst
index 061de632b..88ffc6fe3 100644
--- a/source/carbonio/admincli/backup/restorestrategies.rst
+++ b/source/carbonio/admincli/backup/restorestrategies.rst
@@ -25,8 +25,8 @@ restore, and the purpose. 
Some examples of these scenarios are: In all these cases, data in a mailbox can be recovered and, depending on the destination of the recovered data, restore strategies are -grouped in **two** categories: recovery on the same server—​or same -infrastructure—​and recovery on a different infrastructure. +grouped in **two** categories: recovery on the same server or +infrastructure ​and recovery on a different infrastructure. Same infrastructure restore These strategies are meant to be used when you need to restore only @@ -308,7 +308,7 @@ called ``unknown_XX``. 2. Suppose you have a folder called **Conference 2021**, then delete all of the item it contains and rename it to **Conference 2022**. You - later—​on 15th of November 2021—​carry out an Undelete Restore on the + later (e.g., on 15th of November 2021) ​carry out an Undelete Restore on the mailbox. All of the items and content will be restored under folder **Conference 2021** and tagged as ``undelete_15_11_21``. @@ -373,7 +373,7 @@ External Restore The External Restore allow to import backups that were produced on a different infrastructure, which is useful for setting up a test environment that resembles the production environment, and for advanced -tasks like migration—​of accounts or of whole domains—​or disaster +tasks like migration, ​of accounts or of whole domains, ​or disaster recovery. Moreover, it is the only strategy for which the source server and the destination server could **not** be the same. diff --git a/source/carbonio/admincli/carbonioauth.rst b/source/carbonio/admincli/carbonioauth.rst index ef257c8d0..243cbf25e 100644 --- a/source/carbonio/admincli/carbonioauth.rst +++ b/source/carbonio/admincli/carbonioauth.rst @@ -1,4 +1,3 @@ - .. _carbonio_auth: ============ @@ -47,10 +46,10 @@ Supported Authentication Methods .. 
grid-item-card:: Self Service Credentials Management :columns: 12 12 6 6 - Self-service credential management allows every user to create new - passwords and QR codes for third-parties—​for example team members, - personal assistants—​accessing her/his email account and |Carbonio| - Applications from mobile devices. + Self-service credential management allows every user to create + new passwords and QR codes for third-parties, for example team + members or personal assistants, ​accessing their e-mail account + and |Carbonio| Applications from mobile devices. QR Codes in particular can be used to access Mobile Apps, currently |team| and |file|. diff --git a/source/carbonio/admincli/carboniodocs.rst b/source/carbonio/admincli/carboniodocs.rst index 655ba6310..e262d659c 100644 --- a/source/carbonio/admincli/carboniodocs.rst +++ b/source/carbonio/admincli/carboniodocs.rst @@ -14,81 +14,18 @@ change. How to Modify Values -------------------- -Values can be changed by using, from any Node, the |mesh| kv -interface: you can access it using the :command:`consul` command from -the CLI. - -* To verify the current value of any key, use command - - .. code:: console - - # consul kv get -token="$CONSUL_TOKEN_PATH" "$KEY" - -* To modify one of the values reported in the tables below, use - command - - .. code:: console - - # consul kv put -token="$CONSUL_TOKEN_PATH" "$KEY" "$VALUE" - - When changing any of this values, they are immediately picked up by - the system, without the need to restart any services. - -In the commands, ``$CONSUL_TOKEN_PATH`` is the |mesh| secret stored on -the **Directory Service server**, while ``$KEY`` and ``$VALUE`` are -the *key name* and the *new value*, respectively, as written in the -tables. - -.. hint:: The |mesh| token can be retrieved using the procedure - described in section :ref:`ts-token`. +.. include:: /_includes/_admincli/kv.rst .. 
_docs-opt: -Docs Configuration tables +Docs Configuration Tables ------------------------- -The following table shows the keys and their default values to -configure the maximum file size of documents that |docs| can manage -and open. - -.. _docs-sizeopt: - -.. card:: File sizes - - The following values can be modified via the |mesh| interface (see - Section :ref:`mesh-gui`) or via the CLI, using the commands - presented in the :ref:`previous section `. - - .. csv-table:: - :header: "Key name", "Default value" - :widths: 70, 30 - - "carbonio-docs-connector/max-file-size-in-mb/document", "50" - "carbonio-docs-connector/max-file-size-in-mb/presentation", "100" - "carbonio-docs-connector/max-file-size-in-mb/spreadsheet", "10" +.. include:: /_includes/_admincli/docs-tables.rst .. _files-opt: -Files Configuration -------------------- - -The following table shows how to modify the maximum number of versions -for each document stored in |file|. - -.. _files-max-versions: - -.. card:: Maximum number of versions - - Using the |mesh| :ref:`kv interface `, it is possible - to change the maximum number of versions stored for each supported - file (text and word processor documents, spreadsheets, - presentations). - - .. csv-table:: - :header: "Key name", "Default value" - :widths: 70, 30 - - "carbonio-files/max-number-of-versions", "30" +Files Configuration Tables +-------------------------- - You can raise the default **30** number, but keep in mind that this - implies that you need more storage to keep all versions. +.. include:: /_includes/_admincli/files-tables.rst diff --git a/source/carbonio/admincli/mesh.rst b/source/carbonio/admincli/mesh.rst index 3cc0d9497..69756fa02 100644 --- a/source/carbonio/admincli/mesh.rst +++ b/source/carbonio/admincli/mesh.rst @@ -1,12 +1,10 @@ -.. SPDX-FileCopyrightText: 2022 Zextras -.. -.. 
SPDX-License-Identifier: CC-BY-NC-SA-4.0

===================
Working with |mesh|
===================

-This section contains advanced topics about |mesh|.
+.. include:: /_includes/_admincli/mesh/intro.rst
+

.. toctree::
   :maxdepth: 1

diff --git a/source/carbonio/admincli/mesh/agent.rst b/source/carbonio/admincli/mesh/agent.rst
new file mode 100644
index 000000000..cf8d82168
--- /dev/null
+++ b/source/carbonio/admincli/mesh/agent.rst
@@ -0,0 +1,11 @@
+=================
+ Managing Agents
+=================
+
+In this section we present some procedures to fix issues related to
+|mesh| agents.
+
+No UDP Connections
+==================
+
+.. include:: /_includes/_admincli/mesh/agentnoudp.rst
diff --git a/source/carbonio/admincli/mesh/commands.rst b/source/carbonio/admincli/mesh/commands.rst
new file mode 100644
index 000000000..aabb9c84c
--- /dev/null
+++ b/source/carbonio/admincli/mesh/commands.rst
@@ -0,0 +1,57 @@
+.. _mesh-ops:
+
+===================
+ Common Operations
+===================
+
+This section shows a few important commands used when working with
+|mesh| clusters and services.
+
+Whenever executing a command using the |mesh| interface,
+:command:`consul`, the **bootstrap token** is required. Refer to
+Section :ref:`mesh-token` to learn how to obtain it and how to deal
+with it.
+
+.. index:: bootstrap token; retrieve
+.. index:: Carbonio Mesh bootstrap token
+
+.. _mesh-token:
+
+Retrieve Bootstrap Token
+========================
+
+.. include:: /_includes/_admincli/mesh/gettoken.rst
+
+.. commands
+
+.. include:: /_includes/_admincli/mesh/commands.rst
+
+.. temporarily left here, to be moved in Scenario RWUMR when it will
+   be reviewed
+
+.. _ar-ts:
+
+
+Active Replica
+==============
+
+When you set up :ref:`activereplica`, the following commands can prove
+useful to verify the status of the service.
+
+.. rubric:: Verify Configuration
+
+.. code:: console
+
+   zextras$ carbonio config get global brokers
+
+.. rubric:: Verify Endpoint Availability
+
+.. 
code:: console
+
+   zextras$ carbonio ha test 10.0.10.11:9092,10.0.10.12:9092,10.0.10.13:9092
+
+.. rubric:: Restart the HA service
+
+.. code:: console
+
+   zextras$ carbonio ha doRestartService module
diff --git a/source/carbonio/admincli/mesh/credentials.rst b/source/carbonio/admincli/mesh/credentials.rst
index d792e2536..2e6cf2999 100644
--- a/source/carbonio/admincli/mesh/credentials.rst
+++ b/source/carbonio/admincli/mesh/credentials.rst
@@ -45,14 +45,12 @@ company), it is necessary to :ref:`mesh-reset`.

Reset |mesh| Credentials
------------------------

-On a Multi-Server, before starting the procedure it is necessary to
-identify the **Leader Node**, on which to carry out some preliminary
-tasks, then wipe the old secret, generate the new one, and finally set
-up the other nodes by copying the credentials on the remaining nodes
-and restart the service.
-
-.. include:: /_includes/_admincli/mesh/leaderip.rst
-
+Before starting the procedure it is necessary to identify the **leader
+node**, on which to carry out some preliminary tasks, then wipe the
+old token, generate the new one, and finally set up the other nodes
+by copying the credentials on the remaining nodes and restart the
+service. Instructions to find the leader node can be found in Section
+:ref:`mesh-find-leader`.

Wipe Old Credentials
~~~~~~~~~~~~~~~~~~~~

@@ -60,8 +58,10 @@ Please take into account that the |mesh| service will be **offline**
for the whole duration of the procedure.

-Before starting the procedure, we need to know important
-information. Log in to Leader Node and execute command
+Before starting the procedure, we need to gather an important
+piece of information. Log in to the **leader node** (see Section
+:ref:`mesh-find-leader` to identify it) and execute
+command

.. 
include:: /_includes/_admincli/mesh-credentials-index.rst diff --git a/source/carbonio/admincli/mesh/externalservices.rst b/source/carbonio/admincli/mesh/externalservices.rst deleted file mode 100644 index ce99347a0..000000000 --- a/source/carbonio/admincli/mesh/externalservices.rst +++ /dev/null @@ -1,446 +0,0 @@ -.. SPDX-FileCopyrightText: 2022 Zextras -.. -.. SPDX-License-Identifier: CC-BY-NC-SA-4.0 - -.. _mesh-external-services: - -Integration of External Services --------------------------------- - -A typical example of external service integration is a cluster -interacting with a database instance hosted by a third-party service -provider. To deploy in |product| situations like this one, |mesh| is -used. - -Scenario and Requirements -~~~~~~~~~~~~~~~~~~~~~~~~~ - -Our sample scenario consists of a |product| Multi-Server installation -which includes: - -* One or more |file| Nodes - -* One node in the cluster (possibly different from the |file| Nodes) - elected as **terminating gateway** - -* A PostgreSQL database, which is used by |file|, which is either - - * A server outside the |product| infrastructure - * Hosted remotely by a third-party provider - - .. note:: We will refer to this node as *database node* in the - remainder of this guide. - -.. _fig-mesh-scenario: - -.. figure:: /img/carbonio/external.png - :width: 80% - - The sample scenario used, with two |file| nodes and a database - hosted remotely. - -.. topic:: Terminating Gateway - - In ``consul`` terminology, a **terminating gateway** is a cluster - node that takes the responsibility to communicate with an external - resource. All services running on the cluster that need to access - this resource will contact the terminating gateway, which will - forward the request and send back the output received by the - resource. The services do not need to know anything about the - resource: they just contact the terminating gateway and wait for - the response. 
- - Each terminating gateway is responsible for one service only, in - case of multiple services need to access external resources, you - need to spawn multiple instances of a terminating gateway. - -The setup requires to access the command line on the terminating -gateway to configure it, because the process requires manual file -editing and running commands, although some commands towards the end -of the procedure requires to access the *database node*. - -.. hint:: It is highly suggest to use the |mesh| Administration - Interface to better keep track of the configuration and - changes. Please check :ref:`mesh-gui` for directions on how to - configure it and reach it. - -Finally, keep the **cluster credential password** at hand, because it -is required for token generation. - -Let's now start with the procedure, in which we first set up |mesh|, -then install |file|. - -Security and Setup -~~~~~~~~~~~~~~~~~~ - -The initial setup requires to complete a few steps. - -.. note:: All commands must be executed on the node elected as - **terminating gateway**, unless stated differently. - -#. Create a dedicated **user** - - .. code:: console - - # groupadd -r 'carbonio-gateway' - # useradd -r -M -g 'carbonio-gateway' -s /sbin/nologin 'carbonio-gateway' - -#. Define **policies**. It is necessary to make |mesh| aware of the - services to be routed, which in our scenario is the database for - |file|, :bdg:`carbonio-files-db`. - - First, create a directory that will store all the configuration. - - .. code:: console - - # mkdir -p /etc/carbonio/gateway/service-discover/ - - Then edit file - :file:`/etc/carbonio/gateway/service-discover/policies.json` and - paste in it this content. - - .. 
code:: json - - { - "key_prefix": [ - { - "carbonio-gateway/": { - "policy": "read" - } - } - ], - "node_prefix": [ - { - "": { - "policy": "read" - } - } - ], - "service": [ - { - "carbonio-gateway": { - "policy": "write" - }, - "carbonio-files-db": { - "policy": "write" - } - } - ] - } - - Finally, let ``consul`` pick up the new policy. - - .. code:: console - - # consul acl policy create -name "carbonio-gateway-policy" -description "Policy for carbonio-gateway" -rules @/etc/carbonio/gateway/service-discover/policies.json - -#. Export a new **bootstrap token**, which is the one that allows to - execute ``consul`` commands and access its APIs. To extract the - bootstrap token, execute the following command and then type the - **cluster credential password**. - - .. code:: console - - # export CONSUL_HTTP_TOKEN=$(service-discover bootstrap-token --setup) - -#. Generate a new **token**, which is associated to the policy and - will be the only one needed to communicate with the external - database. - - .. code:: console - - # consul acl token create -format json -policy-name carbonio-gateway-policy -description "Token for carbonio-gateway" | jq -r '.SecretID' > /etc/carbonio/gateway/service-discover/token - - # chown carbonio-gateway:carbonio-gateway -R /etc/carbonio/gateway - -Definition of the External Service -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -To operate properly, the terminating gateway must be aware of the -exact location of the remote service, therefore we define both the -external service and how the terminating gateway can reach it and -allow |file| nodes access to it. - -There is yet no CLI command for this, but we can use the APIs for this -purpose. Create file -:file:`/etc/carbonio/gateway/service-discover/carbonio-files-db-external.json` -with content - -.. 
code:: json - - { - "Address": "external-database.example.com", - "Node": "external-files-db-node", - "NodeMeta": { - "external-node": "true", - "external-probe": "true" - }, - "Service": { - "ID": "carbonio-gateway", - "Port": 5432, - "Service": "carbonio-files-db" - } - } - -.. note:: Replace the value of **Address** with the actual URL of the - external service. - -Then, execute a ``curl`` request to register the external service. - -.. code:: console - - # curl --request PUT --header "X-Consul-Token: ${CONSUL_HTTP_TOKEN}" --data @carbonio-files-db-external.json http://localhost:8500/v1/catalog/register - -Services Routing -~~~~~~~~~~~~~~~~ - -Now that the terminating gateway and the service have been defined and -registered, it is time to let |mesh| know the list of the services -that can use the gateway. - -To do so, place in file -:file:`/etc/carbonio/gateway/service-discover/gateway-config.hcl` the -following content, which defines a carbonio-gateway as a terminating -gateway for the ``carbonio-files-db`` service. - -.. code:: yaml - - Kind = "terminating-gateway" - Name = "carbonio-gateway" - Services = [ - { - Name = "carbonio-files-db" - #CAFile = "/etc/carbonio/external-db-ca.pem" - #SNI = "external-db.local" - } - ] - -There are two commented entries in the above file: they are optional and may -not be specified at all in the configuration. - -**CAFile** - A specific SSL certificate for the service. This is usually not - necessary, unless some very specific and complex scenario is set - up. Indeed, it is ``consul`` that take charge of encrypting all the - traffic among the nodes and with the external resources: services - and clients contact ``consul`` on **localhost**, so it is safe that - they talk in plain text with it. Data received from ``consul`` on - localhost are immediately SSL-encrypted, before leaving the node. - -**SNI** - The Server Name Indication is an additional layer of security on - top of TLS, used to prevent name mismatch. 
In the common case that - a single web server hosts many domains each with its own SSL - certificate, whenever a client request is received, it may not be - yet known by the web server which is the exact domain the client is - trying to access, because the HTTPS TSL/SSL handshake takes place - before the client send the actual HTTP request for the domain. This - may cause the client to receive the wrong certificate and possibly - terminate the secure connection. Using a SNI avoids this problem, - because it allows to send the domain name right in the SSL/TSL - handshake. - -Make sure to write the configuration, by issuing the following -command. - -.. code:: console - - # consul config write /etc/carbonio/gateway/service-discover/gateway-config.hcl - -At this point, we are almost done: configuration of |mesh| has now -been completed. Let's now go through the last few tasks. - -Systemd Service -~~~~~~~~~~~~~~~ - -Now, create a ``systemd`` unit to control whether the carbonio gateway -is enabled or not and therefore whether access to the external DB is -allowed. Create file -:file:`/lib/systemd/system/carbonio-gateway.service` and configure it -with these content. - -.. code:: Ini - - [Unit] - Description=Carbonio gateway for external services - Documentation=https://docs.zextras.com/ - Requires=network-online.target - After=network-online.target - - [Service] - Type=simple - ExecStart=/usr/bin/consul connect envoy \ - -token-file /etc/carbonio/gateway/service-discover/token \ - -admin-bind localhost:0 \ - -gateway=terminating \ - -register -service carbonio-gateway - Restart=on-failure - RestartSec=15 - User=carbonio-gateway - KillMode=process - KillSignal=SIGKILL - LimitNOFILE=65536 - TimeoutSec=120 - TimeoutStopSec=120 - - [Install] - WantedBy=multi-user.target - -.. hint:: You can modify the ``ExecStart`` option by adding ``-- -l - debug`` at the end to produce more verbose logs. 
The option should - then look like:: - - ExecStart=/usr/bin/consul connect envoy \ - -token-file /etc/carbonio/gateway/service-discover/token \ - -admin-bind localhost:0 \ - -gateway=terminating \ - -register -service carbonio-gateway -- -l debug - -Once saved the file, reload ``systemd`` to make it aware of the new unit file, then -enable the new ``carbonio-gateway`` service. - -.. code:: console - - # systemctl daemon-reload - # systemctl enable carbonio-gateway - -Configuration of ``carbonio-files-db`` -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. note:: This step only applies when the external resource is a - database, like in our scenario. - -The configuration of the database, which includes transferring the DB -credentials to |mesh| and create the DB's, is usually done by the -:command:`carbonio-files-db-bootstrap` script. However, since the -*carbonio-files-db* package is not installed, this task must be done -manually using these commands on the terminating gateway. - -* configure database name - - .. code:: console - - # consul kv put carbonio-files/db-name - -* configure username - - .. code:: console - - # consul kv put carbonio-files/db-username - -* configure password - - .. code:: console - - # consul kv put carbonio-files/db-password - -Now, let's log in to the *database node*, where it is necessary to -create a ``postgres`` superuser. In this example, we assign password -**ScrtPsw987^2** to the user. Make sure to use a strong password of -your choice. - -First, become the ``postgres`` user and start a direct access to the -database using the CLI client. - -.. code:: console - - # sudo -u postgres psql - -Then issue the following commands to create the user. - -.. code:: console - - # CREATE ROLE "carbonio-files-adm" WITH LOGIN SUPERUSER encrypted password 'ScrtPsw987^2'; CREATE DATABASE "carbonio-files-adm" owner "carbonio-files-adm"; - -Once done, exit the client. - -.. 
code:: console - - # \q - -|file| Nodes Installation -~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The installation of |File| is slightly different from the standard one -in a Multi-Server. In particular, make sure that after the -installation, the package :bdg:`carbonio-files-db` is **not** -installed on any node. In our scenario, indeed, the database -functionalities are not provided by that package, but by the external -service. Hence, to avoid conflicts, you need to uninstall it. - -* Install package ``carbonio-files-ui`` on each *Proxy Node*. - - .. tab-set:: - - .. tab-item:: Ubuntu - :sync: ubuntu - - .. code:: console - - # apt install carbonio-files-ui - - .. tab-item:: RHEL - :sync: rhel - - .. code:: console - - # dnf install carbonio-files-ui - -* Install these packages on both Nodes on which |file| should run. We - suggest to install them on the two *Stores Nodes*. - - .. tab-set:: - - .. tab-item:: Ubuntu - :sync: ubuntu - - .. code:: console - - # apt install carbonio-storages-ce carbonio-files-ce carbonio-user-management - - .. tab-item:: RHEL - :sync: rhel - - .. code:: console - - # dnf install carbonio-storages-ce carbonio-files-ce carbonio-user-management - - The installation will end with message:: - - ====================================================== - Carbonio Files installed successfully! - You must run pending-setups to configure it correctly. - ====================================================== - - Hence, execute :command:`pending-setups` - - .. code:: console - - # pending-setups -a - -Remove Services From Catalog -~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -When the external resource is not needed anymore, for example because -the database is brought in the company's data center, it is -straightforward to remove the configuration of the services. - -* Stop the systemd unit service and delete the configuration - file - - .. 
code:: console - - # systemd stop carbonio-gateway - # systemd disable carbonio-gateway - # rm /lib/systemd/system/carbonio-gateway.service - -* Remove the gateway configuration. - - .. code:: console - - # consul config delete -kind terminating-gateway -name carbonio-gateway - # curl --request PUT --header "X-Consul-Token: ${CONSUL_HTTP_TOKEN}" http://localhost:8500/v1/agent/service/deregister/carbonio-gateway - # curl --request PUT --header "X-Consul-Token: ${CONSUL_HTTP_TOKEN}" http://localhost:8500/v1/agent/service/deregister/carbonio-files-db - -Now you can install the *carbonio-files-db* package on any node and it -will be immediately available to the |file| nodes. diff --git a/source/carbonio/admincli/mesh/leadernode.rst b/source/carbonio/admincli/mesh/leadernode.rst new file mode 100644 index 000000000..f8111d499 --- /dev/null +++ b/source/carbonio/admincli/mesh/leadernode.rst @@ -0,0 +1,19 @@ +========================== + Managing the Leader Node +========================== + +This section collects some useful how-tos that help in the management +of the Leader Node. + +.. _mesh-find-leader: + +Find the Leader Node +==================== + +.. include:: /_includes/_admincli/mesh/findleader.rst + + +Missing Leader Node +=================== + +.. include:: /_includes/_admincli/mesh/missingleader.rst diff --git a/source/carbonio/admincli/mesh/mesh-gui.rst b/source/carbonio/admincli/mesh/mesh-gui.rst index 677add822..313e71724 100644 --- a/source/carbonio/admincli/mesh/mesh-gui.rst +++ b/source/carbonio/admincli/mesh/mesh-gui.rst @@ -1,7 +1,3 @@ -.. SPDX-FileCopyrightText: 2022 Zextras -.. -.. SPDX-License-Identifier: CC-BY-NC-SA-4.0 - .. _mesh-gui: |mesh| Administration Interface @@ -14,7 +10,7 @@ the configuration generated by |product|, you need first to create a new token, then to set up an SSH tunnel from the current workstation to the |product| server. 
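The ``curl`` calls in the removal steps above talk to the local Consul agent's HTTP API: a ``PUT`` to ``/v1/agent/service/deregister/<service-id>``, authenticated via the ``X-Consul-Token`` header. If you prefer to script such cleanups, the same request can be built with Python's standard library; this is an illustrative sketch (the helper name and the token value are ours, the endpoint is the one used by the ``curl`` examples):

```python
from urllib.request import Request

CONSUL_BASE = "http://localhost:8500"  # the local agent, as in the curl examples


def deregister_request(service_id: str, token: str) -> Request:
    """Build the PUT request that removes a service from the local agent.

    Equivalent to:
      curl --request PUT --header "X-Consul-Token: $TOKEN" \
           http://localhost:8500/v1/agent/service/deregister/<service_id>
    """
    return Request(
        f"{CONSUL_BASE}/v1/agent/service/deregister/{service_id}",
        method="PUT",
        headers={"X-Consul-Token": token},
    )


for svc in ("carbonio-gateway", "carbonio-files-db"):
    req = deregister_request(svc, "example-token")
    # e.g. PUT http://localhost:8500/v1/agent/service/deregister/carbonio-gateway
    print(req.get_method(), req.full_url)
```

Actually sending the request (for example with ``urllib.request.urlopen(req)``) requires a token with write permissions on the services, exactly as the CLI examples do.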
-The latter step is mandatory because, For security reasons, |mesh| +The latter step is mandatory because, for security reasons, |mesh| only listens on ``localhost``. Requirements diff --git a/source/carbonio/admincli/mesh/rejoin-cluster.rst b/source/carbonio/admincli/mesh/rejoin-cluster.rst new file mode 100644 index 000000000..95775e781 --- /dev/null +++ b/source/carbonio/admincli/mesh/rejoin-cluster.rst @@ -0,0 +1,46 @@ +======================= + Rejoin |mesh| Cluster +======================= + +When a member of a |mesh| cluster stays offline for a longer time than +the value of ``server_rejoin_age_max``, it will be unable to rejoin the cluster and the following error appears in syslog:: + + refusing to rejoin cluster because server has been offline for more than the configured server_rejoin_age_max + +.. hint:: You can use the command :command:`journalctl -u + service-discover.service -f` to check the log and see new log + messages as they are produced, in real time. + +A typical scenario for this error is when the Node on which the +member is installed is restored from an old snapshot. + +The solution is simple and requires only a few commands from +the CLI. + +First, delete the following file. + +.. code:: console + + # rm /var/lib/service-discover/data/server_metadata.json + +Then restart the ``service-discover`` daemon. + +.. code:: console + + # systemctl restart service-discover + +If the command is successful, this message appears in syslog:: + + Join cluster completed. + +To make sure that the |mesh| agent is synchronised with the other +members, issue the following commands. + +.. code:: console + + # consul members + +..
code:: console + + # consul catalog services diff --git a/source/carbonio/adminpanel/adminroles.rst b/source/carbonio/adminpanel/adminroles.rst index b6f1780ba..563c0a9fd 100644 --- a/source/carbonio/adminpanel/adminroles.rst +++ b/source/carbonio/adminpanel/adminroles.rst @@ -580,14 +580,11 @@ Administrator is suited for **user-level** support. * View domain attributes - * Modify user information (personal data, preferences, - ActiveSync access) + * Modify user information (personal data, preferences) * Reset and assign user passwords, application credentials, and OTP codes - * Suspend and reset ActiveSync, HTTP, and IMAP sessions - * Undelete emails, calendars, and contacts .. tab-item:: Limitations @@ -607,11 +604,10 @@ Administrator is suited for **user-level** support. * personal data * user preferences - * enable or disable activesync access + * Reset and Assign User Passwords, application credentials, and OTP codes - * Suspend and Reset ActiveSync sessions * Suspend and Reset HTTP/IMAP sessions * Undelete emails, calendars, and contacts @@ -624,7 +620,7 @@ Administrator is suited for **user-level** support. "View domain attributes", "|y|", "|y|" "Modify user personal info and preferences", "|y|", "|n|" "Reset passwords, OTPs, and Auth (mobile/apps) credentials", "|y|", "|n|" - "Suspend/reset ActiveSync, HTTP, IMAP sessions", "|y|", "|n|" + "Suspend/reset HTTP, IMAP sessions", "|y|", "|n|" "Restore deleted emails, calendars, contacts", "|y|", "|n|" "Create/edit/delete user accounts", "|n|", "|n|" "Create/edit/delete distribution lists", "|n|", "|y|" diff --git a/source/carbonio/adminpanel/domains/manage.rst b/source/carbonio/adminpanel/domains/manage.rst index c54a2bfaa..8faaf64de 100644 --- a/source/carbonio/adminpanel/domains/manage.rst +++ b/source/carbonio/adminpanel/domains/manage.rst @@ -277,9 +277,8 @@ restore the COS value. It is possible to prevent the user to access some of the |product| features by using the switches for the various Components. 
For example, - `Web feature` means access to the web interface, `Mobile App` - allows access via cell phone or tablet, `ActiveSync remote access` - enables the ActiveSync access. + `Web feature` means access to the web interface and `Mobile App` + allows access via cell phone or tablet. .. _act-prefs: @@ -513,22 +512,23 @@ Resources .. include:: /_includes/_adminpanel/_domains/resources.rst -.. _ap-sync: +.. Commented according to CO-2145 + .. _ap-sync: -ActiveSync -========== + ActiveSync + ========== -This page gives information about all accounts connected using the -ActiveSync protocol. For each connected device, some information is -shown, including its unique Device ID and the time when it last -connected. Clicking any of the connections will show additional -information, including client data and the device's ABQ status (see -:ref:`mobile_abq_allowblockquarantine_device_control`) + This page gives information about all accounts connected using the + ActiveSync protocol. For each connected device, some information is + shown, including its unique Device ID and the time when it last + connected. Clicking any of the connections will show additional + information, including client data and the device's ABQ status (see + :ref:`mobile_abq_allowblockquarantine_device_control`) -The following actions can be carried out: :bdg-primary-line:`WIPE -DEVICE` (bring the connected device back to factory settings), to -:bdg-primary-line:`RESET DEVICE` (log out the device from the -account), and :bdg-primary-line:`SUSPEND` the connection. + The following actions can be carried out: :bdg-primary-line:`WIPE + DEVICE` (bring the connected device back to factory settings), to + :bdg-primary-line:`RESET DEVICE` (log out the device from the + account), and :bdg-primary-line:`SUSPEND` the connection. .. 
_restore-account: diff --git a/source/carbonio/architecture/components.rst b/source/carbonio/architecture/components.rst index b695eac7e..a9cd9bab6 100644 --- a/source/carbonio/architecture/components.rst +++ b/source/carbonio/architecture/components.rst @@ -36,9 +36,14 @@ The Components of |product| This is the list of Components that make up a |product| installation. -.. note:: Postgres, Grafana, Zookeper, and Kafka are third-party - software that are installed from their respective official - repositories. +When installing the Components, keep in mind that: + +* Postgres, Grafana, ZooKeeper, and Kafka are third-party software that + are installed from their respective official repositories + +* On each Node, you need to install the package + ``service-discover-agent``, except on the Node on which + ``service-discover-server`` is installed .. grid:: 1 1 2 2 :gutter: 3 @@ -72,6 +77,7 @@ This is the list of Components that make up a |product| installation. * carbonio-mailbox-db * carbonio-docs-connector-db * carbonio-notification-push-db + * carbonio-tasks-db .. grid-item-card:: Mesh :columns: 6 @@ -97,9 +103,7 @@ This is the list of Components that make up a |product| installation. * carbonio-files-ui * carbonio-tasks-ui * carbonio-ws-collaboration-ui - * carbonio-search-ui * carbonio-avdb-updater - * service-discover-agent * carbonio-catalog * carbonio-chats-ui |dprc| @@ -112,7 +116,6 @@ This is the list of Components that make up a |product| installation. Packages: * carbonio-mta - service-discover-agent .. grid-item-card:: Mailstore & Provisioning :columns: 6 @@ -123,7 +126,6 @@ This is the list of Components that make up a |product| installation. Packages: * carbonio-advanced - service-discover-agent .. grid-item-card:: Files :columns: 6 @@ -134,7 +136,6 @@ This is the list of Components that make up a |product| installation. Packages: * carbonio-files - service-discover-agent ..
grid-item-card:: Docs & Editor :columns: 6 @@ -146,7 +147,6 @@ This is the list of Components that make up a |product| installation. * carbonio-docs-connector * carbonio-docs-editor - * service-discover-agent .. grid-item-card:: Preview :columns: 6 @@ -157,7 +157,6 @@ This is the list of Components that make up a |product| installation. Packages: * carbonio-preview - * service-discover-agent .. grid-item-card:: Tasks :columns: 6 @@ -168,7 +167,6 @@ This is the list of Components that make up a |product| installation. Packages: * carbonio-tasks - * service-discover-agent .. grid-item-card:: |wsc| :columns: 6 @@ -180,7 +178,6 @@ This is the list of Components that make up a |product| installation. * carbonio-message-dispatcher * carbonio-ws-collaboration - * service-discover-agent * carbonio-push-connector * carbonio-notification-push @@ -194,7 +191,6 @@ This is the list of Components that make up a |product| installation. * carbonio-videoserver-advanced * carbonio-videorecorder - * service-discover-agent .. grid-item-card:: Monitoring :columns: 6 @@ -206,7 +202,6 @@ This is the list of Components that make up a |product| installation. * carbonio-prometheus * grafana - * service-discover-agent .. grid-item-card:: Event Streaming :columns: 6 @@ -228,7 +223,6 @@ This is the list of Components that make up a |product| installation. Packages: * carbonio-directory-server - * service-discover-agent .. grid-item:: :columns: 1 @@ -241,7 +235,6 @@ This is the list of Components that make up a |product| installation. * carbonio-videoserver * carbonio-videoserver-recorder - * service-discover-agent .. 
_multiserver-installation: diff --git a/source/carbonio/architecture/components/component-ds-replica.rst b/source/carbonio/architecture/components/component-ds-replica.rst index ab39687ae..06973666f 100644 --- a/source/carbonio/architecture/components/component-ds-replica.rst +++ b/source/carbonio/architecture/components/component-ds-replica.rst @@ -12,12 +12,13 @@ following requirements are satisfied. * A |product| infrastructure is already operating correctly -* A new Node is available, on which to install the Replica, which +* A new Node is available, on which to install the Directory Replica, which satisfies the :ref:`carbonio-requirements` and on which the :ref:`preliminary` have already been executed - .. note:: You can also install the Replica on an existent Node, but - not on the same Node which features the Primary Directory Component. + .. note:: You can also install the Directory Replica on an existent + Node, but not on the same Node which features the Master |ds| + Component. * Pay attention that some commands **must be executed** as the ``zextras`` user, while other as the ``root`` user @@ -26,8 +27,8 @@ following requirements are satisfied. `ds-replica.example.com` whenever necessary. Remember to replace it with the name you give. -* Have CLI access to the Main and Replica, as you need to execute - commands on both servers +* Have CLI access to the Master |ds| and the Directory Replica, as you + need to execute commands on both servers .. _replica-installation: @@ -43,10 +44,10 @@ Configuration ~~~~~~~~~~~~~ Configuring the Directory Replica server requires a few steps. Make -sure to execute the commands on the Master or Replica, as shown in each -step description. +sure to execute the commands on the Master or Directory Replica, as +shown in each step description. -.. card:: Step 1: Activate replica on Master +.. 
card:: Step 1: Activate Directory Replica on Master Activate the replica by executing as the ``zextras`` user @@ -67,7 +68,7 @@ step description. services (replication, postfix, amavis, and nginx), retrieve also the following passwords, using :command:`zmlocalconfig`: ``ldap_replication_password``, ``ldap_postfix_password`` - `ldap_amavis_password``, ``ldap_nginx_password``. + ``ldap_amavis_password``, ``ldap_nginx_password``. .. card:: Step 3: Bootstrap |product| on Directory Replica @@ -186,9 +187,9 @@ with a few commands on the **Mesh & Directory**. If you plan to install multiple Directory Replicas, you can install all of them and then execute the above-mentioned command once for - all Replicas, making sure that their hostnames precede the **Mesh - and Directory hostname**. For example, provided you installed two - Replica Directory Servers on ``ds1-replica.example.com`` and + all Directory Replicas, making sure that their hostnames precede + the **Mesh and Directory hostname**. For example, provided you + installed two Directory Replicas on ``ds1-replica.example.com`` and ``ldap://ds2-replica.example.com``, execute: .. code:: console diff --git a/source/carbonio/architecture/components/component-es.rst b/source/carbonio/architecture/components/component-es.rst index 3109d5573..ed32f3a06 100644 --- a/source/carbonio/architecture/components/component-es.rst +++ b/source/carbonio/architecture/components/component-es.rst @@ -3,8 +3,8 @@ Event Streaming =============== -This Component is required to enable the |carbonio| :ref:`Active Replica -` feature, the foundation of High Availability on +This Component is required to enable the |carbonio| +:ref:`activereplica` feature, the foundation of High Availability on |product|, and is based on Apache's *Kafka* and *ZooKeeper*, which must be installed together on the same Node.
For better performances, it is strongly suggested to install both the services on a dedicated diff --git a/source/carbonio/architecture/components/component-preview.rst b/source/carbonio/architecture/components/component-preview.rst index 0b0c70bcf..b1e15edf4 100644 --- a/source/carbonio/architecture/components/component-preview.rst +++ b/source/carbonio/architecture/components/component-preview.rst @@ -35,16 +35,3 @@ Pending setups -------------- .. include:: /_includes/_installation/pset.rst - -.. _conf-memcached: - -Configure Memcached -------------------- - -.. include:: /_includes/_installation/_components/memcached.rst - -.. note:: In case you have multiple Mailstore & Provisioning or Proxy Nodes, - add all of them as a comma-separated list, for example:: - - nginx_lookup_server_full_path_urls = https://172.16.0.13:7072,https://172.16.0.23:7072 - memcached_server_full_path_urls = 172.16.0.12:11211,172.16.0.22:11211 diff --git a/source/carbonio/architecture/systemd/adminguide.rst b/source/carbonio/architecture/systemd/adminguide.rst index de75dca21..765001f06 100644 --- a/source/carbonio/architecture/systemd/adminguide.rst +++ b/source/carbonio/architecture/systemd/adminguide.rst @@ -1,3 +1,5 @@ +.. _systemd-guide: + ``Systemd`` Usage Guide For Administrators ========================================== diff --git a/source/carbonio/architecture/systemd/targets.rst b/source/carbonio/architecture/systemd/targets.rst index 159381dfa..cdba85e48 100644 --- a/source/carbonio/architecture/systemd/targets.rst +++ b/source/carbonio/architecture/systemd/targets.rst @@ -1,3 +1,5 @@ +.. 
_systemd-targets: + Carbonio ``Systemd`` Targets ============================ diff --git a/source/carbonio/changelog/changelogs/202507.rst b/source/carbonio/changelog/changelogs/202507.rst new file mode 100644 index 000000000..974264efa --- /dev/null +++ b/source/carbonio/changelog/changelogs/202507.rst @@ -0,0 +1,75 @@ +Changelog 2025-07 +================= + + +New and Updated Content +----------------------- + +.. rubric:: 202507-1247 Rephrase Legal Hold + +The description of the Legal Hold feature has been modified for clarity. + +Changes in the source code can be found in :pr:`1194`. + + +.. rubric:: 202507-1236 Add a sitemap.xml to docs.zextras.com + +We added the file ``sitemap.xml`` to the |product| web site. + +Changes in the source code can be found in :pr:`1179`, :pr:`1180`, and :pr:`1181`. + + +.. rubric:: 202507-1232 Documentation Changelog June 2025 + +The technical documentation's changelog for June 2025 has been published. + +Changes in the source code can be found in :pr:`1184` and :pr:`1185`. + + +.. rubric:: 202507-1230 Remove Old Architecture Diagrams + +Old architecture images have been removed. + +Changes in the source code can be found in :pr:`1187`. + + +.. rubric:: 202507-1205 New Attribute for Carbonio Tasks + +A new CLI attribute, ``carbonioFeatureTasksEnabled``, allows administrators to show or hide |task| at COS or account level from the CLI. Corresponding options in the |adminui| allow the same operation. + +Changes in the source code can be found in :pr:`1177`. + +***** + + +Bugfix List +----------- + +.. rubric:: 202507-1252 Missing step in carbonio upgrade procedure + +We added the step to execute :command:`carbonio-mailbox-db-bootstrap`, which was missing from the upgrade procedure. + +Changes in the source code can be found in :pr:`1202`. ..
rubric:: 202507-1242 Rename Components in Architecture Section + +A few instances of Components' names have been corrected and unified to their official names. + +Changes in the source code can be found in :pr:`1188`. + +.. rubric:: 202507-1231 Change wrong CLI command + +A CLI command in the HA Scenario has been corrected: it must run on the Node hosting the Mailstore & Provisioning Component instead of the MTA. + +Changes in the source code can be found in :pr:`1167`. + +.. rubric:: 202507-1228 Add Firewall Requirement for WebSocket Connections + +We added the requirement that WebSocket connections (WSS) be allowed for |product|. They use port 443 and may be blocked by firewalls that carry out DPI on the packets (e.g., application firewalls). + +Changes in the source code can be found in :pr:`1178`. + +***** + +End of changelog + diff --git a/source/carbonio/conf.py b/source/carbonio/conf.py index 97c7d0d38..c409cfe4f 100644 --- a/source/carbonio/conf.py +++ b/source/carbonio/conf.py @@ -30,7 +30,7 @@ author = 'The Zextras Team' # The full version, including alpha/beta/rc tags -release = '25.6.0' +release = '25.9.0' version = release # -- General configuration --------------------------------------------------- @@ -80,6 +80,7 @@ # this is the default name anyway, adding for reference sitemap_filename = 'sitemap.xml' +todo_include_todos = False # -- Options for HTML output ------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for @@ -129,9 +130,6 @@ 'hubhome' : '%s' %hubhome, } -# workaround for ZTD-581 -#html_extra_path = ['changelog.html', 'upgrade.html'] - # -- Options for linkcheck output -------------------------------------------- # list of URLs to ignore diff --git a/source/carbonio/index.rst b/source/carbonio/index.rst index 1026edb67..620f23e8b 100644 --- a/source/carbonio/index.rst +++ b/source/carbonio/index.rst @@ -227,12 +227,6 @@ The content is organised in multiple parts: ..
grid-item-card:: Command Line Reference :columns: 12 12 6 6 :class-title: sd-font-weight-bold sd-fs-4 - :link-type: doc - :link: cli_commands/toc List of all |product| CLI Commands - .. toctree:: - :hidden: - - cli_commands/toc diff --git a/source/carbonio/install/requirements.rst b/source/carbonio/install/requirements.rst index 548a85957..95815884f 100644 --- a/source/carbonio/install/requirements.rst +++ b/source/carbonio/install/requirements.rst @@ -164,7 +164,7 @@ Additional Requirements .. _inst-websocket: -Websocket Protocol +WebSocket Protocol ------------------ .. include:: /_includes/_installation/ws-note.rst @@ -195,7 +195,7 @@ table must be forwarded only on the Node on which the Component is installed. Carbonio requires no specific ports to communicate with the Internet (outgoing traffic), unless you want push notifications to be sent to mobile devices. In this case, the Node installing the - Mailstore & Provisioning Component must be able to communicate with the + **Chats Component** must be able to communicate with the URL **https://notifications.zextras.com/firebase/** on port **443**. .. _fw-external: @@ -284,7 +284,7 @@ corresponding Component is installed, for a proper communication among used by |mesh| for message broadcasting and membership management. -.. card:: Postgres Component +.. card:: Database Component .. 
csv-table:: :header: "Port", "Protocol", "Service" @@ -381,5 +381,5 @@ corresponding Component is installed, for a proper communication among :widths: 10 10 80 "prometheus", "TCP", "9090" - "prometheus SSH", "TCP", "9090" + "prometheus SSH", "TCP", "9999" diff --git a/source/carbonio/install/scenarios.rst b/source/carbonio/install/scenarios.rst index d6f1b9d2a..9c6751b11 100644 --- a/source/carbonio/install/scenarios.rst +++ b/source/carbonio/install/scenarios.rst @@ -40,8 +40,8 @@ following: scenarios/scenario-essential scenarios/scenario-fullsmall scenarios/scenario-fullstandard - scenarios/scenario-fullredundant - scenarios/scenario-ha + scenarios/scenario-redundant + scenarios/scenario-redundantwithusermailreplica Scenarios ========= @@ -77,7 +77,11 @@ Scenarios |unsup| Cluster, Files and Docs service redundancy - |unsup| LDAP master-slave replica + |unsup| LDAP (|ds|) master-slave replica + + |unsup| LDAP (|ds|) master-master replica + + |unsup| User Mail Replica .. note:: Availability of some features might require additional Nodes setup. @@ -107,7 +111,11 @@ Scenarios |unsup| Cluster, Files and Docs service redundancy - |unsup| LDAP master-slave replica + |unsup| LDAP (|ds|) master-slave replica + + |unsup| LDAP (|ds|) master-master replica + + |unsup| User Mail Replica .. grid-item-card:: Scenario *Full Small* :columns: 12 12 6 6 @@ -134,7 +142,11 @@ Scenarios |unsup| Cluster, Files and Docs service redundancy - |unsup| LDAP master-slave replica + |unsup| LDAP (|ds|) master-slave replica + + |unsup| LDAP (|ds|) master-master replica + + |unsup| User Mail Replica .. grid-item-card:: Scenario *Full Standard* :columns: 12 12 6 6 @@ -161,16 +173,20 @@ Scenarios |supp| Cluster, Files and Docs service redundancy - |supp| LDAP master-slave replica + |supp| LDAP (|ds|) master-slave replica + + |unsup| LDAP (|ds|) master-master replica + + |unsup| User Mail Replica - .. grid-item-card:: Scenario *Full Redundant* + .. 
grid-item-card:: Scenario *Redundant* :columns: 12 12 6 6 :class-header: sd-font-weight-bold sd-fs-5 - :link: scenario-fullredundant + :link: scenario-redundant :link-type: ref Suitable for any large infrastructure that requires scalability - and redundancy and is ready for High Availability. + and redundancy. +++++ @@ -188,4 +204,24 @@ Scenarios |supp| Cluster, Files and Docs service redundancy - |supp| LDAP master-slave replica + |supp| LDAP (|ds|) master-slave replica + + |unsup| LDAP (|ds|) master-master replica + + |unsup| User Mail Replica + +***** + +.. card:: Scenario *Redundant with User Mail Replica* + :class-header: sd-font-weight-bold sd-fs-5 + :link: scenario-rur + :link-type: ref + + This Scenario is an extension of the Redundant Scenario, therefore + it encompasses all its functionalities, adding: + + |supp| LDAP (|ds|) master-master replica + + |supp| User Mail Replica + + |supp| Centralised Storage diff --git a/source/carbonio/install/scenarios/fullsmall/manual.rst b/source/carbonio/install/scenarios/fullsmall/manual.rst index e6559aee7..9ce6642c4 100644 --- a/source/carbonio/install/scenarios/fullsmall/manual.rst +++ b/source/carbonio/install/scenarios/fullsmall/manual.rst @@ -253,13 +253,7 @@ MTA/Proxy Node .. _fms2-step6: -.. dropdown:: Step 6: Enable ``memcached`` - - .. include:: /_includes/_installation/_components/memcached-enable.rst - -.. _fsm2-step7: - -.. dropdown:: Step 7: Complete Installation +.. dropdown:: Step 6: Complete Installation After the successful package installation, start all |product| services by executing @@ -503,13 +497,7 @@ Collaboration Node .. _fsm4-step7: -.. dropdown:: Step 7: Configure ``memcached`` - - .. include:: /_includes/_installation/_components/memcached.rst - -.. _fsm4-step8: - -.. dropdown:: Step 8: Complete Installation +.. 
dropdown:: Step 7: Complete Installation After the successful package installation, start all |product| services by executing diff --git a/source/carbonio/install/scenarios/fullstandard/manual.rst b/source/carbonio/install/scenarios/fullstandard/manual.rst index 60e25ba82..40cbad288 100644 --- a/source/carbonio/install/scenarios/fullstandard/manual.rst +++ b/source/carbonio/install/scenarios/fullstandard/manual.rst @@ -718,13 +718,7 @@ Docs and Preview Node .. _st6-step6: -.. dropdown:: Step 6: Configure ``memcached`` - - .. include:: /_includes/_installation/_components/memcached.rst - -.. _st7-step7: - -.. dropdown:: Step 7: Complete Installation +.. dropdown:: Step 6: Complete Installation After the successful package installation, start all |product| services by executing diff --git a/source/carbonio/install/scenarios/ha/account-promotion.rst b/source/carbonio/install/scenarios/ha/account-promotion.rst deleted file mode 100644 index 8241005c2..000000000 --- a/source/carbonio/install/scenarios/ha/account-promotion.rst +++ /dev/null @@ -1,142 +0,0 @@ -.. _ha_promotion: - -HA Account Promotion -==================== - -The Active Replica mechanism underlying |product| HA is described in -Section :ref:`activereplica`. In particular, :ref:`ar-promo` shows how -to manually activate a Replica. To automatise this -process, the **habeat** Python tool has been developed to ensure -seamless account promotion with high availability. - -The :command:`habeat` tool can be downloaded from the -https://github.com/zextras/sps-habeat repository. You will need also a -number of other files from that repository, so you might want to clone -it. - -All the commands in this section must be executed as the |ru| **for -every HA Node**, i.e. for every Node listed in the column **HA Nodes** -in :numref:`tab-ha-nodes`. Taking into account our inventory file, -this means you must install and configure the utility on these Nodes: - -.. _tab-ha-fqdn: - -.. 
csv-table:: Nodes and FQDN - :header: "HA Node", "FQDN" - - "MTA", "mta2.example.com" - "Proxy", "proxy2.example.com" - "Mailstore & Provisioning", "mbox2.example.com" - "Collaboration", "filesdocs2.example.com" - "Video Server", "video2.example.com" - -Deploy ------- - -To copy :command:`habeat` you can use preferable utility, for example -:command:`scp`. Remember to replace ``node`` with the actual Node FQDN -as shown in :numref:`tab-ha-fqdn` or with equivalent FQDN according to -your infrastructure. - -First, copy the script and its configuration file. - -.. hint:: Before copying the configuration file, you might want to - edit it to adapt it to your infrastructure. Please refer to Section - :ref:`habeat-conf-file` below. - -.. code:: console - - # scp habeat.py root@node:/usr/local/sbin/habeat.py - # ssh root@node "mkdir -p /etc/hamon" - # scp config/habeat.yml root@node:/etc/hamon - -To configure habeat you need to add 2 units to systemd service: - -- ``habeat.service`` -- the service that should perform call of script -- ``habeat.timer`` -- the timer for define how often to run the script - -Copy them to each Node. - -.. code:: console - - # scp config/habeat.service root@node:/etc/systemd/system/ - # scp config/habeat.timer root@node:/etc/systemd/system/ - # ssh root@node "systemctl daemon-reload" - -In the ``habeat.timer`` unit we can define the condition of execution, -i.e., the interval between each script execution. - -.. code:: text - - OnCalendar=\*:0/5 # Run script every 5 minutes - -In the ``habeat.service`` unit we define the location of the script, -log file, and configuration file. - -.. code:: text - - ExecStart=/usr/local/sbin/habeat --config /etc/hamon/habeat.yml --log /var/log/habeat.log - -.. _habeat-conf-file: - -Configure ---------- - -The configuration file, which you can find also in the repository, is -similar to the following: as usual, remember to fill the options with -values suitable to your infrastructure. - -.. 
dropdown:: Habeat configuration file - :open: - - :: - - local: - whoami: "secondary" # marker on which dc script is running - dc_check: "primary" # marker which dc script should check - role: "appserver" # supported roles: appserver or consulserver or proxyserver - checkDownFile: "/var/tmp/appserverdown-habeat" # template for lock file related with down state - checkPromotionFile: "/var/tmp/appserverprom-habeat" # template for lock file related with promotion state - checkRestartReplicaFile: "/var/tmp/appserverrstopha" #only app server lock file for restart replica lock file - provider: consul # provider for external witness: hetrix or vcenter - threads: 5 # carbonio ha promotion number of threads - proxy_switch: "bgp" # proxy switch tool: bgp - proxy_enable_activate: false # enable run proxy switch - proxy_enable_deactivate: false # enable run proxy reverse switch - disable_ha_module: false # define if another app server down stop ha module of Carbonio - flush_cache: false # define if we need flush cache for accounts after promotion - flush_arguments_a: false # define if we need run flush cache for all application services. 
Used with flush_cache: true - restart_replica: false # run ha restartReplicas accounts - - primary: - proxy_ip: # this value used for check availability of proxy in primary dc - appserver_ip: delete # this value used for check availability of application in primary dc - directorysrv_ip: - consul_ips: #this values used for check availability of consul servers in primary dc - - - - - consul_vmnames: # this values used for check consul servers in monitoring server in primary dc - - svc1.example.com - - svc3.example.com - appserver_vmname: mbox1.example.com # this value used for check application server in monitoring server in primary dc - proxyserver_vmname: proxy1.example.com # this value used for check proxy server in monitoring server in primary dc - - secondary: - proxy_ip: # this value used for check availability of proxy in secondary dc - appserver_ip: # this value used for check availability of application in secondary dc - directorysrv_ip: - consul_ips: # this values used for check availability of consul servers in secondary dc - - - consul_vmnames: # this values used for check consul servers in monitoring server in secondary dc - - svc2.example.com - appserver_vmname: mbox2.example.com # this value used for check application server in monitoring server in secondary dc - proxyserver_vmname: proxy2.example.com # this value used for check proxy server in monitoring server in secondary dc - proxy_switch: - bgp: # this value used for choose proxy switch - activate: # list of command to activate proxy switch - deativate: # list of command to deactivate proxy switch - - consul: # consul provider config - hostname: 127.0.0.1 - port: 8500 - token: diff --git a/source/carbonio/install/scenarios/ha/activate-replica.rst b/source/carbonio/install/scenarios/ha/activate-replica.rst deleted file mode 100644 index 3bf3281f2..000000000 --- a/source/carbonio/install/scenarios/ha/activate-replica.rst +++ /dev/null @@ -1,79 +0,0 @@ -.. 
_ha-replica: - -Automatic Replica Activation -============================ - -In order to automatically promote a Replica in case of the master -becomes unavailable, you might want to download the -:command:`activateReplica.pl` script and its configuration file -:command:`activateReplica.yml` from the github repository -https://github.com/zextras/sps-ha-utils or even clone it locally. - -You need then to copy as the |ru| the :command:`activateReplica.pl` to -the :file:`/usr/local/sbin/` directory and assign it executable -permissions - -.. code:: console - - # chmod 700 /usr/local/sbin/activateReplica.pl - -The configuration file needs to be edited by adding or replacing -existing value with values that match your infrastructure. - -.. note:: Make sure you fill the correct section of the configuration - file depending if you use local or external LDAP authentication. - - -In the configuration file below, you need to provide the following -data: - -* LDAP Server hostname -* LDAP username and password -* Postgres Server hostname -* Postgres HA user and password -* Destination Appserver (Mailbox) Node - -.. 
dropdown:: Example configuration file - :open: - - :: - - #General - create_log: 0 - - #Local LDAP - local_ldap_server: "" - local_ldap_port: "389" - local_ldap_proto: "ldap" - local_ldap_user_dn: "uid=zimbra,cn=admins,cn=zimbra" - local_ldap_password: "" - local_ldap_searchbase: "ou=people,dc=mail,dc=example,dc=com" - local_ldap_filter: "&(!(zimbraIsSystemAccount=TRUE))(zimbraAccountStatus=active)(zimbraMailDeliveryAddress=*@demo.zextras.io)(zimbraMailHost=mail.example.com)" - local_ldap_attr: "zimbraId" local_ldap_attrs: "sn givenName mail displayName description title l st co company" - - #External LDAP - ldap_server: "" - ldap_port: 389 - ldap_proto: "ldap" - ldap_searchbase: "" - ldap_user: "" - ldap_password: "" - ldap_attr: "mail" - ldap_filter: "" - exchange_contacts: 0 - - #HA Params - pg_server: "db.example.com" - pg_port: 5432 - pg_user: "ha" - pg_password: "secure!password" - pg_db: "ha" - dst_hostname: "" - -The configuration file can be saved on the location and the name that you -prefer. We use :file:`/opt/zextras/activateReplica.yml`. You can -launch the automatic replica activation using command - -.. code:: console - - # activateReplica.pl /usr/local/sbin/activateReplica.yml diff --git a/source/carbonio/install/scenarios/ha/standard-installation.rst b/source/carbonio/install/scenarios/ha/standard-installation.rst deleted file mode 100644 index 8598d5980..000000000 --- a/source/carbonio/install/scenarios/ha/standard-installation.rst +++ /dev/null @@ -1,39 +0,0 @@ -.. _ha-install: - -Standard Carbonio Installation -============================== - -Before proceeding with the High Availability (HA) setup for Carbonio, -it is essential to complete a standard installation of all core services. -This initial setup provides the stable foundation required to create HA -infrastructure that will be built upon it. 
- -This scenario can be installed **only** using Ansible: you need to -setup a control node to run Ansible playbooks (please refer to section -:ref:`install-with-ansible`, then follow the directions on setting up -the control node). To access the control node, execute the following -command to log in (replace ``mail.example.com`` with the name or IP of -the control node) - - -After you have logged in to the control node, download the Ansible -inventory (see below this paragraph), replace the FQDN and values -present in the file according to your planned |product| -infrastructure. - -.. include:: /_includes/_installation/read-req-pre.rst - -.. dropdown:: Inventory - "HA" Scenario - :open: - - :download:`Download_inventory ` - - .. literalinclude:: /playbook/carbonio-inventory-ha - -Once you edited the inventory, save it in a directory of your choice -as :file:`carbonio-inventory`. Now, you can run the script: -from that directory execute the command - -.. code:: console - - ansible-playbook -i inventory zxbot.carbonio_install.carbonio_install diff --git a/source/carbonio/install/scenarios/fullredundant/ansible.rst b/source/carbonio/install/scenarios/redundant/ansible.rst similarity index 87% rename from source/carbonio/install/scenarios/fullredundant/ansible.rst rename to source/carbonio/install/scenarios/redundant/ansible.rst index f6b5cf8dd..ebda3a6a6 100644 --- a/source/carbonio/install/scenarios/fullredundant/ansible.rst +++ b/source/carbonio/install/scenarios/redundant/ansible.rst @@ -31,13 +31,13 @@ infrastructure. .. include:: /_includes/_installation/read-req-pre.rst -.. dropdown:: Inventory - "Full Redundant" Scenario +.. dropdown:: Inventory - "Redundant" Scenario :open: :download:`Download_inventory - ` + ` - .. literalinclude:: /playbook/carbonio-inventory-fullredundant + .. literalinclude:: /playbook/carbonio-inventory-redundant Once you edited the inventory, save it in a directory of your choice as :file:`carbonio-inventory`. 
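Before launching the playbook, a quick sanity check of the edited inventory can catch a missing group early. The snippet below is only an illustrative sketch: the sample inventory it builds inline uses placeholder hosts, and you would point ``check_group`` at your real :file:`carbonio-inventory` file instead.

```shell
# A sketch: verify that an Ansible inventory defines the group headers
# the playbook expects. Group names follow the examples in this guide.
check_group() {
    # usage: check_group <inventory-file> <group-name>
    grep -q "^\[$2\]" "$1" && echo "ok: [$2] defined" || echo "missing: [$2]"
}

# Illustrative sample inventory (placeholder hosts) -- replace with
# the path to your real carbonio-inventory file.
sample=$(mktemp)
cat > "$sample" <<'EOF'
[proxyServers]
proxy1.example.com

[applicationServers]
mbox1.example.com
EOF

check_group "$sample" proxyServers   # -> ok: [proxyServers] defined
check_group "$sample" mtaServers     # -> missing: [mtaServers]
```

A failed check before running :command:`ansible-playbook` is much cheaper to fix than a playbook run that stops halfway through.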
Now, you can run the script: diff --git a/source/carbonio/install/scenarios/fullredundant/manual.rst b/source/carbonio/install/scenarios/redundant/manual.rst similarity index 100% rename from source/carbonio/install/scenarios/fullredundant/manual.rst rename to source/carbonio/install/scenarios/redundant/manual.rst diff --git a/source/carbonio/install/scenarios/ha/checks-status.rst b/source/carbonio/install/scenarios/redundantwithusermailreplica/checks-status.rst similarity index 89% rename from source/carbonio/install/scenarios/ha/checks-status.rst rename to source/carbonio/install/scenarios/redundantwithusermailreplica/checks-status.rst index 69a72eeab..2d3339aad 100644 --- a/source/carbonio/install/scenarios/ha/checks-status.rst +++ b/source/carbonio/install/scenarios/redundantwithusermailreplica/checks-status.rst @@ -1,10 +1,10 @@ -.. _ha-checks: +.. _rur-checks: -Check HA Services Status -======================== +Check |ur| Services Status +========================== This section is a collection of commands that can be used to verify -the status of |product|\'s HA and related services. +the status of |product|\'s |ur| and related services. Depending on the type of check, commands should be executed as either the |ru| or |zu|. All commands should be issued on the Node where the @@ -44,13 +44,13 @@ These are the commands to be issued as the |zu|. zextras$ carbonio config get global brokers -#. Check Carbonio HA services status +#. Check the status of services .. code:: console zextras$ carbonio ha getServices -#. Check LDAP Multi Master status check +#. Check the LDAP (|ds|) Multi Master status ..
code:: console diff --git a/source/carbonio/install/scenarios/ha/object-storage.rst b/source/carbonio/install/scenarios/redundantwithusermailreplica/object-storage.rst similarity index 81% rename from source/carbonio/install/scenarios/ha/object-storage.rst rename to source/carbonio/install/scenarios/redundantwithusermailreplica/object-storage.rst index 1ac2ee307..80bdee337 100644 --- a/source/carbonio/install/scenarios/ha/object-storage.rst +++ b/source/carbonio/install/scenarios/redundantwithusermailreplica/object-storage.rst @@ -1,16 +1,17 @@ -.. _ha-storage: +.. _rur-storage: Object Storage Configuration ============================ -A centralised volume is a mandatory requirement to configure an HA scenario. -This section explains the commands required to configure a MinIO or S3 bucket -in Carbonio and set it up as a centralised volume. Note that you -should already have a MinIO or S3 service at your disposal, either -within your infrastructure or purchased from a third-party, before -configuring the bucket: the commands here will only connect to the -bucket and configure it for the use with |product|. +A centralised volume is a mandatory requirement to configure a |rur| +scenario. This section explains the commands required to configure a +MinIO or S3 bucket in Carbonio and set it up as a centralised +volume. Note that you should already have a MinIO or S3 service at +your disposal, either within your infrastructure or purchased from a +third-party, before configuring the bucket: the commands here will +only connect to the bucket and configure it for use with +|product|. All commands in this section must be executed as the |zu|.
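Since every command in this section reuses the same connection details, it can help to collect them once as shell variables before starting. All values below are placeholders for illustration only, not defaults — substitute your own endpoint, bucket name, and credentials.

```shell
# Placeholder bucket details -- every value here is an example to replace.
S3_ENDPOINT="https://s3.example.com:9000"   # MinIO or S3 endpoint URL
S3_BUCKET="carbonio-centralised"            # bucket used as centralised volume
S3_ACCESS_KEY="EXAMPLEACCESSKEY"            # access key for the bucket
S3_SECRET_KEY="EXAMPLESECRETKEY"            # secret key for the bucket

echo "Configuring bucket ${S3_BUCKET} at ${S3_ENDPOINT}"
```

Referencing the variables in the subsequent commands avoids typos when the same values appear several times.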
Remember to replace all the example values with values suitable for your diff --git a/source/carbonio/install/scenarios/redundantwithusermailreplica/standard-installation.rst b/source/carbonio/install/scenarios/redundantwithusermailreplica/standard-installation.rst new file mode 100644 index 000000000..1f7b8aaa1 --- /dev/null +++ b/source/carbonio/install/scenarios/redundantwithusermailreplica/standard-installation.rst @@ -0,0 +1,44 @@ +.. _rur-install: + +=========================================== + |carbonio| Preliminaries and Installation +=========================================== + +The |ur| setup for |product| builds on the **Scenario +Redundant**. Therefore, it is essential to have a working installation +of that Scenario before starting to deploy the |ur| setup: if you do +not have it yet, please refer to the installation procedure of the +:ref:`scenario-redundant`, then you can proceed to the next +section, :ref:`rur-conf`. + +Differences with Scenario Redundant +=================================== + +These are the main differences in this Scenario, compared to the +starting Scenario Redundant. + +.. rubric:: Master/Master |ds| + +This scenario includes a **Master/Master** |ds| setup, while the +Scenario Redundant uses a *Master/Slave* one. After you complete the deployment +of this scenario, you will see two Master |ds|\s, one on +srv1.example.com and the second on srv3.example.com, and one Slave +|ds| on srv2.example.com. + +.. rubric:: User Mail Replica + +|ur| is a replication mechanism that allows the Mailstore service to become +*stateless* and keep multiple instances of a mailbox. + +.. seealso:: The |ur| functionality, along with various commands to + manage and interact with it via the CLI, is described in Section :ref:`activereplica`. + +.. rubric:: Centralised Storage + +This is a requirement for the |ur|, to make sure that all updates to +the mailboxes remain consistent across all Mailstores. + +..
rubric:: PostgreSQL HA + +Thanks to ``patroni``, this scenario features a replicated PostgreSQL +in High Availability. diff --git a/source/carbonio/install/scenarios/ha/ha-configuration.rst b/source/carbonio/install/scenarios/redundantwithusermailreplica/ur-configuration.rst similarity index 75% rename from source/carbonio/install/scenarios/ha/ha-configuration.rst rename to source/carbonio/install/scenarios/redundantwithusermailreplica/ur-configuration.rst index 9ae5b47a0..617c7cfbe 100644 --- a/source/carbonio/install/scenarios/ha/ha-configuration.rst +++ b/source/carbonio/install/scenarios/redundantwithusermailreplica/ur-configuration.rst @@ -1,13 +1,13 @@ -.. _ha-conf: +.. _rur-conf: -Carbonio HA Configuration -========================= +Carbonio |ur| Configuration +=========================== -The main part of the installation is the set up of the HA +The main part of the installation is the setup of the |ur| infrastructure, which will be built on the scenario described in the -:ref:`previous section `. +:ref:`previous section `. -In order to complete the HA configuration, you need access to the +In order to complete the |ur| configuration, you need access to the Ansible Control Node and the following items: #. The inventory file you used in the previous section, which you must @@ -38,13 +38,13 @@ successfully, you should have the following inventory files: - inventory_consulpassword -To configure the inventory for HA installation, you will need to add +To configure the inventory for |rur| installation, you will need to add new groups and add specific variables to the :file:`inventory` -file. Please read the following advises if you plan to add the HA +file. Please read the following advice if you plan to add the |ur| infrastructure to a different Node than the one we will use in the remainder of the scenario. -.. card:: Guidelines for Components in HA Configuration +.. 
card:: Guidelines for Components in |ur| Configuration The initial Components assigned during the standard installation (i.e., as **master** for LDAP or **primary** for PostgreSQL) should remain @@ -58,10 +58,10 @@ remainder of the scenario. - If you plan to add extra master servers, configure them with roles **mmr** for Directory Server and **secondary** for - PostgreSQL in the HA inventory file. + PostgreSQL in the Ansible inventory file. This approach ensures that the pre-existing configurations and - initializations remain stable and compatible with the HA + initializations remain stable and compatible with the |ur| deployment. The two new groups to add at the bottom of the file are: @@ -75,9 +75,9 @@ The two new groups to add at the bottom of the file are: #kafka group [kafka] - svc1.example.com broker_id=1 - svc2.example.com broker_id=2 - svc3.example.com broker_id=3 + srv1.example.com broker_id=1 + srv2.example.com broker_id=2 + srv3.example.com broker_id=3 #. ``zookeeper_servers`` group, which will point to the Nodes where :command:`zookeeper` will be installed: these are the three Cluster @@ -88,9 +88,9 @@ The two new groups to add at the bottom of the file are: #zookeeper_servers group [zookeeper_servers] - svc1.example.com zookeeper_id=1 - svc2.example.com zookeeper_id=2 - svc3.example.com zookeeper_id=3 + srv1.example.com zookeeper_id=1 + srv2.example.com zookeeper_id=2 + srv3.example.com zookeeper_id=3 You also need to add variables to existing groups. @@ -103,8 +103,8 @@ You need also to add variable to existing groups. #postgresServers group [postgresServers] - svc1.example.com postgres_version=16 patroni_role=primary - svc2.example.com postgres_version=16 patroni_role=secondary + srv1.example.com postgres_version=16 patroni_role=primary + srv2.example.com postgres_version=16 patroni_role=secondary #.
The variable ``ldap_role`` must be added to the ``masterDirectoryServers`` group, and can assume the values @@ -114,30 +114,29 @@ You need also to add variable to existing groups. .. code:: console #masterDirectoryServers group [masterDirectoryServers] - svc1.example.com ldap_role=master - svc2.example.com ldap_role=mmr + srv1.example.com ldap_role=master + srv3.example.com ldap_role=mmr #. The ``dbsConnectorServers`` group must be filled out. DB Connectors - will be moved from Postgres server to servers in - ``[dbsConnectorServers]`` for HA. In our scenario we move them to - the Node hosting the Mailstore & Provisioning Component: - + will be moved from the Postgres Node to both Mailstore & + Provisioning Nodes, because at least one of them must always be + available at any time and provide |ur|. .. code:: console #dbsConnectorServers group [dbsConnectorServers] - mbox1.example.com - mbox2.example.com + srv8.example.com + srv9.example.com The complete inventory file, filled according to the directions above, can be seen and downloaded here. -.. dropdown:: Inventory - "HA" Scenario +.. dropdown:: Inventory - |rur| Scenario :open: - :download:`Download_inventory ` + :download:`Download_inventory ` - .. literalinclude:: /playbook/carbonio-inventory-ha-complete + .. literalinclude:: /playbook/carbonio-inventory-rur-complete Install Zookeeper and Kafka --------------------------- @@ -162,10 +161,14 @@ PostgreSQL replica # ansible-playbook -i inventory zxbot.carbonio_patroni.carbonio_replica_postgres_install +.. we need to wait for changes in the ansible playbook. While the + question has been rephrased and greenlit, the text of the answers + has not yet been decided. + Before starting the HAProxy installation, note that during the installation you will be prompted with the following question:: - Is this a full HA installation? (yes/no) + Do you want to enable MMR LDAP replica? (yes/no) - If you answer `yes`, HAProxy will be installed on all servers except the LDAP servers.
- If you answer `no`, HAProxy will only be installed on the `dbconnectors`. @@ -194,7 +197,7 @@ collection: Promote Multi Master LDAP ------------------------- -It is needed only if replica is installed +This step is needed only if the Directory Replica is installed. .. code:: console diff --git a/source/carbonio/install/scenarios/scenario-fullredundant.rst b/source/carbonio/install/scenarios/scenario-redundant.rst similarity index 92% rename from source/carbonio/install/scenarios/scenario-fullredundant.rst rename to source/carbonio/install/scenarios/scenario-redundant.rst index f1aaa0a54..17feba7f9 100644 --- a/source/carbonio/install/scenarios/scenario-fullredundant.rst +++ b/source/carbonio/install/scenarios/scenario-redundant.rst @@ -1,15 +1,16 @@ -.. _scenario-fullredundant: +.. _scenario-redundant: -======================== - Scenario Full Redundant -======================== +==================== + Scenario Redundant +==================== This scenario features all |product| functionalities and its intended use is any infrastructure that requires scalability and redundancy. Due to the large number of Nodes (15) that compose the |product| infrastructure, this scenario is designed to be deployed by using the -:ref:`scenario-rd-playbook`. +:ref:`scenario-rd-playbook`, although you can still proceed with a +manual installation. Remember to :ref:`configure the internal network ` before starting the deployment. @@ -46,7 +47,7 @@ The following ports must be opened on the :ref:`external network `, i.e., they are required for proper access to |product| from the Internet. -.. table:: Forwarded ports in Scenario "Full Redundant". +.. table:: Forwarded ports in Scenario Redundant.
+-------------------+--------------------------+-------------------+ | Public hostname | Ports & Service | Balanced to | diff --git a/source/carbonio/install/scenarios/scenario-ha.rst b/source/carbonio/install/scenarios/scenario-redundantwithusermailreplica.rst similarity index 57% rename from source/carbonio/install/scenarios/scenario-ha.rst rename to source/carbonio/install/scenarios/scenario-redundantwithusermailreplica.rst index 6f8b74042..a0b8b3d4e 100644 --- a/source/carbonio/install/scenarios/scenario-ha.rst +++ b/source/carbonio/install/scenarios/scenario-redundantwithusermailreplica.rst @@ -1,98 +1,79 @@ -.. _scenario-ha: +.. _scenario-rur: -============= - Scenario HA -============= +================ + Scenario |rur| +================ -This section describes a |product| infrastructure which includes Components -redundancy and |ha|. The number of required Nodes, the necessary steps, -and the overall complexity involved require to pay attention to each -task that needs to be carried out. +This section describes a |product| infrastructure that builds on the +:ref:`scenario-redundant` and adds the necessary components to provide +Component redundancy and |ur|. + +The number of required Nodes, the necessary steps, and the overall +complexity involved require careful attention to each task that needs +to be carried out. The installation of this scenario can be carried out **using Ansible -only**, so if you do not have it installed yet please refer to -Section :ref:`ansible-setup`: there you will find directions for its setup. +only**, so if you do not have it installed yet please refer to Section +:ref:`ansible-setup`: there you will find directions for its setup.
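The clustered services deployed in this scenario (Kafka, Zookeeper, PostgreSQL via ``patroni``) stay consistent only while a strict majority of their members is reachable, which is the reasoning behind the three-Node Cluster used here. A quick, illustrative sketch of the majority-quorum arithmetic:

```shell
# Majority quorum: floor(n/2) + 1 members must be reachable;
# the remainder can fail without risking split-brain.
quorum()    { echo $(( $1 / 2 + 1 )); }
tolerated() { echo $(( $1 - ($1 / 2 + 1) )); }

for n in 2 3 4 5; do
    echo "$n nodes: quorum $(quorum $n), tolerated failures $(tolerated $n)"
done
# 3 nodes: quorum 2, tolerated failures 1
# 4 nodes: quorum 3, tolerated failures 1
```

Three Nodes is the smallest cluster size that survives the loss of a member, and a fourth Node adds cost without increasing failure tolerance, which is why the Cluster service is sized at exactly three.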
This section covers the required components to set up the scenario, including load balancers, a Kafka cluster, a PostgreSQL cluster, an -object storage system like Minio or S3, and a multi-master Carbonio -Directory Server. A step-by-step approach to setting up VMs, -configuring centralised storage, and deploying HA, will guide you in +Object Storage system like Minio or S3, and a multi-master Carbonio +Directory Server. A step-by-step approach to setting up the Nodes, +configuring centralised storage, and deploying |ur|, will guide you in the procedure. -.. _ha-procedure: +.. _rur-procedure: Procedure Overview ================== The procedure to install this scenario is long and complex and it is divided into various parts for simplicity and to allow you to follow it -easily. In the remainder of this page you find a scenario overview, +easily. +In the remainder of this page you find a scenario overview, requirements, and pre-installation tasks. The rest of the procedure consists of dedicated, self-contained guides, one for each of the parts required to successfully complete the procedure and use the |product| infrastructure. In more detail: -#. :ref:`ha-install` describes how to install the scenario proposed in - this page. - -#. :ref:`ha-conf` shows how to install the |ha| components and - configure them to introduce HA in the scenario +#. :ref:`rur-install` describes how to install the scenario proposed in + this page -#. :ref:`ha_promotion` introduces **habeat**, |product|'s python tool - to ensure automatic promotion of a Mesh Service in case the master - becomes unavailable +#. :ref:`rur-conf` shows how to install the |ur| Components and + configure them -#. :ref:`ha-storage` guides you in the creation of a centralised MinIO +#. :ref:`rur-storage` guides you in the creation of a centralised MinIO or S3 bucket -#. :ref:`ha-replica` provides a scripts to activate a Directory - Replica - -#.
:ref:`ha-checks-scenario` contains a number of commands to check the status - of HA and related services. +#. :ref:`rur-checks` contains a number of commands to check + the status of |ur| and related services. .. note:: The parts must be executed in their entirety and in the order given to successfully complete the procedure and start using the |product| infrastructure in this scenario. We strongly suggest looking through the whole procedure to become -acquainted with the procedure. +acquainted with it and make sure you have no doubts before actually +starting the installation. -.. _ha-scenario: +.. _rur-scenario-overview: Scenario Overview ================= -To install a |ha| |carbonio| infrastructure, you need to ensure -redundancy for all critical services. +To install Scenario |rur| in a |carbonio| infrastructure, you need to +ensure redundancy for all critical services. -In a Carbonio HA setup, each Component except Monitoring is deployed -redundantly across multiple nodes. This setup guarantees continuous -service availability, even in the event of individual node +In a Carbonio |ur| setup, each Component except Monitoring is deployed +redundantly across multiple Nodes. This setup guarantees continuous +service availability, even in the event of individual Node failures. Below is the recommended Node distribution and configuration for each service to achieve redundancy and optimal performance, with centralised S3 storage. -The following table summarises the Node distribution and redundancy -requirements for each Carbonio service in a 5-node HA setup: - -.. _tab-ha-nodes: - -.. csv-table:: The Node distribution in the HA scenario described here.
- :header: "**Service/Component**", **Primary Nodes**", "**Secondary** (Not full HA) **Nodes**", "**HA Nodes**", "**Total Nodes**" - :widths: 36, 16, 16, 16, 16 - - "**MTA**", "1", "", "1", "2" - "**Proxy**", "1", "", "1", "2" - "**Mailstore & Provisioning**", "1", "", "1", "2" - "**Cluster**", "3", "", "N/A", "3" - "**Files, Preview, and Docs**", "1", "", "1", "2" - "**Video**", "1", "1", "N/A", "2" - "**Chats**", "1", "1", "N/A", "2" - -Each service, except for the Cluster service, has a mirrored HA node, +Each service, except for the Cluster service, has a mirrored node, creating a reliable failover configuration. The **(Core) Cluster service** provides all the functionalities of a *Core Node* (Database, Mesh Server, and Directory Service) plus the Kafka and Zookeeper @@ -100,16 +81,16 @@ software, which provide high-reliability services used by |product|: stream-processing and distributed synchronisation of configuration information, respectively. The configuration of the Cluster service includes three nodes to maintain quorum and prevent split-brain -scenarios, ensuring stability in an HA environment. +scenarios, ensuring stability in the environment. -.. _ha-req: +.. _rur-req: Requirements ============ -- Each node must satisfy the overall :ref:`software-requirements` and :ref:`hw-requirements` +- Each Node must satisfy the overall :ref:`software-requirements` and :ref:`hw-requirements` -- To implement an HA |carbonio| infrastructure, load-balancers are required +- To implement a |rur| |carbonio| infrastructure, load-balancers are required in front of services that should be always available. Load-balancers are not included in |product|: an Open Source or commercial balancer can be used, with the requirement that it must support per-port TCP balancing. 
@@ -121,98 +102,80 @@ Requirements - An object storage like MinIO or S3 -- An additional carbonio-directory-server node configured in *MultiMaster* mode (**mmr**) +- An additional carbonio-directory-server Node configured in *MultiMaster* mode (**mmr**) -.. _ha-node-spec: +.. _rur-Node-spec: Detailed Node Specifications ---------------------------- -To meet HA requirements, each Node should meet the following +To meet |rur| requirements, each Component should meet the following recommended specifications: -.. list-table:: - :header-rows: 1 - :widths: 15 25 10 30 40 - - * - Nodes - - Component - - VM Count - - Purpose - - Configuration - * - MTA - - Mail Transfer Agent (MTA) - - 2 (1 primary + 1 HA) - - Ensures continuous mail transfer and reception, preventing downtime - - Both nodes are identically configured to handle failover, so if - one MTA node experiences an issue, the other seamlessly takes - over to maintain service continuity - * - Proxy - - Proxy - - 2 (1 primary + 1 HA) - - Manages incoming and outgoing client requests, providing - customers with consistent access to mail services - - Identical setup across both nodes enables a smooth transition - if the primary node fails, ensuring uninterrupted access - * - Mailstore - - Mailstore - - 2 (1 primary + 1 HA) - - Responsible for mailbox storage and retrieval, utilising - centralised S3 storage to ensure data availability - - Both nodes share S3 storage, ensuring real-time data - redundancy, so customer data is always accessible - * - Cluster - - Core Cluster Services (Postgres, Service Mesh Server, Directory Service, Kafka, and Zookeeper) - - 3 (for quorum maintenance) - - Manages core functions for cluster maintenance, including high - availability and distributed consensus - - A three-node setup prevents split-brain scenarios, ensuring - uninterrupted services by maintaining quorum even if one node - goes down - * - File/Preview/Docs - - File, Preview, Tasks and Document Management - - 2 (1 primary + 1 
HA) - - Supports document handling, previews, and other file-related functions - - Redundant nodes ensure that document services are always - available, minimizing any impact from node failure - * - Video - - Video Services - - 2 (1 primary + 1 secondary) - - Supports video functionality for user communication - - Both nodes provide redundancy of video services - * - Chats - - Chats - - 2 (1 primary + 1 secondary) - - Supports chat functionality for user communication - - Both nodes provide redundancy of chat services - -.. warning:: Currently, the carbonio-message-broker and carbonio-message-dispatcher services - are not yet able to run in High Availability mode. - -.. _ha-storage-req: +.. csv-table:: + :header: "Component", "Purpose", "Configuration" + :widths: 20 40 40 + + "Mail Transfer Agent (MTA)", "Ensures continuous mail transfer and + reception, preventing downtime", "Both Nodes are identically + configured to handle failover, so if one MTA Node experiences an + issue, the other seamlessly takes over to maintain service + continuity" + "Proxy", "Manages incoming and outgoing client requests, providing + customers with consistent access to mail services", "Identical + setup across both Nodes enables a smooth transition if the primary + Node fails, ensuring uninterrupted access" + "Mailstore", "Responsible for mailbox storage and retrieval, + utilising centralised S3 storage to ensure continuous data + availability", "Both Nodes share S3 storage, ensuring real-time + data redundancy, so customer data is always accessible" + "Core Cluster Services [1]_", "Manage core functions for cluster + maintenance, including high availability and distributed + consensus", "A three-Node setup prevents split-brain scenarios, + ensuring uninterrupted services by maintaining quorum even if one + Node goes down" + "Files, Preview, Tasks, and Docs", "Supports document handling, + previews, and other file-related functions", "Redundant Nodes + ensure that document services are 
always available, minimizing any + impact from Node failure" "Video Services", "Supports video functionality for user + communication", "Both Nodes provide redundancy of video services" "Chats", "Supports chat functionality for communication between + users", "Both Nodes provide redundancy of chat services" + +.. [1] Core Cluster Services are Postgres, Service Mesh Server, + Directory Service, Kafka, and Zookeeper + +The following software packages installed on a |product| infrastructure do not +support redundancy; therefore, only a single instance of each can be +installed and run at a time within the infrastructure: +``carbonio-message-broker`` and ``carbonio-message-dispatcher`` are +used internally by |product|, while the :command:`carbonio-certbot` +command is used to generate and renew the Let's Encrypt certificates. + +.. _rur-storage-req: Centralised S3 Storage Requirements ----------------------------------- - **Storage Performance**: A high-performance, centralized S3 storage - solution is crucial for Carbonio Mailstore nodes. The centralized + solution is crucial for Carbonio Mailstore Nodes. The centralized storage must be fast enough to handle real-time data retrieval and - storage across nodes, ensuring that data access times remain + storage across Nodes, ensuring that data access times remain consistent and efficient. - **Shared Access**: The S3 storage must be accessible to both Carbonio - Mailstore nodes, facilitating redundancy in data storage and - minimizing potential data loss in the event of a node failure. + Mailstore Nodes, facilitating redundancy in data storage and + minimizing potential data loss in the event of a Node failure. -.. 
_rur-checks-scenario: Pre-installation checks ======================= - The following is a list of essential pre-installation checks that you should carry out to ensure your setup is properly configured for a -|product| |ha| installation: +|product| |rur| installation: After all the software and hardware requirements are satisfied, here are some tasks to carry out before attempting the installation and a @@ -298,7 +261,7 @@ respectively. These will be used in the remainder of this section. the Primary storage mounted on :file:`/opt/` .. - * Cluster service (see :ref:`ha-scenario`) must have the root + * Cluster service (see :ref:`rur-scenario-overview`) must have the root partition :file:`/` of the size specified in the sizing document shared with partner or customer:: @@ -338,9 +301,7 @@ respectively. These will be used in the remainder of this section. :hidden: :glob: - ha/standard-installation.rst - ha/ha-configuration.rst - ha/object-storage.rst - ha/account-promotion.rst - ha/activate-replica.rst - ha/checks-status.rst + redundantwithusermailreplica/standard-installation.rst + redundantwithusermailreplica/ur-configuration.rst + redundantwithusermailreplica/object-storage.rst + redundantwithusermailreplica/checks-status.rst diff --git a/source/carbonio/install/scenarios/single/manual.rst b/source/carbonio/install/scenarios/single/manual.rst index f58543edc..df5da9990 100644 --- a/source/carbonio/install/scenarios/single/manual.rst +++ b/source/carbonio/install/scenarios/single/manual.rst @@ -133,11 +133,9 @@ repositories. .. _n1-s7: -.. dropdown:: Step 7: Enable and configure ``memcached`` +.. dropdown:: Step 7: Enable ``memcached`` .. include:: /_includes/_installation/_components/memcached-enable.rst - - .. include:: /_includes/_installation/_components/memcached.rst ..
_n1-s8: diff --git a/source/carbonio/monitor/ext_mon.rst b/source/carbonio/monitor/ext_mon.rst index a6abedcbd..b0c507b40 100644 --- a/source/carbonio/monitor/ext_mon.rst +++ b/source/carbonio/monitor/ext_mon.rst @@ -1,4 +1,4 @@ -Ports and paths useful for monitoring +Ports and Paths Useful for Monitoring ===================================== Effective monitoring is essential for maintaining the stability, security, @@ -106,7 +106,7 @@ to provide a comprehensive monitoring strategy. .. topic:: Component: Event streaming and other HA services - With Active Replica feature enabled the following are also necessary + With the |ur| feature enabled, the following are also necessary **Kafka** diff --git a/source/carbonio/playbook/carbonio-inventory-ha-complete b/source/carbonio/playbook/carbonio-inventory-ha-complete deleted file mode 100644 index 0c95a222f..000000000 --- a/source/carbonio/playbook/carbonio-inventory-ha-complete +++ /dev/null @@ -1,75 +0,0 @@ - -[kafka] -svc1.example.com broker_id=1 -svc2.example.com broker_id=2 -svc3.example.com broker_id=3 - -[zookeeper_servers] -svc1.example.com zookeeper_id=1 -svc2.example.com zookeeper_id=2 -svc3.example.com zookeeper_id=3 - -[postgresServers] -svc1.example.com postgres_version=16 patroni_role=primary -svc2.example.com postgres_version=16 patroni_role=secondary - -[masterDirectoryServers] -svc1.example.com ldap_role=master -svc2.example.com ldap_role=mmr - -[replicaDirectoryServers] - -[serviceDiscoverServers] -svc1.example.com -svc2.example.com -svc3.example.com - -[dbsConnectorServers] -mbox1.example.com -mbox2.example.com - -[mtaServers] -mta1.example.com -mta2.example.com - -[proxyServers] -proxy1.example.com -proxy2.example.con - -[proxyServers:vars] -#webmailHostname=webmailPublicHostname - -[applicationServers] -mbox1.example.com -mbox2.example.com - -[filesServers] -filesdocs1.example.com -filesdocs2.example.com - -[docsServers] -filesdocs1.example.com -filesdocs2.example.com - -[taskServers] 
-filesdocs1.example.com -filesdocs2.example.com - -[previewServers] -filesdocs1.example.com -filesdocs2.example.com - -[videoServers] -#hostname public_ip_address=x.y.z.t -video1.example.com public_ip_address=1.2.3.4 -video2.example.com public_ip_address=1.2.3.4 - -[workStreamServers] -wsc1.example.com -wsc2.example.com - -[prometheusServers] -svc3.example.com - -[syslogServer] -svc3.example.com diff --git a/source/carbonio/playbook/carbonio-inventory-fullredundant b/source/carbonio/playbook/carbonio-inventory-redundant similarity index 100% rename from source/carbonio/playbook/carbonio-inventory-fullredundant rename to source/carbonio/playbook/carbonio-inventory-redundant diff --git a/source/carbonio/playbook/carbonio-inventory-ha b/source/carbonio/playbook/carbonio-inventory-rur similarity index 100% rename from source/carbonio/playbook/carbonio-inventory-ha rename to source/carbonio/playbook/carbonio-inventory-rur diff --git a/source/carbonio/playbook/carbonio-inventory-rur-complete b/source/carbonio/playbook/carbonio-inventory-rur-complete new file mode 100644 index 000000000..303f48ead --- /dev/null +++ b/source/carbonio/playbook/carbonio-inventory-rur-complete @@ -0,0 +1,77 @@ +[kafka] +srv1.example.com broker_id=1 +srv2.example.com broker_id=2 +srv3.example.com broker_id=3 + +[zookeeper_servers] +srv1.example.com zookeeper_id=1 +srv2.example.com zookeeper_id=2 +srv3.example.com zookeeper_id=3 + +[postgresServers] +srv1.example.com postgres_version=16 patroni_role=primary +srv2.example.com postgres_version=16 patroni_role=secondary + +[masterDirectoryServers] +srv1.example.com ldap_role=master +srv3.example.com ldap_role=mmr + +[replicaDirectoryServers] +srv2.example.com + +[serviceDiscoverServers] +srv1.example.com +srv2.example.com +srv3.example.com + +[dbsConnectorServers] +srv8.example.com +srv9.example.com + +[mtaServers] +srv4.example.com +srv5.example.com + +[proxyServers] +srv6.example.com +srv7.example.com + +[proxyServers:vars] 
+webmailHostname=YourWebmailPublicHostname + +[applicationServers] +srv8.example.com +srv9.example.com + +[filesServers] +srv10.example.com +srv11.example.com + +[docsServers] +srv12.example.com +srv13.example.com + +[taskServers] +srv10.example.com +srv11.example.com + +[previewServers] +srv12.example.com +srv13.example.com + +### The IP address(es) might be the same, see section UDP Video +### Streaming + +[videoServers] +srv14.example.com public_ip_address=x.y.z.t +srv15.example.com public_ip_address=w.u.v.s + +[prometheusServers] +srv3.example.com + +[syslogServer] +srv3.example.com + +[workStreamServers] +srv10.example.com +srv11.example.com diff --git a/source/carbonio/postinstall/clamav-management.rst b/source/carbonio/postinstall/clamav-management.rst index e9364b557..4661cb8d4 100644 --- a/source/carbonio/postinstall/clamav-management.rst +++ b/source/carbonio/postinstall/clamav-management.rst @@ -13,6 +13,13 @@ ClamAV Signatures Updater .. include:: /_includes/_postinstallation/clamav-updater.rst +.. _clamav-conf-sigs: + +Configure Signatures Updater +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. include:: /_includes/_postinstallation/clamav-conf.rst + .. _clamav-remove-sigs: Remove Signatures Updater diff --git a/source/carbonio/report/toc.rst b/source/carbonio/report/toc.rst index 2a63d63e7..1159ce455 100644 --- a/source/carbonio/report/toc.rst +++ b/source/carbonio/report/toc.rst @@ -31,9 +31,83 @@ activities: Before You Open a Ticket ------------------------ -Before you open a Support Ticket, you should gather some information -and configuration values from |product| and its Components, to collect -the information that will be relevant for the Technical Support Team. +Before you open a Support Ticket, ensure that your |product| +infrastructure is fully updated, that is, it features the latest +version released and has the latest packages installed. 
If your +|product| infrastructure is equipped with a version of |product| older +than |version|, please follow the :ref:`appropriate upgrade procedure +<upgrade-procedure>`. If you already run the latest version, make sure +all the latest packages are installed by carrying out this two-step +procedure on each Node of your |product| infrastructure. + +.. rubric:: Step 1. Update package list. + +.. tab-set:: + + .. tab-item:: Ubuntu 22.04 + :sync: ubu22 + + .. code:: console + + # apt update + + .. tab-item:: RHEL 8 + :sync: rhel8 + + .. code:: console + + # dnf check-update + + .. tab-item:: Ubuntu 24.04 + :sync: ubu24 + + .. code:: console + + # apt update + + .. tab-item:: RHEL 9 + :sync: rhel9 + + .. code:: console + + # dnf check-update + +.. rubric:: Step 2. Install new packages, if any. + +.. tab-set:: + + .. tab-item:: Ubuntu 22.04 + :sync: ubu22 + + .. code:: console + + # apt upgrade + + .. tab-item:: RHEL 8 + :sync: rhel8 + + .. code:: console + + # dnf upgrade --best --allowerasing + + .. tab-item:: Ubuntu 24.04 + :sync: ubu24 + + .. code:: console + + # apt upgrade + + .. tab-item:: RHEL 9 + :sync: rhel9 + + .. code:: console + + # dnf upgrade --best --allowerasing + +After you have updated your |product| infrastructure, you should gather +some information and configuration values from |product| and its +Components, to collect the information that will be relevant for the +Technical Support Team. This section focuses on how to retrieve this information: You need to provide existing log files and the output of some commands and scripts diff --git a/source/carbonio/troubleshooting/ldap.rst b/source/carbonio/troubleshooting/ldap.rst index 7c0def31d..62053f7e1 100644 --- a/source/carbonio/troubleshooting/ldap.rst +++ b/source/carbonio/troubleshooting/ldap.rst @@ -1,25 +1,24 @@ .. _ts-ds: ================== - Directory Server + |ds| ================== In this section you can find directions and suggestions to deal with -issues arising from the Directory Server. 
+issues arising from the |ds|. .. _ts-ds-credentials: Update Credentials ================== - In all cases when it is advisable to change the password of the -Directory Server, follow the steps in the procedure described here. +|ds|, follow the steps in the procedure described here. .. note:: The procedure requires CLI access; all the commands must be executed as the |zu|. -Update Password on Master Directory Server +Update Password on Master |ds| ------------------------------------------ We start by defining a robust password @@ -28,7 +27,7 @@ We start by defining a robust password zextras$ export newLdapPsw="aGoodPassword" -Then change all the Directory Server passwords. +Then change all the |ds| passwords. .. code:: console @@ -60,7 +59,7 @@ In case the |product| infrastructure includes the each Node featuring the Component. Define the password, which must be the same as the one on the -Master Directory Server: +Master |ds|: .. code:: console @@ -91,7 +90,7 @@ Finally, remove the saved password: zextras$ unset newLdapPsw -As a final check, ensure the LDAP replica is working: +As a final check, ensure the Directory Replica is working: .. code:: console @@ -101,13 +100,13 @@ Align all Other Nodes --------------------- Define the password, which must be the same as the one on the -Master Directory Server: +Master |ds|: .. code:: console zextras$ export newLdapPsw="aGoodPassword" -Then change all the Directory Server passwords. +Then change all the |ds| passwords. .. code:: console diff --git a/source/carbonio/troubleshooting/mesh.rst b/source/carbonio/troubleshooting/mesh.rst deleted file mode 100644 index a3e0bf22b..000000000 --- a/source/carbonio/troubleshooting/mesh.rst +++ /dev/null @@ -1,34 +0,0 @@ - -.. _ts-mesh: - -======================== - |mesh| -======================== - -.. include:: /_includes/_ts/mesh.rst - -.. 
_ar-ts: - -Active Replica -============================== - -When you set up :ref:`activereplica`, the following commands can prove -useful to verify the status of the service. - -.. rubric:: Verify Configuration - -.. code:: console - - zextras$ carbonio config get global brokers - -.. rubric:: Verify Endpoint Availability - -.. code:: console - - zextras$ carbonio ha test 10.0.10.11:9092,10.0.10.12:9092,10.0.10.13:9092 - -.. rubric:: Restart the HA service - -.. code:: console - - zextras$ carbonio ha doRestartService module diff --git a/source/carbonio/troubleshooting/toc.rst b/source/carbonio/troubleshooting/toc.rst index a3dcbb76d..6e0f5cf12 100644 --- a/source/carbonio/troubleshooting/toc.rst +++ b/source/carbonio/troubleshooting/toc.rst @@ -28,19 +28,6 @@ of them. upgrade - .. grid-item-card:: |mesh| - :columns: 12 12 6 6 - :class-title: sd-font-weight-bold sd-fs-4 - :link-type: ref - :link: ts-mesh - - |mesh| problems - - .. toctree:: - :hidden: - - mesh - .. grid-item-card:: Directory Server :columns: 12 12 6 6 :class-title: sd-font-weight-bold sd-fs-4 @@ -132,8 +119,11 @@ of them. services + .. grid-item:: + :columns: 3 3 3 3 + .. grid-item-card:: Common Issues - :columns: 12 12 6 6 + :columns: 6 6 6 6 :class-title: sd-font-weight-bold sd-fs-4 :link-type: ref :link: ts-generic diff --git a/source/carbonio/upgrade/toc.rst b/source/carbonio/upgrade/toc.rst index 93d5e3e13..ba660f583 100644 --- a/source/carbonio/upgrade/toc.rst +++ b/source/carbonio/upgrade/toc.rst @@ -1,3 +1,5 @@ +.. 
_upgrade-procedure: + ================================ Upgrade to |product| |release| ================================ diff --git a/source/carbonio/upgrade/upgrade-older.rst b/source/carbonio/upgrade/upgrade-older.rst index d9da64d08..935dc87dc 100644 --- a/source/carbonio/upgrade/upgrade-older.rst +++ b/source/carbonio/upgrade/upgrade-older.rst @@ -55,13 +55,22 @@ the upgrade: Checklist --------- -The following packages should be moved to different nodes during the -upgrade procedure: +#. The following packages should be moved to different Nodes during the + upgrade procedure: + + * carbonio-user-management + * carbonio-storages + * carbonio-catalog + * carbonio-message-broker + +#. A new database for backups is added to |product|, so you will be + required to execute the following command during the upgrade of the + **Database Node** and then **reboot all the Nodes**: + + .. code:: console + + # PGPASSWORD=$DB_ADM_PWD carbonio-mailbox-db-bootstrap carbonio_adm 127.0.0.1 -#. carbonio-user-management -#. carbonio-storages -#. carbonio-catalog -#. carbonio-message-broker Upgrade |product| ----------------- @@ -70,7 +79,10 @@ Upgrade |product| .. include:: /_includes/_upgrade/ds.rst -.. include:: /_includes/_upgrade/first-part-cb.rst +Remember to start the upgrade from the Node featuring the Directory +Server, then proceed with all the other Nodes in the same order as the installation. + +.. include:: /_includes/_upgrade/first-part.rst .. grid:: 1 1 1 2 :gutter: 3 diff --git a/source/carbonio/upgrade/upgrade.rst b/source/carbonio/upgrade/upgrade.rst index 573753e99..110e448ec 100644 --- a/source/carbonio/upgrade/upgrade.rst +++ b/source/carbonio/upgrade/upgrade.rst @@ -38,9 +38,16 @@ next section. move some of the packages to the Database Component. The procedure to carry out this task can be found in Section :ref:`remove-pgpool`. -.. no checklist for this release - Checklist - --------- +Checklist +--------- + +#. 
A new database for backups is added to |product|, so you will be + required to execute the following command during the upgrade of the + **Database Node** and then **reboot all the Nodes**: + + .. code:: console + + # PGPASSWORD=$DB_ADM_PWD carbonio-mailbox-db-bootstrap carbonio_adm 127.0.0.1 .. _up-proc: @@ -68,7 +75,9 @@ of Nodes, their load, the speed of network connection, and so on. .. include:: /_includes/_upgrade/ds.rst +Remember to start the upgrade from the Node featuring the Directory +Server, then proceed with all the other Nodes in the same order as the installation. -.. include:: /_includes/_upgrade/first-part-cb.rst +.. include:: /_includes/_upgrade/first-part.rst .. include:: /_includes/_upgrade/second-part-cb.rst diff --git a/source/common/css/common.css b/source/common/css/common.css index 8c1611aa6..59d2f9e7b 100644 --- a/source/common/css/common.css +++ b/source/common/css/common.css @@ -238,3 +238,7 @@ button.copybtn { opacity: 1; color: var(--zx-color-flame) !important; } + +hr { + border: 1px solid var(--zx-color-carbon); +} diff --git a/source/common/replace.txt b/source/common/replace.txt index e616a8cee..5c434f872 100644 --- a/source/common/replace.txt +++ b/source/common/replace.txt @@ -24,6 +24,10 @@ .. |cwsc| replace:: |carbonio| Chats .. |wsc| replace:: Chats .. |wl| replace:: White-label +.. feature previously known as ha +.. |ur| replace:: User Mail Replica +.. |rur| replace:: Redundant with User Mail Replica +.. |ds| replace:: Directory Server .. 
common replacements for all products diff --git a/source/img/adminpanel/AP-landing-top.png b/source/img/adminpanel/AP-landing-top.png index 703b15951..5639dc3b0 100644 Binary files a/source/img/adminpanel/AP-landing-top.png and b/source/img/adminpanel/AP-landing-top.png differ diff --git a/source/img/adminpanel/new-account-details.png b/source/img/adminpanel/new-account-details.png index e310984f5..cc19f605e 100644 Binary files a/source/img/adminpanel/new-account-details.png and b/source/img/adminpanel/new-account-details.png differ diff --git a/source/img/carbonio/external.png b/source/img/carbonio/external.png deleted file mode 100644 index 1fb357174..000000000 Binary files a/source/img/carbonio/external.png and /dev/null differ diff --git a/source/img/carbonio/scenario-5-nodes-CE.png b/source/img/carbonio/scenario-5-nodes-CE.png index 01baf70f6..ded60c186 100644 Binary files a/source/img/carbonio/scenario-5-nodes-CE.png and b/source/img/carbonio/scenario-5-nodes-CE.png differ diff --git a/source/img/carbonio/scenario-fullsmall.png b/source/img/carbonio/scenario-fullsmall.png index 09ffd063f..024a7a570 100644 Binary files a/source/img/carbonio/scenario-fullsmall.png and b/source/img/carbonio/scenario-fullsmall.png differ diff --git a/source/img/carbonio/scenario-fullstandard.png b/source/img/carbonio/scenario-fullstandard.png index 7dd0b64ca..b3d48db73 100644 Binary files a/source/img/carbonio/scenario-fullstandard.png and b/source/img/carbonio/scenario-fullstandard.png differ diff --git a/source/img/carbonio/scenario-single-collaboration.png b/source/img/carbonio/scenario-single-collaboration.png index 2797fe326..aa4ec5cf2 100644 Binary files a/source/img/carbonio/scenario-single-collaboration.png and b/source/img/carbonio/scenario-single-collaboration.png differ diff --git a/source/img/carbonio/scenario-single-server-CE.png b/source/img/carbonio/scenario-single-server-CE.png new file mode 100644 index 000000000..cd7ced4e1 Binary files /dev/null and 
b/source/img/carbonio/scenario-single-server-CE.png differ diff --git a/source/img/carbonio/scenario-single-vs-ansible-with-optional-files-and-preview.png b/source/img/carbonio/scenario-single-vs-ansible-with-optional-files-and-preview.png index cfab6bc53..9b9fbc128 100644 Binary files a/source/img/carbonio/scenario-single-vs-ansible-with-optional-files-and-preview.png and b/source/img/carbonio/scenario-single-vs-ansible-with-optional-files-and-preview.png differ diff --git a/source/img/carbonio/scenario-single-vs.png b/source/img/carbonio/scenario-single-vs.png index 370f5aa9f..41a7fbf81 100644 Binary files a/source/img/carbonio/scenario-single-vs.png and b/source/img/carbonio/scenario-single-vs.png differ