[DRAFT] W-14847538 PCE 4.0 Documentation #327

Open
wants to merge 11 commits into base: v3.2
2 changes: 1 addition & 1 deletion antora.yml
@@ -1,5 +1,5 @@
name: private-cloud
title: Anypoint Platform Private Cloud Edition
version: '3.2'
version: '4.0'
nav:
- modules/ROOT/nav.adoc
8 changes: 2 additions & 6 deletions modules/ROOT/nav.adoc
@@ -5,13 +5,11 @@
* xref:anypoint-monitoring.adoc[Anypoint Monitoring]
* xref:install-checklist.adoc[Product Prerequisites]
** xref:prereq-platform.adoc[Environment Prerequisites]
*** xref:prereq-aws-terraform.adoc[Amazon Web Services (AWS) Prerequisites]
** xref:prereq-hardware.adoc[Hardware Prerequisites]
** xref:prereq-software.adoc[Software Prerequisites]
** xref:prereq-network.adoc[Network Prerequisites]
** xref:verify-nfs.adoc[NFS Prerequisites]
*** xref:troubleshoot-nfs.adoc[Troubleshoot NFS Errors]
** xref:prereq-gravity-check.adoc[Preinstallation Verification]
* xref:install-workflow.adoc[Installation Workflow]
* xref:config-workflow.adoc[Configure Anypoint Platform PCE]
** xref:install-disable-local-user.adoc[Delete the Local User after Installing Anypoint Platform PCE]
@@ -20,15 +18,13 @@
** xref:access-management-dns.adoc[Configure DNS in Anypoint Platform PCE]
** xref:access-management-security.adoc[Configure Security in Anypoint Platform PCE]
** xref:access-management-disclaimer.adoc[Configure a Disclaimer in Anypoint Platform PCE]
** xref:access-management-license.adoc[Update License on Anypoint Platform PCE]
* xref:operating-about.adoc[Manage Anypoint Platform PCE]
** xref:backup-and-disaster-recovery.adoc[Configure Backup and Restore in Anypoint Platform PCE]
** xref:managing-via-the-ops-center.adoc[Ops Center]
*** xref:ops-center-update-lic.adoc[Add a Product License Using Ops Center]
*** xref:config-alerts.adoc[Configure Alerts for Anypoint Platform PCE]
** xref:config-alerts.adoc[Configure Alerts for Anypoint Platform PCE]
** xref:config-add-proxy-allowlist.adoc[Allow Addresses for API Platform Proxies]
** xref:custom-policies.adoc[Upload Custom Policies and Publish Assets to Exchange Using the Maven Client]
** xref:config-studio.adoc[Configure Anypoint Studio for Anypoint Platform PCE]
** xref:restarting-a-node.adoc[Update Nodes]
** xref:ext-analytics-elk.adoc[Analyze Business and API Data Using ELK]
** xref:register-server.adoc[Troubleshoot Mule Runtime Engine Registration]
** xref:license-influx-update.adoc[Update InfluxDB License]
18 changes: 18 additions & 0 deletions modules/ROOT/pages/access-management-license.adoc
@@ -0,0 +1,18 @@
= Update License on Anypoint Platform PCE
ifndef::env-site,env-github[]
include::_attributes.adoc[]
endif::[]

When your product license expires, Access Management displays a message stating that the license has expired. The message remains until you upload a valid license.

To add a new Anypoint Platform Private Cloud Edition license:

. From Anypoint Platform, click *Access Management*.
. Click the *License* tab.
. In the *License* field, click *Choose file* and select the license file on your local system.
. Click *Save*.


== See Also

* xref:license-influx-update.adoc[Update InfluxDB License]
6 changes: 3 additions & 3 deletions modules/ROOT/pages/anypoint-monitoring.adoc
@@ -2,11 +2,11 @@

Anypoint Platform Private Cloud Edition (Anypoint Platform PCE) includes support for Anypoint Monitoring. Anypoint Monitoring is a suite of tools that provides feedback from Mule flows and components in your application network.

Anypoint Monitoring is an add-on to Anypoint Platform PCE. To use Anypoint Monitoring with Anypoint Platform PCE, contact your customer success representative.
Anypoint Monitoring is an add-on to Anypoint Platform PCE. To use Anypoint Monitoring with Anypoint Platform PCE, contact MuleSoft Professional Services.

Using Anypoint Monitoring with Anypoint Platform PCE requires the "Monitoring for On-Premises Management" subscription. This subscription entitles you to use two monitoring instances. A monitoring instance is an implementation of the 3-node Anypoint Monitoring configuration described in xref:supported-cluster-config.adoc[Supported Configurations for Anypoint Platform PCE]. Each monitoring instance is dedicated to a single Anypoint Platform PCE installation.

To use Anypoint Monitoring with more than two Anypoint Platform PCE installations, you must purchase additional monitoring subscriptions. Contact your customer success representative for more information.
To use Anypoint Monitoring with more than two Anypoint Platform PCE installations, you must purchase additional monitoring subscriptions. Contact MuleSoft Professional Services for more information.

== Supported Anypoint Monitoring Features

@@ -166,7 +166,7 @@ For information, see xref:prereq-hardware.adoc[Hardware Prerequisites].

== Backup and Restore

Anypoint Monitoring backup and restore are outside the normal gravity backup and restore mechanism, and require the additional procedures.
Anypoint Monitoring backup and restore are outside the normal PCE backup and restore mechanism, and require additional procedures.
For information, see xref:backup-and-disaster-recovery.adoc[Configure Backup and Restore for Anypoint Platform PCE].

== Uninstall Anypoint Monitoring
116 changes: 66 additions & 50 deletions modules/ROOT/pages/backup-and-disaster-recovery.adoc
@@ -9,29 +9,31 @@ Anypoint Platform Private Cloud Edition (Anypoint Platform PCE) requires you to

[IMPORTANT]
====
Backup and restore for Anypoint Monitoring and Anypoint Visualizer are perfored outside the normal Gravity backup and restore mechanism, and require a separate procedure.
Backup and restore for Anypoint Monitoring and Anypoint Visualizer are performed outside the normal PCE backup and restore mechanism, and require a separate procedure.
====

== Create a Backup

To create a backup, run the following command on any node in the cluster:

. To create a backup, run the following command:
+
----
gravity backup /var/lib/gravity/planet/share/backup.tar.gz
curl -k https://<platformDns>/platform/backup \
  -X POST \
  -H "Authorization: Bearer $token" \
  -H "Content-Type:application/json" \
  -d '{"nfs-server": "<nfsServer>", "nfs-path": "<nfsPath>", "backup-file-name": "backup.tar.gz" }'
----

This command creates an archive of the current system state in: `/var/lib/gravity/planet/share/backup.tar.gz`. You can pass the following optional parameters to the gravity backup command:

* `--follow`
. This command creates an archive of the current system state on the provided NFS server with the name `backup.tar.gz`. It also returns a JSON response with the status of the operation and the job name:
+
Outputs the backup job logs to stdout. In the following example, `tee` enables you to output to stdout and file at the same time:
----
gravity backup /var/lib/gravity/planet/share/backup.tar.gz --follow | tee -a /var/lib/gravity/planet/share/backup.log
{"success":"backup job triggered","jobName":"<anypoint-backup-job-id>"}
----

* `--timeout`
. To stream the backup job logs to stdout, run:
+
Specifies the deadline for the backup job, for example, 30s or 5m. Default is 20m.
----
kubectl logs jobs/<anypoint-backup-job-id> -f
----
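
Below is a minimal worked example of the full backup flow. The platform host, NFS server, export path, and job name are placeholders, not values from your environment; substitute your own, and make sure `$token` already holds a valid platform token.

----
# Placeholder values throughout; adjust for your environment.
curl -k https://anypoint.example.com/platform/backup \
  -X POST \
  -H "Authorization: Bearer $token" \
  -H "Content-Type:application/json" \
  -d '{"nfs-server": "10.0.0.5", "nfs-path": "/exports/pce-backups", "backup-file-name": "backup.tar.gz" }'

# Example response (the job name shown is illustrative):
# {"success":"backup job triggered","jobName":"anypoint-backup-a1b2c3"}

# Stream the job logs until the backup finishes:
kubectl logs jobs/anypoint-backup-a1b2c3 -f
----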

Backup contents:

@@ -66,34 +68,28 @@ To restore a system from your backup archive, use the original cluster or create
Some settings, including NFS, certificates, and DNS, are not backed up and must be configured for the environment in which the restore will be performed before running the restore.
====

[WARNING]
====
You cannot perform a backup between different versions of Anypoint Platform PCE. The new installation you create must use the same version of Anypoint Platform PCE as your backup.
====

. Log in to one of the nodes in your cluster.
. Move the compressed backup archive file to an NFS server.

. Move the compressed backup archive file to a folder on any main node in the environment to be restored. You can transfer this file securely by using the following command:
. Restore the cluster from the backup archive:
+
----
scp /backup-path/to-restore.tar.gz your_username@remotehost.edu:/var/lib/gravity/planet/share
curl -k https://<platformDns>/platform/restore \
  -X POST \
  -H "Authorization: Bearer $token" \
  -H "Content-Type:application/json" \
  -d '{"nfs-server": "<nfsServer>", "nfs-path": "<nfsPath>", "backup-file-name": "backup.tar.gz" }'
----

. Restore the cluster from the backup archive:
. The command outputs a JSON response with the status of the operation and the job name:
+
----
sudo gravity restore /var/lib/gravity/planet/share/to-restore.tar.gz
{"success":"restore job triggered","jobName":"<anypoint-restore-job-id>"}
----
+
You can pass the following optional parameters:
+
* `--follow`
+
Outputs the restore job logs to stdout.

* `--timeout`
. To stream the restore job logs to stdout, run:
+
Specifies the maximum time allowed for the restore operation, for example, 30s, 5m, and so on. The default value is `80m`. If you have a very large number of applications and files, use a minimum of `160m` as the `--timeout` value.
----
kubectl logs jobs/<anypoint-restore-job-id> -f
----

. Wait for the operation to complete, which typically takes 40 to 60 minutes.
. Manually restore the NFS files if you are not reusing the original NFS.
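
If you prefer not to watch the restore logs, one way to block until the job finishes is the standard `kubectl wait` command, sketched below. The job name is the one returned by the restore request, and the timeout is a placeholder to adjust for your data volume.

----
kubectl wait --for=condition=complete job/<anypoint-restore-job-id> --timeout=90m
----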
@@ -102,42 +98,62 @@ Specifies the maximum time allowed for the restore operation, for example, 30s,

If Anypoint Monitoring and Anypoint Visualizer are enabled, follow these procedures to back up and restore them.

Anypoint Monitoring and Anypoint Visualizer backup and restore are outside the normal Gravity backup and restore mechanism, and require additional procedures. These components store metrics and other information, but they do not store platform configuration. Platform configuration information is handled by the system backup and restore procedures.
Anypoint Monitoring and Anypoint Visualizer backup and restore are outside the normal PCE backup and restore mechanism, and require additional procedures. These components store metrics and other information, but they do not store platform configuration. Platform configuration information is handled by the system backup and restore procedures.

Anypoint Monitoring and Anypoint Visualizer must back up the data of the component InfluxDB (`dias-prov-k8s-am-influxdb-comp`). Anypoint Monitoring and Anypoint Visualizer both store their configuration data in a Postgres database. The default Gravity mechanism backs up and restores this database.
Anypoint Monitoring and Anypoint Visualizer must back up the data of the component InfluxDB (`dias-prov-k8s-am-influxdb-comp`). Anypoint Monitoring and Anypoint Visualizer both store their configuration data in a Postgres database. The default PCE mechanism backs up and restores this database.

Anypoint Monitoring and Anypoint Visualizer requires 4 TB volumes for `amv` nodes. A full InfluxDB backup can be up to 8 TB of data. (Only two data nodes are operative.) You must provide enough free space on the target directory for the entire backup.
Anypoint Monitoring and Anypoint Visualizer require 4 TB volumes for `amv` nodes. A full InfluxDB backup can be up to 8 TB of data. (Only two data nodes are operative.) You must provide enough free space on the target NFS server for the entire backup.

Restore time is proportional to the size of the backup. During restore, existing data on the target cluster will be backed up and erased upon user approval.

The backup and restore script must be run on a main node and must have `sudo` access because it uses `kubectl`. The script validates the restore disk size, databases, measurements, retention policies, and series cardinality for all measurements in each database.
The job validates the restore disk size, databases, measurements, retention policies, and series cardinality for all measurements in each database.

=== Procedure

The InfluxDB backup and restore script is in the Anypoint Platform PCE environment: `/var/lib/gravity/site/packages/unpacked/gravitational.io/anypoint/3.2.0*/resources/kubernetes/amv-backup-restore/amv-backup-restore.sh`
==== AMV Backup

. Run the AMV backup operation:
+
----
curl -k https://<platformDns>/platform/amv/backup \
  -X POST \
  -H "Authorization: Bearer $token" \
  -H "Content-Type:application/json" \
  -d '{"nfs-server": "<nfsServer>", "nfs-path": "<nfsPath>", "backup-file-name": "amv_backup.tar.gz" }'
----
. The command outputs a JSON response with the status of the operation and the job name:
+
----
{"success":"backup job triggered","jobName":"<amv-backup-job-id>"}
----
. To stream the backup job logs to stdout:
+
----
kubectl logs jobs/<amv-backup-job-id> -f
----
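
To confirm that the archive landed on the share, you can mount the NFS export and list the file. This is a sketch with a placeholder mount point; any standard NFS client works.

----
sudo mount -t nfs <nfsServer>:<nfsPath> /mnt/pce-backup
ls -lh /mnt/pce-backup/amv_backup.tar.gz
----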

The script has these parameters:
==== AMV Restore

. Move the compressed backup archive file to an NFS server.

. Restore the cluster from the backup archive:
+
----
-a : (Required) action to perform : backup or restore. [string].
Default NO DEFAULT ACTION.
-va : (optional) verbose output.
Default non-verbose.
-f : (Required) name of the tar file.
Default for backup : dias-prov-k8s-am-influxdb-comp_YYYYMMD. [string].
Default for restore : /var/lib/data/influxdb/backup/dias-prov-k8s-am-influxdb-comp_YYYYMMDD.tar.gz. [Full Path].
-d : (Required for backup) full path to location where backup tar file is stored.
Default for backup : /var/lib/data/influxdb/backup. [Full Path].
Default for restore : same as tar file path. [Full Path].
-s : (optional) Optional sub-component filter, use only for backup of only meta or only data.
-h : Print HELP.
curl -k https://<platformDns>/platform/amv/restore \
  -X POST \
  -H "Authorization: Bearer $token" \
  -H "Content-Type:application/json" \
  -d '{"nfs-server": "<nfsServer>", "nfs-path": "<nfsPath>", "backup-file-name": "amv_backup.tar.gz" }'
----

=== Examples
. The command outputs a JSON response with the status of the operation and the job name:
+
----
{"success":"restore job triggered","jobName":"<amv-restore-job-id>"}
----

Verbose backup: `sudo bash amv-backup-restore.sh -va backup -f 2020-01-01-influxdb-backup -d path/to/backup/dir`
. To stream the restore job logs to stdout, run:
+
----
kubectl logs jobs/<amv-restore-job-id> -f
----
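
To check completion without streaming the logs, you can watch the job status instead. This uses standard kubectl; the job name comes from the restore response.

----
kubectl get job <amv-restore-job-id> --watch
----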

Non-verbose restore: `sudo bash amv-backup-restore.sh -a restore -f /var/lib/data/influxdb/backup/dias-prov-k8s-am-influxdb-comp_YYYYMMDD.tar.gz`

== See Also

38 changes: 6 additions & 32 deletions modules/ROOT/pages/config-add-proxy-allowlist.adoc
@@ -3,33 +3,7 @@ ifndef::env-site,env-github[]
include::_attributes.adoc[]
endif::[]

By default, proxy requests are enabled for the domain name where the platform runs.

NOTE: For versions 3.2.2 and later, refer to Trusted Domains.

To allow addresses in versions 3.2.0 and 3.2.1:

. Log in to Ops Center and select *Kubernetes* on the left side-bar.
. Select the `api-manager` namespace from the drop-down menu on the top, and then select *Configmaps*. Find the `api-platform-web-env` configmap and then click on `Edit Config Map`.
. Select the *APW_FEATURES_PROXYWHITELIST* tab and ensure that value is set to `true`. (You'll have to hover your mouse over the tabs to get a tooltip with the full name to find the right tab)
+
If not, set it to `true` and then select `Apply`.
. Select the *APW_PROXYWHITELIST_RULES* tab.
. Write your rules separated by commas.
+
The following example defines regular expressions that allow requests to be made to the `*.somewhere.com/*` and `*.somewhereelse.com/*` domains, where `*` is any part of a DNS name or URL:

----
.*\.somewhere\.com,.*\.somewhereelse\.com
----

. Select *Apply* to save changes to the `api-platform-web-env` config map.
. Re-create the pods to ensure that each node in the cluster uses the most recent configuration:
+
----
kubectl delete pod -n api-manager -l component=api-platform-web
----

By default, proxy requests are enabled for the domain name where the platform runs.

== Trusted Domains

@@ -72,11 +46,11 @@ API Console Proxy continues to be unauthenticated but it allows unauthenticated

To change the authentication method from strict to non-strict, follow these steps:

. Log in to Ops Center and select *Kubernetes* on the left side-bar.
. Select the `api-console-proxy` namespace from the drop-down menu on the top, and select *Configmaps*.
. Locate the `service-env` configmap and then click on *Edit Config Map*.
. Select the *STRICT_AUTHENTICATION_FF* tab and ensure that the value is set to `false`. (To find the right tab, hover your mouse over the tabs to get a tooltip with the full name.)
. Click *Apply* to save changes to the `service-env` config map.
. Run the following command:
+
----
kubectl patch cm service-env -n api-console-proxy --type merge -p '{"data":{"STRICT_AUTHENTICATION_FF":"false"}}'
----
. Recreate the pods to ensure that each node in the cluster uses the most recent configuration:
+
----