399 changes: 0 additions & 399 deletions docs/content/guides/v4tov5.md

This file was deleted.

31 changes: 1 addition & 30 deletions docs/content/installation/helm.md
@@ -90,36 +90,7 @@ PGO will check for updates upon startup and once every 24 hours. Any errors in c

For more information about collected data, see the Crunchy Data [collection notice](https://www.crunchydata.com/developers/data-collection-notice).

## Upgrade and Uninstall

Once PGO has been installed, it can then be upgraded using the `helm upgrade` command.
However, before running the `upgrade` command, any CustomResourceDefinitions (CRDs) must first be
manually updated (this is specifically due to a [design decision in Helm v3][helm-crd-limits],
in which any CRDs in the Helm chart are only applied when using the `helm install` command).

[helm-crd-limits]: https://helm.sh/docs/topics/charts/#limitations-on-crds

If you would like, before upgrading the CRDs, you can review the changes with
`kubectl diff`. They can be verbose, so a pager like `less` may be useful:

```shell
kubectl diff -f helm/install/crds | less
```

Use the following command to update the CRDs using
[server-side apply](https://kubernetes.io/docs/reference/using-api/server-side-apply/)
_before_ running `helm upgrade`. The `--force-conflicts` flag tells Kubernetes that you recognize
Helm created the CRDs during `helm install`.

```shell
kubectl apply --server-side --force-conflicts -f helm/install/crds
```

Then, perform the upgrade using Helm:

```shell
helm upgrade <name> -n <namespace> helm/install
```
## Uninstall

To uninstall PGO, remove all your PostgresCluster objects, then use the `helm uninstall` command:

2 changes: 1 addition & 1 deletion docs/content/releases/5.0.1.md
@@ -27,7 +27,7 @@ Read more about how you can [get started]({{< relref "quickstart/_index.md" >}})
- Refreshed the PostgresCluster CRD documentation using the latest version of `crdoc` (`v0.3.0`).
- The PGO test suite now includes a test to validate image pull secrets.
- Related Image functionality has been implemented for the OLM installer as required to support offline deployments.
- The name of the PGO Deployment and ServiceAccount has been changed to `pgo` for all installers, allowing both PGO v4.x and PGO v5.x to be run in the same namespace. If you are using Kustomize to install PGO and are upgrading from PGO 5.0.0, please see the [Upgrade Guide]({{< relref "../installation/upgrade.md" >}}) for addtional steps that must be completed as a result of this change in order to ensure a successful upgrade.
- The name of the PGO Deployment and ServiceAccount has been changed to `pgo` for all installers, allowing both PGO v4.x and PGO v5.x to be run in the same namespace. If you are using Kustomize to install PGO and are upgrading from PGO 5.0.0, please see the [Upgrade Guide]({{< relref "../upgrade/_index.md" >}}) for additional steps that must be completed as a result of this change in order to ensure a successful upgrade.
- PGO now automatically detects whether or not it is running in an OpenShift environment.
- Postgres users and databases can be specified in `PostgresCluster.spec.users`. The credentials stored in the `{cluster}-pguser` Secret are still valid, but they are no longer reconciled. References to that Secret should be replaced with `{cluster}-pguser-{cluster}`. Once all references are updated, the old `{cluster}-pguser` Secret can be deleted.
- The built-in `postgres` superuser can now be managed the same way as other users. Specifying it in `PostgresCluster.spec.users` will give it a password, allowing it to connect over the network.
2 changes: 1 addition & 1 deletion docs/content/releases/5.0.3.md
@@ -37,7 +37,7 @@ Read more about how you can [get started]({{< relref "quickstart/_index.md" >}})
- A [Pod Priority Class](https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/) is configurable for the Pods created for a `PostgresCluster`.
- An `imagePullPolicy` can now be configured for Pods created for a `PostgresCluster`.
- Existing `PGDATA`, Write-Ahead Log (WAL) and pgBackRest repository volumes can now be migrated from PGO v4 to PGO v5 by specifying a `volumes` data source when creating a `PostgresCluster`.
- There is now a [migration guide available for moving Postgres clusters between PGO v4 to PGO v5]({{< relref "guides/v4tov5.md" >}}).
- There is now a [migration guide available for moving Postgres clusters from PGO v4 to PGO v5]({{< relref "upgrade/v4tov5/_index.md" >}}).
- The pgAudit extension is now enabled by default in all clusters.
- There is now additional validation for PVC definitions within the `PostgresCluster` spec to ensure successful PVC reconciliation.
- Postgres server certificates are now automatically reloaded when they change.
32 changes: 32 additions & 0 deletions docs/content/upgrade/_index.md
@@ -0,0 +1,32 @@
---
title: "Upgrade"
date:
draft: false
weight: 33
---

# Overview

Upgrading to a new version of PGO is typically as simple as following the installation guides in the PGO documentation:

- [PGO Kustomize Install]({{< relref "./kustomize.md" >}})
- [PGO Helm Install]({{< relref "./helm.md" >}})

However, when upgrading to or from certain versions of PGO, extra steps may be required in order
to ensure a clean and successful upgrade.

This section provides detailed instructions for upgrading PGO 5.x using Kustomize or Helm, along with information for upgrading from PGO v4 to PGO v5.

{{% notice info %}}
Depending on version updates, upgrading PGO may automatically roll out changes to managed Postgres clusters. This could result in downtime: while PGO attempts graceful, incremental rollouts of the affected Pods with the goal of zero downtime, we cannot guarantee that there will be no interruption of service.
{{% /notice %}}
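One way to observe such a rollout while an upgrade is in progress is to watch the Pods of your Postgres clusters; substitute the namespace your clusters actually run in:

```shell
# Watch Pods as PGO incrementally rolls out changes to managed clusters
kubectl -n <namespace> get pods --watch
```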

## Upgrading PGO 5.x

- [PGO Kustomize Upgrade]({{< relref "./kustomize.md" >}})
- [PGO Helm Upgrade]({{< relref "./helm.md" >}})

## Upgrading from PGO v4 to PGO v5

- [V4 to V5 Upgrade Methods]({{< relref "./v4tov5" >}})
35 changes: 35 additions & 0 deletions docs/content/upgrade/helm.md
@@ -0,0 +1,35 @@
---
title: "Upgrading PGO v5 Using Helm"
date:
draft: false
weight: 70
---

Once PGO v5.0.x has been installed with Helm, it can then be upgraded using the `helm upgrade` command.
However, before running the `upgrade` command, any CustomResourceDefinitions (CRDs) must first be
manually updated (this is specifically due to a [design decision in Helm v3][helm-crd-limits],
in which any CRDs in the Helm chart are only applied when using the `helm install` command).

[helm-crd-limits]: https://helm.sh/docs/topics/charts/#limitations-on-crds

If you would like, before upgrading the CRDs, you can review the changes with
`kubectl diff`. They can be verbose, so a pager like `less` may be useful:

```shell
kubectl diff -f helm/install/crds | less
```

Use the following command to update the CRDs using
[server-side apply](https://kubernetes.io/docs/reference/using-api/server-side-apply/)
_before_ running `helm upgrade`. The `--force-conflicts` flag tells Kubernetes that you recognize
Helm created the CRDs during `helm install`.

```shell
kubectl apply --server-side --force-conflicts -f helm/install/crds
```

Then, perform the upgrade using Helm:

```shell
helm upgrade <name> -n <namespace> helm/install
```
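After the upgrade completes, you can optionally confirm that the operator rolled out successfully. As a sketch, the following checks the Deployment, whose name is `pgo` for all v5 installers; the namespace is the one you installed PGO into:

```shell
# Wait for the upgraded operator Deployment to become available
kubectl -n <namespace> rollout status deployment/pgo
```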
@@ -1,22 +1,10 @@
---
title: "Upgrade"
title: "Upgrading PGO v5 Using Kustomize"
date:
draft: false
weight: 50
---

# Overview

Upgrading to a new version of PGO is typically as simple as following the various installation
guides defined within the PGO documentation:

- [PGO Kustomize Install]({{< relref "./kustomize.md" >}})
- [PGO Helm Install]({{< relref "./helm.md" >}})

However, when upgrading to or from certain versions of PGO, extra steps may be required in order
to ensure a clean and successful upgrade. This page will therefore document any additional
steps that must be completed when upgrading PGO.

## Upgrading from PGO v5.0.0 Using Kustomize

Starting with PGO v5.0.1, both the Deployment and ServiceAccount created when installing PGO via
@@ -63,7 +51,7 @@ Additionally, please be sure to update and apply all PostgresCluster custom resources
with any applicable spec changes described in the
[PGO v5.0.3 release notes]({{< relref "../releases/5.0.3.md" >}}).

## Upgrading from PGO v5.0 to v5.1
## Upgrading from PGO v5.0.x to v5.1.x

Starting in PGO v5.1, new pgBackRest features available in version 2.38 are used
that impact both the `crunchy-postgres` and `crunchy-pgbackrest` images. For any
48 changes: 48 additions & 0 deletions docs/content/upgrade/v4tov5/_index.md
@@ -0,0 +1,48 @@
---
title: "PGO v4 to PGO v5"
date:
draft: false
weight: 100
---

This guide describes how to upgrade from PGO v4 to PGO v5. There are several methods that can be used to upgrade; which one to choose depends on a variety of factors, including but not limited to:

- Redundancy / ability to roll back
- Available resources
- Downtime preferences

These methods include:

- [*Migrating Using Data Volumes*]({{< relref "./upgrade-method-1-data-volumes.md" >}}). This method migrates from v4 to v5 by reusing the existing data volumes that you created in v4. It is the simplest and most resource-efficient method, but carries the greatest potential for downtime.
- [*Migrating From Backups*]({{< relref "./upgrade-method-2-backups.md" >}}). This method creates a v5 Postgres cluster from the backups taken with v4. It lets you preview your v5 Postgres cluster, but you will need to take your applications offline to ensure all of the data is migrated.
- [*Migrating Using a Standby Cluster*]({{< relref "./upgrade-method-3-standby-cluster.md" >}}). This method runs a v4 and a v5 Postgres cluster in parallel, with data replicating from the v4 cluster to the v5 cluster. It minimizes downtime and lets you preview your v5 environment, but it is the most resource-intensive.

You should choose the method that makes the most sense for your environment.

## Prerequisites

There are several prerequisites for using any of these upgrade methods.

- PGO v4 is currently installed within the Kubernetes cluster, and is actively managing any existing v4 PostgreSQL clusters.
- Any PGO v4 clusters being upgraded have been properly initialized using PGO v4, which means the v4 `pgcluster` custom resource should be in a `pgcluster Initialized` status:

```
$ kubectl get pgcluster hippo -o jsonpath='{ .status }'
{"message":"Cluster has been initialized","state":"pgcluster Initialized"}
```

- The PGO v4 `pgo` client is properly configured and available for use.
- PGO v5 is currently [installed]({{< relref "installation/_index.md" >}}) within the Kubernetes cluster.
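To confirm the PGO v5 prerequisite, you can check for the `pgo` Deployment, the name used by the v5 installers; the `postgres-operator` namespace below is an assumption based on a default install, so substitute your own:

```shell
# Verify that the PGO v5 operator Deployment exists and is available
kubectl -n postgres-operator get deployment pgo
```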

For these examples, we will use a Postgres cluster named `hippo`.

## Additional Considerations

Upgrading to PGO v5 may result in a base image upgrade from EL-7 (UBI / CentOS) to EL-8
(UBI). Based on the contents of your Postgres database, you may need to perform
additional steps.

Due to changes in the GNU C library (`glibc`) in EL-8, you may need to reindex certain indexes in
your Postgres cluster. For more information, please read the
[PostgreSQL Wiki on Locale Data Changes](https://wiki.postgresql.org/wiki/Locale_data_changes),
which explains how you can determine whether your indexes are affected and how to fix them.
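As an illustrative sketch (the index name `my_text_idx` and the database name `hippo` are hypothetical), affected indexes can be rebuilt with `REINDEX` once you have identified them:

```shell
# Rebuild a single affected index; CONCURRENTLY (PostgreSQL 12+) avoids blocking writes
psql -d hippo -c 'REINDEX INDEX CONCURRENTLY my_text_idx;'

# Or, during a maintenance window, rebuild every index in the database
psql -d hippo -c 'REINDEX DATABASE hippo;'
```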
109 changes: 109 additions & 0 deletions docs/content/upgrade/v4tov5/upgrade-method-1-data-volumes.md
@@ -0,0 +1,109 @@
---
title: "Upgrade Method #1: Data Volumes"
date:
draft: false
weight: 10
---

{{% notice info %}}
Before attempting to upgrade from v4.x to v5, please familiarize yourself with the [prerequisites]({{< relref "upgrade/v4tov5/_index.md" >}}) applicable for all v4.x to v5 upgrade methods.
{{% /notice %}}

This upgrade method allows you to migrate from PGO v4 to PGO v5 using the existing data volumes that were created in PGO v4. Note that this is an "in place" migration method: it immediately moves your Postgres clusters from being managed by PGO v4 to being managed by PGO v5. If you wish to have some failsafes in place, please use one of the other migration methods. Please also note that you will need to perform the cluster upgrade in the same namespace as the original cluster in order for your v5 cluster to access the existing PVCs.

### Step 1: Prepare the PGO v4 Cluster for Migration

You will need to set up your PGO v4 Postgres cluster so that it can be migrated to a PGO v5 cluster. The following describes how to set up a PGO v4 cluster for using this migration method.

1. Scale down any existing replicas within the cluster. This will ensure that the primary PVC does not change again prior to the upgrade.

You can get a list of replicas using the `pgo scaledown --query` command, e.g.:
```
pgo scaledown hippo --query
```

If there are any replicas, you will see something similar to:

```
Cluster: hippo
REPLICA STATUS NODE ...
hippo running node01 ...
```

Scale down any replicas that are running in this cluster, e.g.:

```
pgo scaledown hippo --target=hippo
```

2\. Once all replicas are removed and only the primary remains, proceed with deleting the cluster while retaining the data and backups. You can do this with the `--keep-data` and `--keep-backups` flags:

**You MUST run this command with the `--keep-data` and `--keep-backups` flags, otherwise you risk deleting ALL of your data.**

```
pgo delete cluster hippo --keep-data --keep-backups
```

3\. The PVC for the primary Postgres instance and the pgBackRest repository should still remain. You can verify this with the command below:

```
kubectl get pvc --selector=pg-cluster=hippo
```

This should yield something similar to:

```
NAME STATUS VOLUME ...
hippo-jgut Bound pvc-a0b89bdb- ...
hippo-pgbr-repo   Bound    pvc-25501671- ...
```

A third PVC used to store write-ahead logs (WAL) may also be present if external WAL volumes were enabled for the cluster.

### Step 2: Migrate to PGO v5

With the PGO v4 cluster's volumes prepared for the move to PGO v5, you can now create a [`PostgresCluster`]({{< relref "references/crd.md" >}}) custom resource using these volumes. This migration method does not carry over any specific configurations or customizations from PGO v4: you will need to create the specific `PostgresCluster` configuration that you need.

{{% notice warning %}}

Additional steps are required to set proper file permissions when using certain storage options,
such as NFS and HostPath storage, due to a known issue with how fsGroups are applied. When
migrating from PGO v4, this will require the user to manually set the group value of the pgBackRest
repo directory, and all subdirectories, to `26` to match the `postgres` group used in PGO v5.
Please see [this Kubernetes issue](https://github.com/kubernetes/examples/issues/260) for more information.

{{% /notice %}}

To complete the upgrade process, your `PostgresCluster` custom resource **MUST** include the following:

1\. A `spec.dataSource.volumes` section that points to the PostgreSQL data PVC, the PostgreSQL WAL PVC (if applicable), and the pgBackRest repository PVC.

For example, using the `hippo` cluster:

```
spec:
dataSource:
volumes:
pgDataVolume:
pvcName: hippo-jgut
directory: "hippo-jgut"
pgBackRestVolume:
pvcName: hippo-pgbr-repo
directory: "hippo-backrest-shared-repo"
# Only specify external WAL PVC if enabled in PGO v4 cluster. If enabled
# in v4, a WAL volume must be defined for the v5 cluster as well.
# pgWALVolume:
# pvcName: hippo-jgut-wal
```

Please see the [Data Migration]({{< relref "guides/data-migration.md" >}}) section of the [tutorial]({{< relref "tutorial/_index.md" >}}) for more details on how to properly populate this section of the spec when migrating from a PGO v4 cluster.

2\. If you customized Postgres parameters, you will need to ensure they match in the PGO v5 cluster. For more information, please review the tutorial on [customizing a Postgres cluster]({{< relref "tutorial/customize-cluster.md" >}}).
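As a minimal sketch of how customized Postgres parameters are declared in a PGO v5 `PostgresCluster` spec, the parameter values below are placeholders, not recommendations; carry over whatever values your v4 cluster actually used:

```yaml
spec:
  patroni:
    dynamicConfiguration:
      postgresql:
        parameters:
          # Match the values from your PGO v4 cluster's configuration
          max_connections: 100
          shared_buffers: 128MB
```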

3\. Once the `PostgresCluster` spec is populated according to these guidelines, you can create the `PostgresCluster` custom resource. For example, if the `PostgresCluster` you're creating is a modified version of the [`postgres` example](https://github.com/CrunchyData/postgres-operator-examples/tree/main/kustomize/postgres) in the [PGO examples repo](https://github.com/CrunchyData/postgres-operator-examples), you can run the following command:

```
kubectl apply -k examples/postgrescluster
```

Your upgrade is now complete! You should now remove the `spec.dataSource.volumes` section from your `PostgresCluster`. For more information on how to use PGO v5, we recommend reading through the [PGO v5 tutorial]({{< relref "tutorial/_index.md" >}}).