278 changes: 233 additions & 45 deletions advocacy_docs/kubernetes/cloud_native_postgresql/api_reference.mdx

@@ -35,6 +35,11 @@ Kubernetes cluster, with the following specifications:
information about how Cloud Native PostgreSQL relies on PostgreSQL replication,
including synchronous settings.

!!! Seealso "Connection Pooling"
Please refer to the ["Connection Pooling" section](connection_pooling.md) for
information about how to take advantage of PgBouncer as a connection pooler,
and create an access layer between your applications and the PostgreSQL clusters.

## Read-write workloads

Applications can decide to connect to the PostgreSQL instance elected as
@@ -7,8 +7,9 @@ product: 'Cloud Native Operator'
The operator can orchestrate a continuous backup infrastructure
that is based on the [Barman](https://pgbarman.org) tool. Instead
of using the classical architecture with a Barman server, which
backs up many PostgreSQL instances, the operator relies on the
`barman-cloud-wal-archive`, `barman-cloud-backup`, `barman-cloud-backup-list`,
and `barman-cloud-backup-delete` tools.
As a result, base backups will be *tarballs*. Both base backups and WAL files
can be compressed and encrypted.

@@ -17,17 +18,16 @@ You can use the image `quay.io/enterprisedb/postgresql` for this purpose,
as it is composed of a community PostgreSQL image and the latest
`barman-cli-cloud` package.

!!! Important
Always ensure that you are running the latest version of the operands
in your system, both to take advantage of the improvements introduced in
Barman cloud and to improve the security of your cluster.

A backup is performed from a primary or a designated primary instance in a
`Cluster` (please refer to
[replica clusters](replication.md#replication-from-an-external-postgresql-cluster)
for more information about designated primary instances).

## Cloud provider support

You can archive the backup files in any service that is supported
@@ -464,6 +464,7 @@ will use it unless you override it in the cluster configuration.

## Recovery

Cluster restores are not performed "in-place" on an existing cluster.
You can use the data uploaded to the object storage to bootstrap a
new cluster from a backup. The operator will orchestrate the recovery
process using the `barman-cloud-restore` tool.
Expand Down Expand Up @@ -540,4 +541,43 @@ manager running in the Pods.
You can optionally specify a `recoveryTarget` to perform a point in time
recovery. If left unspecified, the recovery will continue up to the latest
available WAL on the default target timeline (`current` for PostgreSQL up to
11, `latest` for version 12 and above).
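As an illustration, a point-in-time recovery bootstrap might look like the
following sketch. The `cluster-restore-pitr` name, the referenced backup, and
the `targetTime` value are placeholders, and the field layout assumes the
`bootstrap.recovery` section of the `Cluster` resource:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-restore-pitr
spec:
  instances: 3
  storage:
    size: 5Gi
  bootstrap:
    recovery:
      backup:
        name: backup-example
      # Stop replaying WAL at this timestamp instead of the latest archived WAL
      recoveryTarget:
        targetTime: "2021-11-09 13:00:00.000000+00"
```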

## Retention policies

Cloud Native PostgreSQL can manage the automated deletion of backup files from the backup object store, using **retention policies** based on the recovery window.

Internally, the retention policy feature uses `barman-cloud-backup-delete`
with `--retention-policy "RECOVERY WINDOW OF {{ retention policy value }} {{ retention policy unit }}"`.
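As a sketch of that mapping, assuming the `retentionPolicy` string encodes a
value plus a one-letter unit (such as `30d` for thirty days), a hypothetical
helper could assemble the clause like this:

```python
def retention_policy_clause(policy: str) -> str:
    """Translate a retentionPolicy value such as '30d' into the clause
    passed to barman-cloud-backup-delete (illustrative only; the unit
    letters below are an assumption, not the operator's actual parser)."""
    units = {"d": "DAYS", "w": "WEEKS", "m": "MONTHS"}
    value, unit = policy[:-1], policy[-1]
    if not value.isdigit() or unit not in units:
        raise ValueError(f"unsupported retention policy: {policy!r}")
    return f"RECOVERY WINDOW OF {int(value)} {units[unit]}"

print(retention_policy_clause("30d"))  # RECOVERY WINDOW OF 30 DAYS
```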

For example, you can define your backups with a retention policy of 30 days as
follows:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
[...]
spec:
backup:
barmanObjectStore:
destinationPath: "<destination path here>"
s3Credentials:
accessKeyId:
name: aws-creds
key: ACCESS_KEY_ID
secretAccessKey:
name: aws-creds
key: ACCESS_SECRET_KEY
retentionPolicy: "30d"
```

!!! Note "There's more ..."
The **recovery window retention policy** is focused on the concept of
*Point of Recoverability* (`PoR`), a moving point in time determined by
`current time - recovery window`. The *first valid backup* is the first
available backup before `PoR` (in reverse chronological order).
Cloud Native PostgreSQL must ensure that the cluster can be recovered at
any point in time between `PoR` and the latest successfully archived WAL
file, starting from the first valid backup. Base backups older than the
first valid backup are marked as *obsolete* and permanently removed after
the next backup is completed.
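The note above can be sketched in a few lines of Python. The backup list and
timestamps are hypothetical; the real operator works from the backup catalog
in the object store via `barman-cloud-backup-delete`:

```python
from datetime import datetime, timedelta

def first_valid_backup(backup_times, recovery_window_days, now=None):
    """Return (first_valid, obsolete): the newest backup taken at or before
    the Point of Recoverability (now - recovery window), and the backups
    older than it, which become eligible for deletion."""
    now = now or datetime.now()
    por = now - timedelta(days=recovery_window_days)  # Point of Recoverability
    ordered = sorted(backup_times, reverse=True)      # newest first
    for i, taken in enumerate(ordered):
        if taken <= por:
            return taken, ordered[i + 1:]
    return None, []  # no backup old enough yet: keep everything

backups = [datetime(2021, 11, 1), datetime(2021, 10, 1), datetime(2021, 9, 1)]
valid, obsolete = first_valid_backup(backups, 30, now=datetime(2021, 11, 9))
print(valid)     # 2021-10-01 00:00:00
print(obsolete)  # [datetime.datetime(2021, 9, 1, 0, 0)]
```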
76 changes: 58 additions & 18 deletions advocacy_docs/kubernetes/cloud_native_postgresql/bootstrap.mdx
@@ -183,10 +183,44 @@ relies on the superuser to reconcile the cluster with the desired status.
to the cluster.

The actual PostgreSQL data directory is created via an invocation of the
`initdb` PostgreSQL command. If you need to add custom options to that command
(e.g., to change the `locale` used for the template databases or to add data
checksums), you can use the following parameters:

dataChecksums
: When `dataChecksums` is set to `true`, CNP passes the `-k` option to
`initdb` to enable checksums on data pages, helping detect corruption by the
I/O system that would otherwise go unnoticed (default: `false`).

encoding
: When `encoding` is set to a value, CNP passes it to the `--encoding` option in `initdb`,
which selects the encoding of the template database (default: `UTF8`).

localeCollate
: When `localeCollate` is set to a value, CNP passes it to the `--lc-collate`
option in `initdb`. This option controls the collation order (`LC_COLLATE`
subcategory), as defined in ["Locale Support"](https://www.postgresql.org/docs/current/locale.html)
from the PostgreSQL documentation (default: `C`).

localeCType
: When `localeCType` is set to a value, CNP passes it to the `--lc-ctype` option in
`initdb`. This option controls character classification (`LC_CTYPE`
subcategory), as defined in ["Locale Support"](https://www.postgresql.org/docs/current/locale.html)
from the PostgreSQL documentation (default: `C`).

walSegmentSize
: When `walSegmentSize` is set to a value, CNP passes it to the `--wal-segsize`
option in `initdb` (default: not set, which PostgreSQL defines as 16 megabytes).

!!! Note
The only two locale options that Cloud Native PostgreSQL implements during
the `initdb` bootstrap refer to the `LC_COLLATE` and `LC_CTYPE` subcategories.
The remaining locale subcategories can be configured directly in the PostgreSQL
configuration, using the `lc_messages`, `lc_monetary`, `lc_numeric`, and
`lc_time` parameters.
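For instance, one of the remaining locale subcategories could be set through
the PostgreSQL configuration as in the following sketch; the cluster name and
locale values are placeholders, and the `postgresql.parameters` section of the
`Cluster` resource is assumed here:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example-locale
spec:
  instances: 3
  storage:
    size: 1Gi
  postgresql:
    parameters:
      # Locale subcategories not handled by the initdb bootstrap options
      lc_messages: 'en_US.utf8'
      lc_monetary: 'en_US.utf8'
```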

The following example enables data checksums and sets the default encoding to
`LATIN1`:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
@@ -200,14 +234,19 @@ spec:
initdb:
database: app
owner: app
dataChecksums: true
encoding: 'LATIN1'
storage:
size: 1Gi
```

Cloud Native PostgreSQL supports another way to customize the behaviour of the
`initdb` invocation, using the `options` subsection. However, given that there
are options that can break the behaviour of the operator (such as `--auth` or
`-d`), this technique is deprecated and will be removed from future versions of
the API.

You can also specify a custom list of queries that will be executed
once, just after the database is created and configured. These queries will
be executed as the *superuser* (`postgres`), connected to the `postgres`
database:
@@ -224,9 +263,9 @@ spec:
initdb:
database: app
owner: app
dataChecksums: true
localeCollate: 'en_US'
localeCType: 'en_US'
postInitSQL:
- CREATE ROLE angus
- CREATE ROLE malcolm
@@ -235,8 +274,9 @@ spec:
```

!!! Warning
Please use the `postInitSQL` and `postInitTemplateSQL` options with extreme care,
as queries are run as a superuser and can disrupt the entire cluster.
An error in any of those queries interrupts the bootstrap phase, leaving the cluster incomplete.

### Compatibility Features

@@ -618,7 +658,7 @@ file on the source PostgreSQL instance:
host replication streaming_replica all md5
```

The following manifest creates a new PostgreSQL 14.1 cluster,
called `target-db`, using the `pg_basebackup` bootstrap method
to clone an external PostgreSQL cluster defined as `source-db`
(in the `externalClusters` array). As you can see, the `source-db`
@@ -633,7 +673,7 @@ metadata:
name: target-db
spec:
instances: 3
imageName: quay.io/enterprisedb/postgresql:14.1

bootstrap:
pg_basebackup:
@@ -653,7 +693,7 @@ spec:
```

All the requirements must be met for the clone operation to work, including
the same PostgreSQL version (in our case 14.1).

#### TLS certificate authentication

@@ -668,7 +708,7 @@ in the same Kubernetes cluster.
This example can be easily adapted to cover an instance that resides
outside the Kubernetes cluster.

The manifest defines a new PostgreSQL 14.1 cluster called `cluster-clone-tls`,
which is bootstrapped using the `pg_basebackup` method from the `cluster-example`
external cluster. The host is identified by the read/write service
in the same cluster, while the `streaming_replica` user is authenticated
@@ -683,7 +723,7 @@ metadata:
name: cluster-clone-tls
spec:
instances: 3
imageName: quay.io/enterprisedb/postgresql:14.1

bootstrap:
pg_basebackup:
57 changes: 41 additions & 16 deletions advocacy_docs/kubernetes/cloud_native_postgresql/cnp-plugin.mdx
@@ -41,13 +41,20 @@ PostgreSQL Image: quay.io/enterprisedb/postgresql:13
Primary instance: cluster-example-1
Instances: 3
Ready instances: 3
Current Timeline: 2
Current WAL file: 00000002000000000000000A

Continuous Backup status
First Point of Recoverability: 2021-11-09T13:36:43Z
Working WAL archiving: OK
Last Archived WAL: 00000002000000000000000A @ 2021-11-09T13:47:28.354645Z

Instances status
Manager Version Pod name Current LSN Received LSN Replay LSN System ID Primary Replicating Replay paused Pending restart Status
--------------- -------- ----------- ------------ ---------- --------- ------- ----------- ------------- --------------- ------
1.10.0 cluster-example-1 0/5000060 7027078108164751389 ✓ ✗ ✗ ✗ OK
1.10.0 cluster-example-2 0/5000060 0/5000060 7027078108164751389 ✗ ✓ ✗ ✗ OK
1.10.0 cluster-example-3 0/5000060 0/5000060 7027078108164751389 ✗ ✓ ✗ ✗ OK

```

@@ -65,47 +72,65 @@ PostgreSQL Image: quay.io/enterprisedb/postgresql:13
Primary instance: cluster-example-1
Instances: 3
Ready instances: 3
Current Timeline: 2
Current WAL file: 00000002000000000000000A

PostgreSQL Configuration
archive_command = '/controller/manager wal-archive --log-destination /controller/log/postgres.json %p'
archive_mode = 'on'
archive_timeout = '5min'
cluster_name = 'cluster-example'
full_page_writes = 'on'
hot_standby = 'true'
listen_addresses = '*'
log_destination = 'csvlog'
log_directory = '/controller/log'
log_filename = 'postgres'
log_rotation_age = '0'
log_rotation_size = '0'
log_truncate_on_rotation = 'false'
logging_collector = 'on'
max_parallel_workers = '32'
max_replication_slots = '32'
max_worker_processes = '32'
port = '5432'
shared_preload_libraries = ''
ssl = 'on'
ssl_ca_file = '/controller/certificates/client-ca.crt'
ssl_cert_file = '/controller/certificates/server.crt'
ssl_key_file = '/controller/certificates/server.key'
unix_socket_directories = '/controller/run'
wal_keep_size = '512MB'
wal_level = 'logical'
wal_log_hints = 'on'

cnp.config_sha256 = '407239112913e96626722395d549abc78b2cf9b767471e1c8eac6f33132e789c'

PostgreSQL HBA Rules

# Grant local access
local all all peer map=local

# Require client certificate authentication for the streaming_replica user
hostssl postgres streaming_replica all cert
hostssl replication streaming_replica all cert
hostssl all cnp_pooler_pgbouncer all cert

# Otherwise use the default authentication method
host all all all md5

Continuous Backup status
First Point of Recoverability: 2021-11-09T13:36:43Z
Working WAL archiving: OK
Last Archived WAL: 00000002000000000000000A @ 2021-11-09T13:47:28.354645Z

Instances status
Manager Version Pod name Current LSN Received LSN Replay LSN System ID Primary Replicating Replay paused Pending restart Status
--------------- -------- ----------- ------------ ---------- --------- ------- ----------- ------------- --------------- ------
1.10.0 cluster-example-1 0/5000060 7027078108164751389 ✓ ✗ ✗ ✗ OK
1.10.0 cluster-example-2 0/5000060 0/5000060 7027078108164751389 ✗ ✓ ✗ ✗ OK
1.10.0 cluster-example-3 0/5000060 0/5000060 7027078108164751389 ✗ ✓ ✗ ✗ OK
```

The command also supports output in `yaml` and `json` format.