20 changes: 20 additions & 0 deletions content/integrate/redis-data-integration/installation/upgrade.md
@@ -82,6 +82,26 @@ If there is an active pipeline, upgrade RDI on the active VM first.
This will cause a short pipeline downtime of up to two minutes.
Afterwards, upgrade RDI on the passive VM. This will not cause any downtime.

{{< warning >}}
When upgrading from RDI < 1.8.0 to RDI >= 1.8.0 in a VM HA setup, both RDI instances may incorrectly consider themselves active after the upgrade. This happens because the upgrade process doesn't change the cluster ID from its default value of `cluster-1`, so both clusters assume the active role.

**Symptoms:**

- The upgraded passive node starts the collector and processor components
- The collector may enter a crash loop because it fails to connect to the source database
- Both clusters restart in a loop

**Workaround:**

After upgrading, manually set a unique cluster ID for one of the installations (preferably the passive one):

1. Locate the RDI configuration file on the VM host, typically `/etc/rdi/rdi-sys-config.yaml`.
2. Open the configuration file in a text editor. For example:

```bash
sudo nano /etc/rdi/rdi-sys-config.yaml
```
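
3. Set a unique cluster ID for this installation and save the file. The exact key used in `rdi-sys-config.yaml` isn't shown here; as a sketch, assuming it matches the `RDI_CLUSTER_ID` key used by the Kubernetes ConfigMap, the change might look like this:

```yaml
# Hypothetical sketch: the key name is assumed to match the Kubernetes
# ConfigMap key RDI_CLUSTER_ID; verify the exact key in your rdi-sys-config.yaml.
RDI_CLUSTER_ID: cluster-2
```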
{{< /warning >}}

## Upgrading a Kubernetes installation

Follow the steps below to upgrade an existing
@@ -87,3 +87,29 @@ The RDI operator has been significantly enhanced in the following areas:
## Limitations

RDI can write data to a Redis Active-Active database. However, it doesn't support writing data to two or more Active-Active replicas. Writing data from RDI to several Active-Active replicas could easily harm data integrity because RDI is not synchronized with the source database commits.

## Known issues

### High Availability upgrade issue

When upgrading from RDI < 1.8.0 to RDI >= 1.8.0 in an HA setup, both RDI instances may incorrectly consider themselves active after the upgrade. This happens because the upgrade process doesn't update the `rdi:ha:lock` value from the legacy `cluster-1` identifier, so both clusters assume the active role.
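
To confirm which installation currently holds the lock, you can inspect the key directly. This is a sketch, assuming `rdi:ha:lock` is a plain string key in the RDI database; the host, port, and password placeholders are hypothetical and should be replaced with your own values:

```bash
# Hypothetical diagnostic: read the HA lock to see which cluster ID holds it.
# Assumes rdi:ha:lock is a plain string key in the RDI database.
redis-cli -h <rdi-db-host> -p <rdi-db-port> -a <rdi-db-password> GET rdi:ha:lock
```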

**Symptoms:**

- The upgraded passive node starts the collector and processor components
- The collector may enter a crash loop because it fails to connect to the source database
- Both clusters restart in a loop

**Workaround:**

After upgrading, manually set a unique cluster ID for one of the installations by editing the `rdi-sys-config` ConfigMap:

```bash
kubectl edit cm -n rdi rdi-sys-config
```

Then add the following line to the ConfigMap's `data` section to distinguish the two clusters:

```yaml
RDI_CLUSTER_ID: cluster-2
```
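
If you prefer a non-interactive change, the same edit can likely be made with `kubectl patch`. This is a sketch, assuming the `RDI_CLUSTER_ID` key belongs in the ConfigMap's `data` section:

```bash
# Hypothetical alternative to editing the ConfigMap interactively.
# Assumes RDI_CLUSTER_ID is a key under the ConfigMap's data section.
kubectl patch configmap rdi-sys-config -n rdi --type merge \
  -p '{"data":{"RDI_CLUSTER_ID":"cluster-2"}}'
```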