2 changes: 1 addition & 1 deletion docs/upgrade/troubleshooting.md
@@ -1,5 +1,5 @@
---
sidebar_position: 11
sidebar_position: 12
sidebar_label: Troubleshooting
title: "Troubleshooting"
---
2 changes: 1 addition & 1 deletion docs/upgrade/v1-1-2-to-v1-2-0.md
@@ -1,5 +1,5 @@
---
sidebar_position: 10
sidebar_position: 11
sidebar_label: Upgrade from v1.1.2 to v1.2.0 (not recommended)
title: "Upgrade from v1.1.2 to v1.2.0 (not recommended)"
---
2 changes: 1 addition & 1 deletion docs/upgrade/v1-2-0-to-v1-2-1.md
@@ -1,5 +1,5 @@
---
sidebar_position: 9
sidebar_position: 10
sidebar_label: Upgrade from v1.1.2/v1.1.3/v1.2.0 to v1.2.1
title: "Upgrade from v1.1.2/v1.1.3/v1.2.0 to v1.2.1"
---
2 changes: 1 addition & 1 deletion docs/upgrade/v1-2-1-to-v1-2-2.md
@@ -1,5 +1,5 @@
---
sidebar_position: 8
sidebar_position: 9
sidebar_label: Upgrade from v1.2.1 to v1.2.2
title: "Upgrade from v1.2.1 to v1.2.2"
---
2 changes: 1 addition & 1 deletion docs/upgrade/v1-2-2-to-v1-3-1.md
@@ -1,5 +1,5 @@
---
sidebar_position: 7
sidebar_position: 8
sidebar_label: Upgrade from v1.2.2/v1.3.0 to v1.3.1
title: "Upgrade from v1.2.2/v1.3.0 to v1.3.1"
---
2 changes: 1 addition & 1 deletion docs/upgrade/v1-3-1-to-v1-3-2.md
@@ -1,5 +1,5 @@
---
sidebar_position: 6
sidebar_position: 7
sidebar_label: Upgrade from v1.3.1 to v1.3.2
title: "Upgrade from v1.3.1 to v1.3.2"
---
2 changes: 1 addition & 1 deletion docs/upgrade/v1-3-2-to-v1-4-0.md
@@ -1,5 +1,5 @@
---
sidebar_position: 5
sidebar_position: 6
sidebar_label: Upgrade from v1.3.2 to v1.4.0
title: "Upgrade from v1.3.2 to v1.4.0"
---
2 changes: 1 addition & 1 deletion docs/upgrade/v1-4-0-to-v1-4-1.md
@@ -1,5 +1,5 @@
---
sidebar_position: 4
sidebar_position: 5
sidebar_label: Upgrade from v1.4.0 to v1.4.1
title: "Upgrade from v1.4.0 to v1.4.1"
---
2 changes: 1 addition & 1 deletion docs/upgrade/v1-4-1-to-v1-4-2.md
@@ -1,5 +1,5 @@
---
sidebar_position: 3
sidebar_position: 4
sidebar_label: Upgrade from v1.4.1 to v1.4.2
title: "Upgrade from v1.4.1 to v1.4.2"
---
110 changes: 110 additions & 0 deletions docs/upgrade/v1-4-2-to-v1-4-3.md
@@ -0,0 +1,110 @@
---
sidebar_position: 3
sidebar_label: Upgrade from v1.4.2 to v1.4.3
title: "Upgrade from v1.4.2 to v1.4.3"
---

<head>
<link rel="canonical" href="https://docs.harvesterhci.io/v1.4/upgrade/v1-4-2-to-v1-4-3"/>
</head>

## General information

An **Upgrade** button appears on the **Dashboard** screen whenever a new Harvester version that you can upgrade to becomes available. For more information, see [Start an upgrade](./automatic.md#start-an-upgrade).

For air-gapped environments, see [Prepare an air-gapped upgrade](./automatic.md#prepare-an-air-gapped-upgrade).
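
Before you start, you can confirm the version the cluster is currently running. The following is a minimal check, assuming `kubectl` access to the cluster; it reads the `server-version` setting, which records the installed Harvester version:

```bash
# Print the currently installed Harvester version (expected to be v1.4.2 before this upgrade)
kubectl get settings.harvesterhci.io server-version
```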

## Known issues

---

### 1. Air-gapped upgrade stuck with `ImagePullBackOff` error in Fluentd and Fluent Bit pods

The upgrade may become stuck at the very beginning of the process, as indicated by 0% progress and items marked **Pending** in the **Upgrade** dialog of the Harvester UI.

![](/img/v1.5/upgrade/upgrade-dialog-with-empty-status.png)

Specifically, Fluentd and Fluent Bit pods may become stuck in the `ImagePullBackOff` status. To check the status of the pods, run the following commands:

```bash
$ kubectl -n harvester-system get upgrades -l harvesterhci.io/latestUpgrade=true
NAME                 AGE
hvst-upgrade-x2hz8   7m14s

$ kubectl -n harvester-system get upgradelogs -l harvesterhci.io/upgrade=hvst-upgrade-x2hz8
NAME                            UPGRADE
hvst-upgrade-x2hz8-upgradelog   hvst-upgrade-x2hz8

$ kubectl -n harvester-system get pods -l harvesterhci.io/upgradeLog=hvst-upgrade-x2hz8-upgradelog
NAME                                                        READY   STATUS             RESTARTS   AGE
hvst-upgrade-x2hz8-upgradelog-downloader-6cdb864dd9-6bw98   1/1     Running            0          7m7s
hvst-upgrade-x2hz8-upgradelog-infra-fluentbit-2nq7q         0/1     ImagePullBackOff   0          7m42s
hvst-upgrade-x2hz8-upgradelog-infra-fluentbit-697wf         0/1     ImagePullBackOff   0          7m42s
hvst-upgrade-x2hz8-upgradelog-infra-fluentbit-kd8kl         0/1     ImagePullBackOff   0          7m42s
hvst-upgrade-x2hz8-upgradelog-infra-fluentd-0               0/2     ImagePullBackOff   0          7m42s
```

This occurs because the following container images are neither preloaded in the cluster nodes nor pulled from the internet:

- `ghcr.io/kube-logging/fluentd:v1.15-ruby3`
- `ghcr.io/kube-logging/config-reloader:v0.0.5`
- `fluent/fluent-bit:2.1.8`
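
You can confirm which of these images are actually present by checking the container image store on each node. This is an optional check, assuming shell access to the nodes:

```bash
# Run on each cluster node: list any of the required images that are already present
ctr -n k8s.io images ls | grep -E 'kube-logging/fluentd|kube-logging/config-reloader|fluent/fluent-bit'
```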

To fix the issue, perform any of the following actions:

- Update the Logging CR to use the images that are already preloaded in the cluster nodes. To do this, run the following commands against the cluster:

```bash
# Get the Logging CR names
OPERATOR_LOGGING_NAME=$(kubectl get loggings -l app.kubernetes.io/name=rancher-logging -o jsonpath="{.items[0].metadata.name}")
INFRA_LOGGING_NAME=$(kubectl get loggings -l harvesterhci.io/upgradeLogComponent=infra -o jsonpath="{.items[0].metadata.name}")

# Gather image info from operator's Logging CR
FLUENTD_IMAGE_REPO=$(kubectl get loggings $OPERATOR_LOGGING_NAME -o jsonpath="{.spec.fluentd.image.repository}")
FLUENTD_IMAGE_TAG=$(kubectl get loggings $OPERATOR_LOGGING_NAME -o jsonpath="{.spec.fluentd.image.tag}")

FLUENTBIT_IMAGE_REPO=$(kubectl get loggings $OPERATOR_LOGGING_NAME -o jsonpath="{.spec.fluentbit.image.repository}")
FLUENTBIT_IMAGE_TAG=$(kubectl get loggings $OPERATOR_LOGGING_NAME -o jsonpath="{.spec.fluentbit.image.tag}")

CONFIG_RELOADER_IMAGE_REPO=$(kubectl get loggings $OPERATOR_LOGGING_NAME -o jsonpath="{.spec.fluentd.configReloaderImage.repository}")
CONFIG_RELOADER_IMAGE_TAG=$(kubectl get loggings $OPERATOR_LOGGING_NAME -o jsonpath="{.spec.fluentd.configReloaderImage.tag}")

# Patch the Logging CR
kubectl patch logging $INFRA_LOGGING_NAME --type=json -p="[{\"op\":\"replace\",\"path\":\"/spec/fluentbit/image\",\"value\":{\"repository\":\"$FLUENTBIT_IMAGE_REPO\",\"tag\":\"$FLUENTBIT_IMAGE_TAG\"}}]"
kubectl patch logging $INFRA_LOGGING_NAME --type=json -p="[{\"op\":\"replace\",\"path\":\"/spec/fluentd/image\",\"value\":{\"repository\":\"$FLUENTD_IMAGE_REPO\",\"tag\":\"$FLUENTD_IMAGE_TAG\"}}]"
kubectl patch logging $INFRA_LOGGING_NAME --type=json -p="[{\"op\":\"replace\",\"path\":\"/spec/fluentd/configReloaderImage\",\"value\":{\"repository\":\"$CONFIG_RELOADER_IMAGE_REPO\",\"tag\":\"$CONFIG_RELOADER_IMAGE_TAG\"}}]"
```

After the Logging CR is updated, the Fluentd and Fluent Bit pods should change to the `Running` status within a few moments and the upgrade process should continue. If the Fluentd pod remains in the `ImagePullBackOff` status, delete it with the following commands to force it to restart:

```bash
UPGRADE_NAME=$(kubectl -n harvester-system get upgrades -l harvesterhci.io/latestUpgrade=true -o jsonpath='{.items[0].metadata.name}')
UPGRADELOG_NAME=$(kubectl -n harvester-system get upgradelogs -l harvesterhci.io/upgrade=$UPGRADE_NAME -o jsonpath='{.items[0].metadata.name}')

kubectl -n harvester-system delete pods -l harvesterhci.io/upgradeLog=$UPGRADELOG_NAME,harvesterhci.io/upgradeLogComponent=aggregator
```
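
Once the pod restarts, you can re-run the pod listing to confirm that all of the upgrade logging pods have reached the `Running` state, for example:

```bash
# Re-check the upgrade logging pods; they should all report Running before the upgrade resumes
kubectl -n harvester-system get pods -l harvesterhci.io/upgradeLog=$UPGRADELOG_NAME
```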

- On a computer with internet access, pull the required container images and then export them to a TAR file. Next, transfer the TAR file to the cluster nodes and then import the images by running the following commands on each node:

```bash
# Pull down the three container images
docker pull ghcr.io/kube-logging/fluentd:v1.15-ruby3
docker pull ghcr.io/kube-logging/config-reloader:v0.0.5
docker pull fluent/fluent-bit:2.1.8

# Export the images to a tar file
docker save \
  ghcr.io/kube-logging/fluentd:v1.15-ruby3 \
  ghcr.io/kube-logging/config-reloader:v0.0.5 \
  fluent/fluent-bit:2.1.8 > upgradelog-images.tar

# After transferring the tar file to the cluster nodes, import the images (this must be run on each node)
ctr -n k8s.io images import upgradelog-images.tar
```
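
If it helps, the transfer and import can also be combined into a single step per node. This is only a sketch; `rancher@node1` is a placeholder for the actual user and node address in your environment:

```bash
# Hypothetical example: copy the image bundle to a node and import it in one step
# (replace rancher@node1 with the real user and node address)
scp upgradelog-images.tar rancher@node1:/tmp/
ssh rancher@node1 "sudo ctr -n k8s.io images import /tmp/upgradelog-images.tar"
```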

The upgrade process should continue after the images are preloaded.

- (Not recommended) Restart the upgrade process with logging disabled. Ensure that the **Enable Logging** checkbox in the **Upgrade** dialog is not selected.

Related issues:
- [[BUG] AirGap Upgrades Seem Blocked with Fluentbit/FluentD](https://github.com/harvester/harvester/issues/7955)
2 changes: 1 addition & 1 deletion docs/upgrade/v1-4-2-to-v1-5-0.md
@@ -112,7 +112,7 @@ To fix the issue, perform any of the following actions:
kubectl -n harvester-system delete pods -l harvesterhci.io/upgradeLog=$UPGRADELOG_NAME,harvesterhci.io/upgradeLogComponent=aggregator
```

- On a computer with internet access, pull the required container images and then export them to a TAR file. Next, transfer the TAR file to the cluster nodes and then import the images by running the following comands on each node:
- On a computer with internet access, pull the required container images and then export them to a TAR file. Next, transfer the TAR file to the cluster nodes and then import the images by running the following commands on each node:

```bash
# Pull down the three container images
2 changes: 1 addition & 1 deletion versioned_docs/version-v1.5/upgrade/v1-4-2-to-v1-5-0.md
@@ -112,7 +112,7 @@ To fix the issue, perform any of the following actions:
kubectl -n harvester-system delete pods -l harvesterhci.io/upgradeLog=$UPGRADELOG_NAME,harvesterhci.io/upgradeLogComponent=aggregator
```

- On a computer with internet access, pull the required container images and then export them to a TAR file. Next, transfer the TAR file to the cluster nodes and then import the images by running the following comands on each node:
- On a computer with internet access, pull the required container images and then export them to a TAR file. Next, transfer the TAR file to the cluster nodes and then import the images by running the following commands on each node:

```bash
# Pull down the three container images