diff --git a/docs/upgrade/troubleshooting.md b/docs/upgrade/troubleshooting.md
index 724fbac2138..d517c233cbb 100644
--- a/docs/upgrade/troubleshooting.md
+++ b/docs/upgrade/troubleshooting.md
@@ -1,5 +1,5 @@
 ---
-sidebar_position: 11
+sidebar_position: 12
 sidebar_label: Troubleshooting
 title: "Troubleshooting"
 ---
diff --git a/docs/upgrade/v1-1-2-to-v1-2-0.md b/docs/upgrade/v1-1-2-to-v1-2-0.md
index cbdb0ca9c8f..e6fa90a6df8 100644
--- a/docs/upgrade/v1-1-2-to-v1-2-0.md
+++ b/docs/upgrade/v1-1-2-to-v1-2-0.md
@@ -1,5 +1,5 @@
 ---
-sidebar_position: 10
+sidebar_position: 11
 sidebar_label: Upgrade from v1.1.2 to v1.2.0 (not recommended)
 title: "Upgrade from v1.1.2 to v1.2.0 (not recommended)"
 ---
diff --git a/docs/upgrade/v1-2-0-to-v1-2-1.md b/docs/upgrade/v1-2-0-to-v1-2-1.md
index 698210377d6..f0b0d568ff9 100644
--- a/docs/upgrade/v1-2-0-to-v1-2-1.md
+++ b/docs/upgrade/v1-2-0-to-v1-2-1.md
@@ -1,5 +1,5 @@
 ---
-sidebar_position: 9
+sidebar_position: 10
 sidebar_label: Upgrade from v1.1.2/v1.1.3/v1.2.0 to v1.2.1
 title: "Upgrade from v1.1.2/v1.1.3/v1.2.0 to v1.2.1"
 ---
diff --git a/docs/upgrade/v1-2-1-to-v1-2-2.md b/docs/upgrade/v1-2-1-to-v1-2-2.md
index 076fb1c934a..a99b69b8437 100644
--- a/docs/upgrade/v1-2-1-to-v1-2-2.md
+++ b/docs/upgrade/v1-2-1-to-v1-2-2.md
@@ -1,5 +1,5 @@
 ---
-sidebar_position: 8
+sidebar_position: 9
 sidebar_label: Upgrade from v1.2.1 to v1.2.2
 title: "Upgrade from v1.2.1 to v1.2.2"
 ---
diff --git a/docs/upgrade/v1-2-2-to-v1-3-1.md b/docs/upgrade/v1-2-2-to-v1-3-1.md
index 58a0912756e..0cc5069ba0d 100644
--- a/docs/upgrade/v1-2-2-to-v1-3-1.md
+++ b/docs/upgrade/v1-2-2-to-v1-3-1.md
@@ -1,5 +1,5 @@
 ---
-sidebar_position: 7
+sidebar_position: 8
 sidebar_label: Upgrade from v1.2.2/v1.3.0 to v1.3.1
 title: "Upgrade from v1.2.2/v1.3.0 to v1.3.1"
 ---
diff --git a/docs/upgrade/v1-3-1-to-v1-3-2.md b/docs/upgrade/v1-3-1-to-v1-3-2.md
index 4f186dea916..cda6b81ed4f 100644
--- a/docs/upgrade/v1-3-1-to-v1-3-2.md
+++ b/docs/upgrade/v1-3-1-to-v1-3-2.md
@@ -1,5 +1,5 @@
 ---
-sidebar_position: 6
+sidebar_position: 7
 sidebar_label: Upgrade from v1.3.1 to v1.3.2
 title: "Upgrade from v1.3.1 to v1.3.2"
 ---
diff --git a/docs/upgrade/v1-3-2-to-v1-4-0.md b/docs/upgrade/v1-3-2-to-v1-4-0.md
index 4f80f0c6f1e..3a9b7f546a5 100644
--- a/docs/upgrade/v1-3-2-to-v1-4-0.md
+++ b/docs/upgrade/v1-3-2-to-v1-4-0.md
@@ -1,5 +1,5 @@
 ---
-sidebar_position: 5
+sidebar_position: 6
 sidebar_label: Upgrade from v1.3.2 to v1.4.0
 title: "Upgrade from v1.3.2 to v1.4.0"
 ---
diff --git a/docs/upgrade/v1-4-0-to-v1-4-1.md b/docs/upgrade/v1-4-0-to-v1-4-1.md
index 427a3ab7db8..5fc9ab20be2 100644
--- a/docs/upgrade/v1-4-0-to-v1-4-1.md
+++ b/docs/upgrade/v1-4-0-to-v1-4-1.md
@@ -1,5 +1,5 @@
 ---
-sidebar_position: 4
+sidebar_position: 5
 sidebar_label: Upgrade from v1.4.0 to v1.4.1
 title: "Upgrade from v1.4.0 to v1.4.1"
 ---
diff --git a/docs/upgrade/v1-4-1-to-v1-4-2.md b/docs/upgrade/v1-4-1-to-v1-4-2.md
index 97b0d41114a..6b5c00e2c68 100644
--- a/docs/upgrade/v1-4-1-to-v1-4-2.md
+++ b/docs/upgrade/v1-4-1-to-v1-4-2.md
@@ -1,5 +1,5 @@
 ---
-sidebar_position: 3
+sidebar_position: 4
 sidebar_label: Upgrade from v1.4.1 to v1.4.2
 title: "Upgrade from v1.4.1 to v1.4.2"
 ---
diff --git a/docs/upgrade/v1-4-2-to-v1-4-3.md b/docs/upgrade/v1-4-2-to-v1-4-3.md
new file mode 100644
index 00000000000..bcdb08a66ce
--- /dev/null
+++ b/docs/upgrade/v1-4-2-to-v1-4-3.md
@@ -0,0 +1,110 @@
+---
+sidebar_position: 3
+sidebar_label: Upgrade from v1.4.2 to v1.4.3
+title: "Upgrade from v1.4.2 to v1.4.3"
+---
+
## General information

An **Upgrade** button appears on the **Dashboard** screen whenever a new Harvester version that you can upgrade to becomes available. For more information, see [Start an upgrade](./automatic.md#start-an-upgrade).

For air-gapped environments, see [Prepare an air-gapped upgrade](./automatic.md#prepare-an-air-gapped-upgrade).

## Known issues

---

### 1. Air-gapped upgrade stuck with `ImagePullBackOff` error in Fluentd and Fluent Bit pods

The upgrade may become stuck at the very beginning of the process, as indicated by 0% progress and items marked **Pending** in the **Upgrade** dialog of the Harvester UI.

Specifically, the Fluentd and Fluent Bit pods may become stuck in the `ImagePullBackOff` status. To check the status of the pods, run the following commands:

```bash
$ kubectl -n harvester-system get upgrades -l harvesterhci.io/latestUpgrade=true
NAME                 AGE
hvst-upgrade-x2hz8   7m14s

$ kubectl -n harvester-system get upgradelogs -l harvesterhci.io/upgrade=hvst-upgrade-x2hz8
NAME                            UPGRADE
hvst-upgrade-x2hz8-upgradelog   hvst-upgrade-x2hz8

$ kubectl -n harvester-system get pods -l harvesterhci.io/upgradeLog=hvst-upgrade-x2hz8-upgradelog
NAME                                                        READY   STATUS             RESTARTS   AGE
hvst-upgrade-x2hz8-upgradelog-downloader-6cdb864dd9-6bw98   1/1     Running            0          7m7s
hvst-upgrade-x2hz8-upgradelog-infra-fluentbit-2nq7q         0/1     ImagePullBackOff   0          7m42s
hvst-upgrade-x2hz8-upgradelog-infra-fluentbit-697wf         0/1     ImagePullBackOff   0          7m42s
hvst-upgrade-x2hz8-upgradelog-infra-fluentbit-kd8kl         0/1     ImagePullBackOff   0          7m42s
hvst-upgrade-x2hz8-upgradelog-infra-fluentd-0               0/2     ImagePullBackOff   0          7m42s
```

This occurs because the following container images are neither preloaded on the cluster nodes nor able to be pulled from the internet:

- `ghcr.io/kube-logging/fluentd:v1.15-ruby3`
- `ghcr.io/kube-logging/config-reloader:v0.0.5`
- `fluent/fluent-bit:2.1.8`
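To confirm that this is the failure mode you are hitting, you can check whether the images are present on a node. The following is a minimal sketch, assuming `crictl` is available on the Harvester node; no output means none of the three images are present:

```bash
# Run on a cluster node: list the logging-related images known to containerd.
# In an air-gapped cluster, missing images here cannot be pulled on demand.
crictl images | grep -E 'kube-logging/fluentd|kube-logging/config-reloader|fluent/fluent-bit'
```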
To fix the issue, perform any of the following actions:

- Update the Logging CR to use the images that are already preloaded on the cluster nodes. To do this, run the following commands against the cluster:

  ```bash
  # Get the Logging CR names
  OPERATOR_LOGGING_NAME=$(kubectl get loggings -l app.kubernetes.io/name=rancher-logging -o jsonpath="{.items[0].metadata.name}")
  INFRA_LOGGING_NAME=$(kubectl get loggings -l harvesterhci.io/upgradeLogComponent=infra -o jsonpath="{.items[0].metadata.name}")

  # Gather image info from the operator's Logging CR
  FLUENTD_IMAGE_REPO=$(kubectl get loggings $OPERATOR_LOGGING_NAME -o jsonpath="{.spec.fluentd.image.repository}")
  FLUENTD_IMAGE_TAG=$(kubectl get loggings $OPERATOR_LOGGING_NAME -o jsonpath="{.spec.fluentd.image.tag}")

  FLUENTBIT_IMAGE_REPO=$(kubectl get loggings $OPERATOR_LOGGING_NAME -o jsonpath="{.spec.fluentbit.image.repository}")
  FLUENTBIT_IMAGE_TAG=$(kubectl get loggings $OPERATOR_LOGGING_NAME -o jsonpath="{.spec.fluentbit.image.tag}")

  CONFIG_RELOADER_IMAGE_REPO=$(kubectl get loggings $OPERATOR_LOGGING_NAME -o jsonpath="{.spec.fluentd.configReloaderImage.repository}")
  CONFIG_RELOADER_IMAGE_TAG=$(kubectl get loggings $OPERATOR_LOGGING_NAME -o jsonpath="{.spec.fluentd.configReloaderImage.tag}")

  # Patch the infra Logging CR with the gathered image info
  kubectl patch logging $INFRA_LOGGING_NAME --type=json -p="[{\"op\":\"replace\",\"path\":\"/spec/fluentbit/image\",\"value\":{\"repository\":\"$FLUENTBIT_IMAGE_REPO\",\"tag\":\"$FLUENTBIT_IMAGE_TAG\"}}]"
  kubectl patch logging $INFRA_LOGGING_NAME --type=json -p="[{\"op\":\"replace\",\"path\":\"/spec/fluentd/image\",\"value\":{\"repository\":\"$FLUENTD_IMAGE_REPO\",\"tag\":\"$FLUENTD_IMAGE_TAG\"}}]"
  kubectl patch logging $INFRA_LOGGING_NAME --type=json -p="[{\"op\":\"replace\",\"path\":\"/spec/fluentd/configReloaderImage\",\"value\":{\"repository\":\"$CONFIG_RELOADER_IMAGE_REPO\",\"tag\":\"$CONFIG_RELOADER_IMAGE_TAG\"}}]"
  ```

  After the Logging CR is updated, the status of the Fluentd and Fluent Bit pods should change to `Running` within a few moments and the upgrade process should continue. If the Fluentd pod remains in the `ImagePullBackOff` status, force it to restart by deleting it with the following commands:

  ```bash
  UPGRADE_NAME=$(kubectl -n harvester-system get upgrades -l harvesterhci.io/latestUpgrade=true -o jsonpath='{.items[0].metadata.name}')
  UPGRADELOG_NAME=$(kubectl -n harvester-system get upgradelogs -l harvesterhci.io/upgrade=$UPGRADE_NAME -o jsonpath='{.items[0].metadata.name}')

  kubectl -n harvester-system delete pods -l harvesterhci.io/upgradeLog=$UPGRADELOG_NAME,harvesterhci.io/upgradeLogComponent=aggregator
  ```

- On a computer with internet access, pull the required container images and then export them to a TAR file. Next, transfer the TAR file to the cluster nodes and then import the images by running the following commands on each node:

  ```bash
  # Pull down the three container images
  docker pull ghcr.io/kube-logging/fluentd:v1.15-ruby3
  docker pull ghcr.io/kube-logging/config-reloader:v0.0.5
  docker pull fluent/fluent-bit:2.1.8

  # Export the images to a tar file
  docker save \
    ghcr.io/kube-logging/fluentd:v1.15-ruby3 \
    ghcr.io/kube-logging/config-reloader:v0.0.5 \
    fluent/fluent-bit:2.1.8 > upgradelog-images.tar

  # After transferring the tar file to the cluster nodes, import the images (must be run on each node)
  ctr -n k8s.io images import upgradelog-images.tar
  ```

  The upgrade process should continue after the images are preloaded. A sketch for verifying the import follows this list.

- (Not recommended) Restart the upgrade process with logging disabled. Ensure that the **Enable Logging** checkbox in the **Upgrade** dialog is not selected.
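If you preloaded the images manually, the following minimal sketch can help confirm on each node that all three images landed in containerd's `k8s.io` namespace. It assumes the Docker Hub image is listed under its fully qualified name (`docker.io/fluent/fluent-bit:2.1.8`); adjust the references if your node reports them differently:

```bash
# Run on each node after the import: report which of the three images are present.
for img in \
  ghcr.io/kube-logging/fluentd:v1.15-ruby3 \
  ghcr.io/kube-logging/config-reloader:v0.0.5 \
  docker.io/fluent/fluent-bit:2.1.8
do
  if ctr -n k8s.io images ls -q | grep -qF "$img"; then
    echo "present: $img"
  else
    echo "missing: $img"
  fi
done
```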
Related issues:

- [[BUG] AirGap Upgrades Seem Blocked with Fluentbit/FluentD](https://github.com/harvester/harvester/issues/7955)
diff --git a/docs/upgrade/v1-4-2-to-v1-5-0.md b/docs/upgrade/v1-4-2-to-v1-5-0.md
index 637e7564400..98ea6eab8a4 100644
--- a/docs/upgrade/v1-4-2-to-v1-5-0.md
+++ b/docs/upgrade/v1-4-2-to-v1-5-0.md
@@ -112,7 +112,7 @@ To fix the issue, perform any of the following actions:
   kubectl -n harvester-system delete pods -l harvesterhci.io/upgradeLog=$UPGRADELOG_NAME,harvesterhci.io/upgradeLogComponent=aggregator
   ```
 
-- On a computer with internet access, pull the required container images and then export them to a TAR file. Next, transfer the TAR file to the cluster nodes and then import the images by running the following comands on each node:
+- On a computer with internet access, pull the required container images and then export them to a TAR file. Next, transfer the TAR file to the cluster nodes and then import the images by running the following commands on each node:
 
   ```bash
   # Pull down the three container images
diff --git a/versioned_docs/version-v1.5/upgrade/v1-4-2-to-v1-5-0.md b/versioned_docs/version-v1.5/upgrade/v1-4-2-to-v1-5-0.md
index 637e7564400..98ea6eab8a4 100644
--- a/versioned_docs/version-v1.5/upgrade/v1-4-2-to-v1-5-0.md
+++ b/versioned_docs/version-v1.5/upgrade/v1-4-2-to-v1-5-0.md
@@ -112,7 +112,7 @@ To fix the issue, perform any of the following actions:
   kubectl -n harvester-system delete pods -l harvesterhci.io/upgradeLog=$UPGRADELOG_NAME,harvesterhci.io/upgradeLogComponent=aggregator
   ```
 
-- On a computer with internet access, pull the required container images and then export them to a TAR file. Next, transfer the TAR file to the cluster nodes and then import the images by running the following comands on each node:
+- On a computer with internet access, pull the required container images and then export them to a TAR file. Next, transfer the TAR file to the cluster nodes and then import the images by running the following commands on each node:
 
   ```bash
   # Pull down the three container images