diff --git a/public/docs/i/1000/tasks/images/cancel-task-audit.webp b/public/docs/i/1000/tasks/images/cancel-task-audit.webp new file mode 100644 index 0000000000..bc7c105268 Binary files /dev/null and b/public/docs/i/1000/tasks/images/cancel-task-audit.webp differ diff --git a/public/docs/i/1000/tasks/images/cancel-task-settings.webp b/public/docs/i/1000/tasks/images/cancel-task-settings.webp new file mode 100644 index 0000000000..dd177d0cfb Binary files /dev/null and b/public/docs/i/1000/tasks/images/cancel-task-settings.webp differ diff --git a/public/docs/i/2000/tasks/images/cancel-task-audit.webp b/public/docs/i/2000/tasks/images/cancel-task-audit.webp new file mode 100644 index 0000000000..ab17bd71b5 Binary files /dev/null and b/public/docs/i/2000/tasks/images/cancel-task-audit.webp differ diff --git a/public/docs/i/2000/tasks/images/cancel-task-settings.webp b/public/docs/i/2000/tasks/images/cancel-task-settings.webp new file mode 100644 index 0000000000..8a510ce32f Binary files /dev/null and b/public/docs/i/2000/tasks/images/cancel-task-settings.webp differ diff --git a/public/docs/i/600/tasks/images/cancel-task-audit.webp b/public/docs/i/600/tasks/images/cancel-task-audit.webp new file mode 100644 index 0000000000..c939a2e5e8 Binary files /dev/null and b/public/docs/i/600/tasks/images/cancel-task-audit.webp differ diff --git a/public/docs/i/600/tasks/images/cancel-task-settings.webp b/public/docs/i/600/tasks/images/cancel-task-settings.webp new file mode 100644 index 0000000000..d995abd115 Binary files /dev/null and b/public/docs/i/600/tasks/images/cancel-task-settings.webp differ diff --git a/public/docs/i/x/tasks/images/cancel-task-audit.png b/public/docs/i/x/tasks/images/cancel-task-audit.png new file mode 100644 index 0000000000..44c8587506 Binary files /dev/null and b/public/docs/i/x/tasks/images/cancel-task-audit.png differ diff --git a/public/docs/i/x/tasks/images/cancel-task-settings.png b/public/docs/i/x/tasks/images/cancel-task-settings.png new 
file mode 100644 index 0000000000..37e52583e5 Binary files /dev/null and b/public/docs/i/x/tasks/images/cancel-task-settings.png differ diff --git a/public/docs/img/tasks/images/cancel-task-audit.png.json b/public/docs/img/tasks/images/cancel-task-audit.png.json new file mode 100644 index 0000000000..39dc927d6c --- /dev/null +++ b/public/docs/img/tasks/images/cancel-task-audit.png.json @@ -0,0 +1 @@ +{"width":2742,"height":736,"updated":"2026-05-01T01:08:58.254Z"} \ No newline at end of file diff --git a/public/docs/img/tasks/images/cancel-task-settings.png.json b/public/docs/img/tasks/images/cancel-task-settings.png.json new file mode 100644 index 0000000000..849ab6b182 --- /dev/null +++ b/public/docs/img/tasks/images/cancel-task-settings.png.json @@ -0,0 +1 @@ +{"width":3358,"height":1522,"updated":"2026-05-01T01:08:58.511Z"} \ No newline at end of file diff --git a/src/pages/docs/kubernetes/targets/kubernetes-agent/permissions.md b/src/pages/docs/kubernetes/targets/kubernetes-agent/permissions.md index d9ed20d3e2..efd77fb003 100644 --- a/src/pages/docs/kubernetes/targets/kubernetes-agent/permissions.md +++ b/src/pages/docs/kubernetes/targets/kubernetes-agent/permissions.md @@ -1,7 +1,7 @@ --- layout: src/layouts/Default.astro pubDate: 2024-04-29 -modDate: 2024-07-31 +modDate: 2026-05-01 title: Octopus Kubernetes agent permissions navTitle: Permissions description: Information about what permissions are required and how to adjust them @@ -10,34 +10,35 @@ navOrder: 20 The Kubernetes agent uses service accounts to manage access to cluster objects. -There are 3 main components that run with different permissions in the Kubernetes agent: +There are 2 main components that run with different permissions in the Kubernetes agent: + - **Agent Pod** - This is the main component and is responsible for receiving work from Octopus Server and scheduling it in the cluster. - **Script Pods** - These are run to execute work on the cluster. 
When Octopus issues work to the agent, the Tentacle will schedule a pod to run the script to execute the required work. These are short-lived, single-use pods which are removed by Tentacle when they are complete. -- **NFS Server Pod** - This optional component is used if no StorageClass is specified during installation. -# Agent Pod Permissions +## Agent Pod Permissions The agent pod uses a service account which only allows the agent to create, view and modify pods, pod logs, config maps, and secrets in the agent namespace. Adjusting these permissions is not supported. -| Variable Name | Description | Default Value | -|:-----------------------------------|:-----------------------------------------|:-------------------------| -| `agent.serviceAccount.name` | The name of the agent service account | `-tentacle` | -| `agent.serviceAccount.annotations` | Annotations given to the service account | `[]` | +| Variable Name | Description | Default Value | +| :--------------------------------- | :--------------------------------------- | :---------------------- | +| `agent.serviceAccount.name` | The name of the agent service account | `-tentacle` | +| `agent.serviceAccount.annotations` | Annotations given to the service account | `[]` | -# Script Pod Permissions +## Script Pod Permissions By default, the script pods (the pods which run your deployment steps) are given cluster wide admin access to deploy any and all cluster objects in any namespaces as configured in your deployment processes. 
The service account for script pods can be customized in a few ways: | Variable Name | Description | Default Value | -|:----------------------------------------------|:-----------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| :-------------------------------------------- | :--------------------------------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `scriptPods.serviceAccount.targetNamespaces` | Limit the namespaces that the service account can interact with. | `[]`
(When empty, all namespaces are allowed.) | | `scriptPods.serviceAccount.clusterRole.rules` | Give the service account custom rules |
- apiGroups:
  - '\*'
  resources:
  - '\*'
  verbs:
  - '\*'
- nonResourceURLs:
  - '\*'
  verbs:
  - '\*'
| -| `scriptPods.serviceAccount.name` | The name of the scriptPods service account | `-scripts` | +| `scriptPods.serviceAccount.name` | The name of the scriptPods service account | `-scripts` | | `scriptPods.serviceAccount.annotations` | Annotations given to the service account | `[]` | ### Examples +
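As one such example, the default cluster-wide `scriptPods.serviceAccount.clusterRole.rules` shown in the table can be narrowed in a values file. The following snippet is an illustrative sketch — the rule contents are assumptions for demonstration, not chart defaults:

```yaml
scriptPods:
  serviceAccount:
    clusterRole:
      rules:
        # Illustrative: limit script pods to common workload resources
        # instead of the default '*'/'*'/'*' rules
        - apiGroups: ['', 'apps']
          resources: ['deployments', 'services', 'configmaps', 'secrets']
          verbs: ['get', 'list', 'watch', 'create', 'update', 'patch', 'delete']
```

Pass this file via `--values` to `helm upgrade`, as in the examples below.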
Target Namespaces @@ -46,6 +47,7 @@ The service account for script pods can be customized in a few ways:
**command:** + ```bash helm upgrade --install --atomic \ --set scriptPods.serviceAccount.targetNamespaces="{development,preproduction}" \ @@ -62,6 +64,7 @@ helm upgrade --install --atomic \ my-agent\ oci://registry-1.docker.io/octopusdeploy/kubernetes-agent ``` +
@@ -72,6 +75,7 @@ oci://registry-1.docker.io/octopusdeploy/kubernetes-agent
**values.yaml:** + ```yaml scriptPods: serviceAccount: @@ -102,9 +106,11 @@ agent: - 'k8s-cluster-tag' bearerToken: 'XXXX' ``` +
**command:** + ```bash helm upgrade --install --atomic \ --values values.yaml \ @@ -113,9 +119,5 @@ helm upgrade --install --atomic \ my-agent \ oci://registry-1.docker.io/octopusdeploy/kubernetes-agent ``` -
- -# NFS Server Pod Permissions - -If you have not provided a predefined storageClassName for persistence, an NFS pod will be used. This NFS Server pod requires `privileged` access. For more information see [Kubernetes agent Storage](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/storage#nfs-storage). + diff --git a/src/pages/docs/kubernetes/targets/kubernetes-agent/storage.md b/src/pages/docs/kubernetes/targets/kubernetes-agent/storage.md index ca9385cb83..45145cb706 100644 --- a/src/pages/docs/kubernetes/targets/kubernetes-agent/storage.md +++ b/src/pages/docs/kubernetes/targets/kubernetes-agent/storage.md @@ -1,7 +1,7 @@ --- layout: src/layouts/Default.astro pubDate: 2024-04-29 -modDate: 2024-07-31 +modDate: 2026-05-01 title: Storage description: How to configure storage for a Kubernetes agent navOrder: 30 @@ -11,92 +11,52 @@ navOrder: 30 The following is applicable to both Kubernetes Agent and Kubernetes Worker. ::: -During a deployment, Octopus Server first sends any required scripts and packages to [Tentacle](https://octopus.com/docs/infrastructure/deployment-targets/tentacle) which writes them to the file system. The actual script execution then takes place in a different process called [Calamari](https://github.com/OctopusDeploy/Calamari), which retrieves the scripts and packages directly from the file system. +During a deployment, Octopus Server first sends any required scripts and packages to [Tentacle](https://octopus.com/docs/infrastructure/deployment-targets/tentacle) which writes them to the file system. The actual script execution then takes place in a different process called [Calamari](https://github.com/OctopusDeploy/Calamari), which retrieves the scripts and packages directly from the file system. On a Kubernetes agent (or worker), scripts are executed in separate Kubernetes pods (script pod) as opposed to in a local shell (Bash/PowerShell). 
This means the Tentacle pod and script pods don’t automatically share a common file system. Since the Kubernetes agent/worker is built on the Tentacle codebase, it is necessary to configure shared storage so that the Tentacle Pod can write the files in a place that the script pods can read from. -We offer two options for configuring the shared storage - you can use either the default NFS storage or specify a Storage Class name during setup: +We offer two options for configuring the shared storage - you can use either the cluster’s default ReadWriteOnce storage or specify a Storage Class name during setup: :::figure ![Kubernetes Agent Wizard Config Page](/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-config.png) ::: +## Cluster default ReadWriteOnce -## NFS storage - -By default, the Kubernetes agent Helm chart will set up an NFS server suitable for use by the agent inside your cluster. The server runs as a `StatefulSet` in the same namespace as the Kubernetes agent, and uses `EmptyDir` storage, as the working files of the agent are not required to be long-lived. - -This NFS server is referenced in the `StorageClass` that the Kubernetes agent and the script pod use. This `StorageClass` will then instruct the `NFS CSI Driver` to mount the server as directed. - -This default implementation is made to let you try the Kubernetes agent without worrying about installing a `ReadWriteMany` compatible `StorageClass` yourself. There are some drawbacks to this approach: - -### Privileges -The NFS server requires `privileged` access when running as a container, which may not be permitted depending on the cluster configuration. Access to the NFS pod should be kept to a minimum since it enables access to the host. - -:::div{.warning} -Red Hat OpenShift does not enable `privileged` access by default. When enabled, we have also encountered inconsistent file access issues using the NFS storage. 
We highly recommend the use of a [custom storage class](#custom-storage-class) when using Red Hat OpenShift. +:::div{.info} +This is a new default in v3 of the Kubernetes agent. ::: -### Reliability -Since the NFS server runs inside your Kubernetes cluster, upgrades and other cluster operations can cause the NFS server to restart. Due to how NFS stores and allows access to shared data, script pods will not be able to recover cleanly from an NFS server restart. This causes running deployments to fail when the NFS server is restarted. +By default, the Kubernetes agent will request the default storage class of the cluster and specify the `ReadWriteOnce` (also known as `RWO`) access mode. As each script pod needs access to the shared storage, this causes the script pods to be scheduled onto the same node as the main Tentacle pod. -If you have a use case that can’t tolerate occasional deployment failures, it’s recommended to provide your own `StorageClass` instead of using the default NFS implementation. +As a result, by default, the Kubernetes agent does not spread its work across multiple nodes, but performs all work on the same node. + +This change was made in v3 due to reliability and security concerns with the NFS storage that was previously the default. ## Custom StorageClass \{#custom-storage-class} -If you need a more reliable storage solution, then you can specify your own `StorageClass`. This `StorageClass` must be capable of `ReadWriteMany` (also known as `RWX`) access mode. +If you want script pods distributed across multiple nodes, you can specify your own `StorageClass`. This `StorageClass` must be capable of `ReadWriteMany` (also known as `RWX`) access mode. Many managed Kubernetes offerings will provide storage that require little effort to set up. These will be a “provisioner” (named as such as they “provision” storage for a `StorageClass`), which you can then tie to a `StorageClass`. 
Some examples are listed below: -|**Offering** |**Provisioner** |**Default StorageClass name** | -|----------------------------------|-----------------------------------|------------------------------------| -|[Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/concepts-storage) |`file.csi.azure.com` |`azurefile` | -|[Elastic Kubernetes Service (EKS)](https://docs.aws.amazon.com/eks/latest/userguide/storage.html) |`efs.csi.aws.com` |`efs-sc` | -|[Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine/docs/concepts/storage-overview) |`filestore.csi.storage.gke.io` |`standard-rwx` | +| **Offering** | **Provisioner** | **Default StorageClass name** | +| ----------------------------------------------------------------------------------------------------------- | ------------------------------ | ----------------------------- | +| [Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/concepts-storage) | `file.csi.azure.com` | `azurefile` | +| [Elastic Kubernetes Service (EKS)](https://docs.aws.amazon.com/eks/latest/userguide/storage.html) | `efs.csi.aws.com` | `efs-sc` | +| [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine/docs/concepts/storage-overview) | `filestore.csi.storage.gke.io` | `standard-rwx` | :::div{.info} See this [blog post](https://octopus.com/blog/efs-eks) for a tutorial on connecting EFS to an EKS cluster. 
::: If you manage your own cluster and don’t have offerings from cloud providers available, there are some in-cluster options you could explore: + - [Longhorn](https://longhorn.io/) - [Rook (CephFS)](https://rook.io/) - [GlusterFS](https://www.gluster.org/) -## Migrating from NFS storage to a custom StorageClass +## Azure Files CSI driver -If you installed the Kubernetes agent using the default NFS storage, and want to change to a custom `StorageClass` instead, simply rerun the installation Helm command with specified values for `persistence.storageClassName`. - -The following steps assume your Kubernetes agent is in the `octopus-agent-nfs-to-pv` namespace: - -### Step 1: Find your Helm release {#KubernetesAgentStorage-Step1-FindYourHelmRelease} - -Take note of the current Helm release name and Chart version for your Kubernetes agent by running the following command: -```bash -helm list --namespace octopus-agent-nfs-to-pv -``` - -The output should look like this: -:::figure -![Helm list command](/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-helm-list.png) -::: - -In this example, the release name is `nfs-to-pv` while the chart version is `1.0.1`. 
- -### Step 2: Change Persistence {#KubernetesAgentStorage-Step2-ChangePersistence} - -Run the following command (substitute the placeholders with your own values): -```bash -helm upgrade --reuse-values --atomic --set persistence.storageClassName="" --namespace --version "" oci://registry-1.docker.io/octopusdeploy/kubernetes-agent` -``` - -Here is an example to convert the `nfs-to-pv` Helm release in the `octopus-agent-nfs-to-pv` namespace to use the `octopus-agent-nfs-migration` `StorageClass`: -```bash -helm upgrade --reuse-values --atomic --set persistence.storageClassName="octopus-agent-nfs-migration" --namespace octopus-agent-nfs-to-pv --version "1.0.1" nfs-to-pv oci://registry-1.docker.io/octopusdeploy/kubernetes-agent` -``` - -:::div{.warning} -If you are using an existing `PersistentVolume` via its `StorageClassName`, then you must set the `persistence.size` value in the Helm command to match the capacity of the `PersistentVolume` for the `PersistentVolume` to bind. -::: +When specifying a custom storage class that leverages the [Azure Files CSI driver](https://learn.microsoft.com/en-us/azure/aks/create-volume-azure-files), it is highly recommended that the backing storage account be provisioned with the `PremiumV2_LRS` or `PremiumV2_ZRS` SKU (`skuname`). This will improve deployment performance due to the high-performance profile and low-latency SSDs. 
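Following the recommendation above, a custom `StorageClass` backed by the Azure Files CSI driver might be sketched as below. The class name is illustrative; the `provisioner` and `skuName` parameter follow the AKS Azure Files documentation:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: octopus-agent-azurefile-premium # illustrative name
provisioner: file.csi.azure.com
parameters:
  skuName: PremiumV2_LRS # premium SKU recommended above
reclaimPolicy: Delete
allowVolumeExpansion: true
```

The class name can then be supplied to the agent via the `persistence.storageClassName` Helm value.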
diff --git a/src/pages/docs/kubernetes/targets/kubernetes-agent/troubleshooting/version-specific-notes.md b/src/pages/docs/kubernetes/targets/kubernetes-agent/troubleshooting/version-specific-notes.md index d1f5cff5ed..6e02c7c629 100644 --- a/src/pages/docs/kubernetes/targets/kubernetes-agent/troubleshooting/version-specific-notes.md +++ b/src/pages/docs/kubernetes/targets/kubernetes-agent/troubleshooting/version-specific-notes.md @@ -1,15 +1,39 @@ --- layout: src/layouts/Default.astro pubDate: 2026-04-30 -modDate: 2026-04-30 +modDate: 2026-05-01 title: Version specific notes description: Contains a list of version specific notes navOrder: 72 --- +As the capabilities of the Kubernetes agent evolve, this page documents major-version-specific functionality that differs from the current major version. + ## Version 2 -### NFS CSI driver +### NFS storage + +By default, the Kubernetes agent Helm chart will set up an NFS server suitable for use by the agent inside your cluster. The server runs as a `StatefulSet` in the same namespace as the Kubernetes agent, and uses `EmptyDir` storage, as the working files of the agent are not required to be long-lived. + +This NFS server is referenced in the `StorageClass` that the Kubernetes agent and the script pod use. This `StorageClass` will then instruct the `NFS CSI Driver` to mount the server as directed. + +This default implementation is made to let you try the Kubernetes agent without worrying about installing a `ReadWriteMany` compatible `StorageClass` yourself. There are some drawbacks to this approach: + +#### Privileges + +The NFS server requires `privileged` access when running as a container, which may not be permitted depending on the cluster configuration. Access to the NFS pod should be kept to a minimum since it enables access to the host. + +:::div{.warning} +Red Hat OpenShift does not enable `privileged` access by default. 
When enabled, we have also encountered inconsistent file access issues using the NFS storage. We highly recommend the use of a [custom storage class](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/storage#custom-storage-class) when using Red Hat OpenShift. +::: + +#### Reliability + +Since the NFS server runs inside your Kubernetes cluster, upgrades and other cluster operations can cause the NFS server to restart. Due to how NFS stores and allows access to shared data, script pods will not be able to recover cleanly from an NFS server restart. This causes running deployments to fail when the NFS server is restarted. + +If you have a use case that can’t tolerate occasional deployment failures, it’s recommended to provide your own `StorageClass` instead of using the default NFS implementation. + +#### NFS CSI driver :::div{.hint} With the release of V3 of the agent, this is no longer required in the default installation. However, if you want to continue using the NFS storage with the v3 agent, you will need to install the NFS CSI driver. @@ -31,3 +55,46 @@ helm repo update ``` ::: + +#### NFS Server Pod Permissions + +If you have not provided a predefined storageClassName for persistence, an NFS pod will be used. This NFS Server pod requires `privileged` access. + +#### Migrating from NFS storage to a custom StorageClass + +If you installed the Kubernetes agent using the default NFS storage, and want to change to a custom `StorageClass` instead, simply rerun the installation Helm command with specified values for `persistence.storageClassName`. 
+ +The following steps assume your Kubernetes agent is in the `octopus-agent-nfs-to-pv` namespace: + +##### Step 1: Find your Helm release {#KubernetesAgentStorage-Step1-FindYourHelmRelease} + +Take note of the current Helm release name and Chart version for your Kubernetes agent by running the following command: + +```bash +helm list --namespace octopus-agent-nfs-to-pv +``` + +The output should look like this: +:::figure +![Helm list command](/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-helm-list.png) +::: + +In this example, the release name is `nfs-to-pv` while the chart version is `1.0.1`. + +##### Step 2: Change Persistence {#KubernetesAgentStorage-Step2-ChangePersistence} + +Run the following command (substitute the placeholders with your own values): + +```bash +helm upgrade --reuse-values --atomic --set persistence.storageClassName="<storage-class-name>" --namespace <namespace> --version "<chart-version>" <release-name> oci://registry-1.docker.io/octopusdeploy/kubernetes-agent +``` + +Here is an example to convert the `nfs-to-pv` Helm release in the `octopus-agent-nfs-to-pv` namespace to use the `octopus-agent-nfs-migration` `StorageClass`: + +```bash +helm upgrade --reuse-values --atomic --set persistence.storageClassName="octopus-agent-nfs-migration" --namespace octopus-agent-nfs-to-pv --version "1.0.1" nfs-to-pv oci://registry-1.docker.io/octopusdeploy/kubernetes-agent +``` + +:::div{.warning} +If you are using an existing `PersistentVolume` via its `StorageClassName`, then you must set the `persistence.size` value in the Helm command to match the capacity of the `PersistentVolume` for the `PersistentVolume` to bind. 
+::: diff --git a/src/pages/docs/kubernetes/targets/kubernetes-agent/upgrading.md b/src/pages/docs/kubernetes/targets/kubernetes-agent/upgrading.md index 3afe84f987..5df5346390 100644 --- a/src/pages/docs/kubernetes/targets/kubernetes-agent/upgrading.md +++ b/src/pages/docs/kubernetes/targets/kubernetes-agent/upgrading.md @@ -1,7 +1,7 @@ --- layout: src/layouts/Default.astro pubDate: 2024-08-22 -modDate: 2024-08-22 +modDate: 2026-05-01 title: Upgrading the Agent navTitle: Upgrading navSection: Kubernetes agent @@ -15,9 +15,9 @@ The Kubernetes agent is automatically kept up to date by Octopus Server when run Automatic upgrades can be disabled by updating the machine updates settings in your applied [machine policy](/docs/infrastructure/deployment-targets/machine-policies) -## V1 +## When do we release new major versions -Changes to the Kubernetes agent Helm Chart necessitated a breaking change. +Changes to the Kubernetes agent Helm Chart occasionally necessitate a breaking change. To make this clear, we perform a major version increase. The version of a Kubernetes agent is found by going to **Infrastructure** then into **DeploymentTargets**; from there click on the **Kubernetes agent** of interest; on its **Connectivity** sub-page you will see 'Current Version'. @@ -25,6 +25,8 @@ The version of a Kubernetes agent is found by going to **Infrastructure** then i ![Kubernetes agent default namespace](/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-upgrade-version.png) ::: +## V1 + Installed v1 instances will continue to operate as expected, however they will receive no further updates other than security updates. While you may continue to use v1 of the helm-chart, it is highly recommended to perform an upgrade to v2 to you receive ongoing functional and security updates.