Merged
1 change: 1 addition & 0 deletions public/docs/img/tasks/images/cancel-task-audit.png.json
@@ -0,0 +1 @@
{"width":2742,"height":736,"updated":"2026-05-01T01:08:58.254Z"}
1 change: 1 addition & 0 deletions public/docs/img/tasks/images/cancel-task-settings.png.json
@@ -0,0 +1 @@
{"width":3358,"height":1522,"updated":"2026-05-01T01:08:58.511Z"}
34 changes: 18 additions & 16 deletions src/pages/docs/kubernetes/targets/kubernetes-agent/permissions.md
@@ -1,7 +1,7 @@
---
layout: src/layouts/Default.astro
pubDate: 2024-04-29
modDate: 2024-07-31
modDate: 2026-05-01
title: Octopus Kubernetes agent permissions
navTitle: Permissions
description: Information about what permissions are required and how to adjust them
@@ -10,34 +10,35 @@ navOrder: 20

The Kubernetes agent uses service accounts to manage access to cluster objects.

There are 3 main components that run with different permissions in the Kubernetes agent:
There are 2 main components that run with different permissions in the Kubernetes agent:

- **Agent Pod** - This is the main component and is responsible for receiving work from Octopus Server and scheduling it in the cluster.
- **Script Pods** - These are run to execute work on the cluster. When Octopus issues work to the agent, the Tentacle will schedule a pod to run the script to execute the required work. These are short-lived, single-use pods which are removed by Tentacle when they are complete.
- **NFS Server Pod** - This optional component is used if no StorageClass is specified during installation.

# Agent Pod Permissions
## Agent Pod Permissions

The agent pod uses a service account which only allows the agent to create, view and modify pods, pod logs, config maps, and secrets in the agent namespace. Adjusting these permissions is not supported.

| Variable Name | Description | Default Value |
|:-----------------------------------|:-----------------------------------------|:-------------------------|
| `agent.serviceAccount.name` | The name of the agent service account | `<agent-name>-tentacle` |
| `agent.serviceAccount.annotations` | Annotations given to the service account | `[]` |
| Variable Name | Description | Default Value |
| :--------------------------------- | :--------------------------------------- | :---------------------- |
| `agent.serviceAccount.name` | The name of the agent service account | `<agent-name>-tentacle` |
| `agent.serviceAccount.annotations` | Annotations given to the service account | `[]` |
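As an illustrative sketch only (the names and annotation below are hypothetical, not part of the chart), these variables could be set through a Helm values file:

```yaml
# Hypothetical values.yaml fragment: override the agent service account name
# and attach an annotation (for example, to link a cloud IAM identity).
agent:
  serviceAccount:
    name: "my-agent-tentacle"
    annotations:
      example.com/owner: "platform-team"
```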

# Script Pod Permissions
## Script Pod Permissions

By default, the script pods (the pods which run your deployment steps) are given cluster wide admin access to deploy any and all cluster objects in any namespaces as configured in your deployment processes.

The service account for script pods can be customized in a few ways:

| Variable Name | Description | Default Value |
|:----------------------------------------------|:-----------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| :-------------------------------------------- | :--------------------------------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `scriptPods.serviceAccount.targetNamespaces` | Limit the namespaces that the service account can interact with. | `[]`<br/>(When empty, all namespaces are allowed.) |
| `scriptPods.serviceAccount.clusterRole.rules` | Give the service account custom rules | <pre>- apiGroups:<br/>&nbsp;&nbsp;- '\*'<br/>&nbsp;&nbsp;resources:<br/>&nbsp;&nbsp;- '\*'<br/>&nbsp;&nbsp;verbs:<br/>&nbsp;&nbsp;- '\*'<br/>- nonResourceURLs:<br/>&nbsp;&nbsp;- '\*'<br/>&nbsp;&nbsp;verbs:<br/>&nbsp;&nbsp;- '\*'</pre> |
| `scriptPods.serviceAccount.name` | The name of the scriptPods service account | `<agent-name>-scripts` |
| `scriptPods.serviceAccount.name` | The name of the scriptPods service account | `<agent-name>-scripts` |
| `scriptPods.serviceAccount.annotations` | Annotations given to the service account | `[]` |

### Examples

<details data-group="script-pod-value-examples">
<summary>Target Namespaces</summary>

@@ -46,6 +47,7 @@ The service account for script pods can be customized in a few ways:
<br/>

**command:**

```bash
helm upgrade --install --atomic \
--set scriptPods.serviceAccount.targetNamespaces="{development,preproduction}" \
@@ -62,6 +64,7 @@ helm upgrade --install --atomic \
my-agent \
oci://registry-1.docker.io/octopusdeploy/kubernetes-agent
```

</details>

<details data-group="script-pod-value-examples">
@@ -72,6 +75,7 @@ oci://registry-1.docker.io/octopusdeploy/kubernetes-agent
<br/>

**values.yaml:**

```yaml
scriptPods:
serviceAccount:
@@ -102,9 +106,11 @@ agent:
- 'k8s-cluster-tag'
bearerToken: 'XXXX'
```

<br/>

**command:**

```bash
helm upgrade --install --atomic \
--values values.yaml \
@@ -113,9 +119,5 @@ helm upgrade --install --atomic \
my-agent \
oci://registry-1.docker.io/octopusdeploy/kubernetes-agent
```
</details>


# NFS Server Pod Permissions

If you have not provided a predefined storageClassName for persistence, an NFS pod will be used. This NFS Server pod requires `privileged` access. For more information, see [Kubernetes agent Storage](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/storage#nfs-storage).
</details>
78 changes: 19 additions & 59 deletions src/pages/docs/kubernetes/targets/kubernetes-agent/storage.md
@@ -1,7 +1,7 @@
---
layout: src/layouts/Default.astro
pubDate: 2024-04-29
modDate: 2024-07-31
modDate: 2026-05-01
title: Storage
description: How to configure storage for a Kubernetes agent
navOrder: 30
@@ -11,92 +11,52 @@ navOrder: 30
The following is applicable to both Kubernetes Agent and Kubernetes Worker.
:::

During a deployment, Octopus Server first sends any required scripts and packages to [Tentacle](https://octopus.com/docs/infrastructure/deployment-targets/tentacle) which writes them to the file system. The actual script execution then takes place in a different process called [Calamari](https://github.com/OctopusDeploy/Calamari), which retrieves the scripts and packages directly from the file system.

On a Kubernetes agent (or worker), scripts are executed in separate Kubernetes pods (script pod) as opposed to in a local shell (Bash/PowerShell). This means the Tentacle pod and script pods don’t automatically share a common file system.

Since the Kubernetes agent/worker is built on the Tentacle codebase, it is necessary to configure shared storage so that the Tentacle Pod can write the files in a place that the script pods can read from.

We offer two options for configuring the shared storage - you can use either the default NFS storage or specify a Storage Class name during setup:
We offer two options for configuring the shared storage - you can use either the default ReadWriteOnce cluster default storage or specify a Storage Class name during setup:

:::figure
![Kubernetes Agent Wizard Config Page](/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-config.png)
:::

## Cluster default ReadWriteOnce

## NFS storage

By default, the Kubernetes agent Helm chart will set up an NFS server suitable for use by the agent inside your cluster. The server runs as a `StatefulSet` in the same namespace as the Kubernetes agent, and uses `EmptyDir` storage, as the working files of the agent are not required to be long-lived.

This NFS server is referenced in the `StorageClass` that the Kubernetes agent and the script pod use. This `StorageClass` will then instruct the `NFS CSI Driver` to mount the server as directed.

This default implementation is made to let you try the Kubernetes agent without worrying about installing a `ReadWriteMany` compatible `StorageClass` yourself. There are some drawbacks to this approach:

### Privileges
The NFS server requires `privileged` access when running as a container, which may not be permitted depending on the cluster configuration. Access to the NFS pod should be kept to a minimum since it enables access to the host.

:::div{.warning}
Red Hat OpenShift does not enable `privileged` access by default. When enabled, we have also encountered inconsistent file access issues using the NFS storage. We highly recommend the use of a [custom storage class](#custom-storage-class) when using Red Hat OpenShift.
:::div{.info}
This is a new default in v3 of the Kubernetes agent.
:::

### Reliability
Since the NFS server runs inside your Kubernetes cluster, upgrades and other cluster operations can cause the NFS server to restart. Due to how NFS stores and allows access to shared data, script pods will not be able to recover cleanly from an NFS server restart. This causes running deployments to fail when the NFS server is restarted.
By default, the Kubernetes agent will request the default storage class of the cluster and specify the `ReadWriteOnce` (also known as `RWO`) access mode. As each script pod needs access to the shared storage, the script pods are scheduled onto the same node as the main Tentacle pod.

If you have a use case that can’t tolerate occasional deployment failures, it’s recommended to provide your own `StorageClass` instead of using the default NFS implementation.
As a result, by default, the Kubernetes agent does not spread its work across multiple nodes, but performs all work on the same node.

This change was made from v2 due to reliability and security concerns with the previously default NFS storage.
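Conceptually, the default behavior resembles a claim of the following shape. This is a hedged sketch of the idea, not the chart's literal template; the name and size are illustrative:

```yaml
# Sketch of the kind of PersistentVolumeClaim the agent effectively requests
# by default: no storageClassName (so the cluster default is used) and
# ReadWriteOnce access, which pins all pods sharing the volume to one node.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: octopus-agent-workspace   # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                # illustrative size
```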

## Custom StorageClass \{#custom-storage-class}

If you need a more reliable storage solution, then you can specify your own `StorageClass`. This `StorageClass` must be capable of `ReadWriteMany` (also known as `RWX`) access mode.
If distribution of script pods across multiple nodes is desired, then you can specify your own `StorageClass`. This `StorageClass` must be capable of `ReadWriteMany` (also known as `RWX`) access mode.
Contributor comment:

Nit: consider also documenting the use of a non-default RWO storage class. For example:

Suggested change
If distribution of script pods across multiple nodes is desired, then you can specify your own `StorageClass`. This `StorageClass` must be capable of `ReadWriteMany` (also known as `RWX`) access mode.
You may also provide an explicit `StorageClass` to use, if you wish. If the `StorageClass` supports `ReadWriteMany` (also known as `RWX`) as an access mode, the agent will be able to scale past a single node.
Many managed Kubernetes offerings will provide storage that require little effort to set up. These will be a “provisioner” (named as such as they “provision” storage for a `StorageClass`), which you can then tie to a `StorageClass`. Some examples are listed below:

|**Offering** |**Provisioner** |**Default StorageClass name** |
|----------------------------------|-----------------------------------|------------------------------------|
|[Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/concepts-storage) |`file.csi.azure.com` |`azurefile` |
|[Elastic Kubernetes Service (EKS)](https://docs.aws.amazon.com/eks/latest/userguide/storage.html) |`efs.csi.aws.com` |`efs-sc` |
|[Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine/docs/concepts/storage-overview) |`filestore.csi.storage.gke.io` |`standard-rwx` |
| **Offering** | **Provisioner** | **Default StorageClass name** |
| ----------------------------------------------------------------------------------------------------------- | ------------------------------ | ----------------------------- |
| [Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/concepts-storage) | `file.csi.azure.com` | `azurefile` |
| [Elastic Kubernetes Service (EKS)](https://docs.aws.amazon.com/eks/latest/userguide/storage.html) | `efs.csi.aws.com` | `efs-sc` |
| [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine/docs/concepts/storage-overview) | `filestore.csi.storage.gke.io` | `standard-rwx` |
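A custom class from the table above can be supplied through the chart's `persistence.storageClassName` value. A hedged sketch of a values fragment, using AKS's `azurefile` class as the example:

```yaml
# Hypothetical values.yaml fragment: point the agent's shared workspace at a
# ReadWriteMany-capable StorageClass (AKS's azurefile shown as an example).
persistence:
  storageClassName: "azurefile"
```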

:::div{.info}
See this [blog post](https://octopus.com/blog/efs-eks) for a tutorial on connecting EFS to an EKS cluster.
:::

If you manage your own cluster and don’t have offerings from cloud providers available, there are some in-cluster options you could explore:

- [Longhorn](https://longhorn.io/)
- [Rook (CephFS)](https://rook.io/)
- [GlusterFS](https://www.gluster.org/)

## Migrating from NFS storage to a custom StorageClass
## Azure Files CSI driver

If you installed the Kubernetes agent using the default NFS storage, and want to change to a custom `StorageClass` instead, simply rerun the installation Helm command with specified values for `persistence.storageClassName`.

The following steps assume your Kubernetes agent is in the `octopus-agent-nfs-to-pv` namespace:

### Step 1: Find your Helm release {#KubernetesAgentStorage-Step1-FindYourHelmRelease}

Take note of the current Helm release name and Chart version for your Kubernetes agent by running the following command:
```bash
helm list --namespace octopus-agent-nfs-to-pv
```

The output should look like this:
:::figure
![Helm list command](/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-helm-list.png)
:::

In this example, the release name is `nfs-to-pv` while the chart version is `1.0.1`.

### Step 2: Change Persistence {#KubernetesAgentStorage-Step2-ChangePersistence}

Run the following command (substitute the placeholders with your own values):
```bash
helm upgrade --reuse-values --atomic --set persistence.storageClassName="<storage class>" --namespace <namespace> --version "<chart version>" <release name> oci://registry-1.docker.io/octopusdeploy/kubernetes-agent
```

Here is an example to convert the `nfs-to-pv` Helm release in the `octopus-agent-nfs-to-pv` namespace to use the `octopus-agent-nfs-migration` `StorageClass`:
```bash
helm upgrade --reuse-values --atomic --set persistence.storageClassName="octopus-agent-nfs-migration" --namespace octopus-agent-nfs-to-pv --version "1.0.1" nfs-to-pv oci://registry-1.docker.io/octopusdeploy/kubernetes-agent
```

:::div{.warning}
If you are using an existing `PersistentVolume` via its `StorageClassName`, then you must set the `persistence.size` value in the Helm command to match the capacity of the `PersistentVolume` for the `PersistentVolume` to bind.
:::
When specifying a custom storage class that leverages the [Azure Files CSI driver](https://learn.microsoft.com/en-us/azure/aks/create-volume-azure-files), it is highly recommended that the backing storage account be provisioned with the `PremiumV2_LRS` or `PremiumV2_ZRS` SKU (`skuname`). This improves deployment performance due to the high-performance profile and low-latency SSDs.
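A hedged sketch of such a StorageClass is shown below. The class name is hypothetical, and the `skuName` parameter follows the Azure Files CSI driver's documented convention; verify the exact parameter values against your AKS version before use:

```yaml
# Hypothetical StorageClass using the Azure Files CSI driver with a
# PremiumV2 SKU for the backing storage account.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-premiumv2   # hypothetical name
provisioner: file.csi.azure.com
parameters:
  skuName: PremiumV2_LRS      # or PremiumV2_ZRS
reclaimPolicy: Delete
allowVolumeExpansion: true
```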