diff --git a/config/200-clusterrole.yaml b/config/200-clusterrole.yaml
index 4012d881179..d46ab8022c9 100644
--- a/config/200-clusterrole.yaml
+++ b/config/200-clusterrole.yaml
@@ -60,7 +60,7 @@ rules:
   # Unclear if this access is actually required. Simply a hold-over from the previous
   # incarnation of the controller's ClusterRole.
   - apiGroups: ["apps"]
-    resources: ["deployments"]
+    resources: ["deployments", "statefulsets"]
    verbs: ["get", "list", "create", "update", "delete", "patch", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments/finalizers"]
diff --git a/docs/install.md b/docs/install.md
index d0c12c45b0e..192006a041c 100644
--- a/docs/install.md
+++ b/docs/install.md
@@ -268,6 +268,19 @@ file lists the keys you can customize along with their default values.
 To customize the behavior of the Pipelines Controller, modify the ConfigMap `feature-flags` as follows:
+- `disable-affinity-assistant` - set this flag to `true` to disable the [Affinity Assistant](./workspaces.md#affinity-assistant-and-specifying-workspace-order-in-a-pipeline)
+  that is used to provide Node Affinity for `TaskRun` pods that share a workspace volume.
+  The Affinity Assistant is incompatible with other affinity rules, such as a `NodeSelector`,
+  configured for `TaskRun` pods.
+
+  **Note:** The Affinity Assistant uses [Inter-pod affinity and anti-affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity),
+  which requires a substantial amount of processing and can significantly slow down scheduling
+  in large clusters. We do not recommend using it in clusters larger than several hundred nodes.
+
+  **Note:** Pod anti-affinity requires nodes to be consistently labelled; in other words, every
+  node in the cluster must have an appropriate label matching `topologyKey`. If some or all nodes
+  are missing the specified `topologyKey` label, it can lead to unintended behavior.
+
 - `disable-home-env-overwrite` - set this flag to `true` to prevent Tekton
 from overriding the `$HOME` environment variable for the containers executing your `Steps`.
 The default is `false`. For more information, see the [associated issue](https://github.com/tektoncd/pipeline/issues/2013).
diff --git a/docs/labels.md b/docs/labels.md
index 549c9443550..e553ec4f926 100644
--- a/docs/labels.md
+++ b/docs/labels.md
@@ -58,6 +58,8 @@ The following labels are added to resources automatically:
   reference a `ClusterTask` will also receive `tekton.dev/task`.
 - `tekton.dev/taskRun` is added to `Pods`, and contains the name of the
   `TaskRun` that created the `Pod`.
+- `app.kubernetes.io/instance` and `app.kubernetes.io/component` are added to
+  Affinity Assistant `StatefulSets` and `Pods`. These labels are used to configure Pod Affinity for `TaskRuns`.

 ## Examples

diff --git a/docs/tasks.md b/docs/tasks.md
index 3ab55ac1d86..b31ea188da9 100644
--- a/docs/tasks.md
+++ b/docs/tasks.md
@@ -363,7 +363,8 @@ steps:
 ### Specifying `Workspaces`

 [`Workspaces`](workspaces.md#using-workspaces-in-tasks) allow you to specify
-one or more volumes that your `Task` requires during execution. For example:
+one or more volumes that your `Task` requires during execution. It is recommended that `Tasks` use **at most**
+one writable `Workspace`.
For example: ```yaml spec: diff --git a/docs/workspaces.md b/docs/workspaces.md index c82f82a145a..6433eece5d0 100644 --- a/docs/workspaces.md +++ b/docs/workspaces.md @@ -15,7 +15,7 @@ weight: 5 - [Mapping `Workspaces` in `Tasks` to `TaskRuns`](#mapping-workspaces-in-tasks-to-taskruns) - [Examples of `TaskRun` definition using `Workspaces`](#examples-of-taskrun-definition-using-workspaces) - [Using `Workspaces` in `Pipelines`](#using-workspaces-in-pipelines) - - [Specifying `Workspace` order in a `Pipeline`](#specifying-workspace-order-in-a-pipeline) + - [Affinity Assistant and specifying `Workspace` order in a `Pipeline`](#affinity-assistant-and-specifying-workspace-order-in-a-pipeline) - [Specifying `Workspaces` in `PipelineRuns`](#specifying-workspaces-in-pipelineruns) - [Example `PipelineRun` definition using `Workspaces`](#example-pipelinerun-definition-using-workspaces) - [Specifying `VolumeSources` in `Workspaces`](#specifying-volumesources-in-workspaces) @@ -89,7 +89,8 @@ To configure one or more `Workspaces` in a `Task`, add a `workspaces` list with Note the following: -- A `Task` definition can include as many `Workspaces` as it needs. +- A `Task` definition can include as many `Workspaces` as it needs. It is recommended that `Tasks` use + **at most** one _writable_ `Workspace`. - A `readOnly` `Workspace` will have its volume mounted as read-only. Attempting to write to a `readOnly` `Workspace` will result in errors and failed `TaskRuns`. - `mountPath` can be either absolute or relative. Absolute paths start with `/` and relative paths @@ -204,26 +205,27 @@ Include a `subPath` in the workspace binding to mount different parts of the sam The `subPath` specified in a `Pipeline` will be appended to any `subPath` specified as part of the `PipelineRun` workspace declaration. So a `PipelineRun` declaring a Workspace with `subPath` of `/foo` for a `Pipeline` who binds it to a `Task` with `subPath` of `/bar` will end up mounting the `Volume`'s `/foo/bar` directory. -#### Specifying `Workspace` order in a `Pipeline` +#### Affinity Assistant and specifying `Workspace` order in a `Pipeline` Sharing a `Workspace` between `Tasks` requires you to define the order in which those `Tasks` -will be accessing that `Workspace` since different classes of storage have different limits -for concurrent reads and writes. For example, a `PersistentVolumeClaim` with -[access mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) -`ReadWriteOnce` only allow `Tasks` on the same node writing to it at once. - -Using parallel `Tasks` in a `Pipeline` will work with `PersistentVolumeClaims` configured with -[access mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) -`ReadWriteMany` or `ReadOnlyMany` but you must ensure that those are available for your storage class. -When using `PersistentVolumeClaims` with access mode `ReadWriteOnce` for parallel `Tasks`, you can configure a -workspace with it's own `PersistentVolumeClaim` for each parallel `Task`. - -Use the `runAfter` field in your `Pipeline` definition to define when a `Task` should be executed. For more -information, see the [`runAfter` documentation](pipelines.md#runAfter). - -**Warning:** You *must* ensure that this order is compatible with the configured access modes for your `PersistentVolumeClaim`. 
-Parallel `Tasks` using the same `PersistentVolumeClaim` with access mode `ReadWriteOnce`, may execute on
-different nodes and be forced to execute sequentially which may cause `Tasks` to time out.
+write to or read from that `Workspace`. Use the `runAfter` field in your `Pipeline` definition
+to define when a `Task` should be executed. For more information, see the [`runAfter` documentation](pipelines.md#runAfter).
+
+When a `PersistentVolumeClaim` is used as a volume source for a `Workspace` in a `PipelineRun`,
+an Affinity Assistant will be created. The Affinity Assistant acts as a placeholder for the `TaskRun` pods
+sharing the same `Workspace`. All `TaskRun` pods within the `PipelineRun` that share the `Workspace`
+will be scheduled to the same Node as the Affinity Assistant pod. This means that the Affinity Assistant
+is incompatible with, for example, `NodeSelectors` or other affinity rules configured for the `TaskRun` pods.
+The Affinity Assistant is deleted when the `PipelineRun` is completed. The Affinity Assistant can be disabled
+by setting the [disable-affinity-assistant](install.md#customizing-basic-execution-parameters) feature gate to `true`.
+
+**Note:** The Affinity Assistant uses [Inter-pod affinity and anti-affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity),
+which requires a substantial amount of processing and can significantly slow down scheduling
+in large clusters. We do not recommend using it in clusters larger than several hundred nodes.
+
+**Note:** Pod anti-affinity requires nodes to be consistently labelled; in other words, every
+node in the cluster must have an appropriate label matching `topologyKey`. If some or all nodes
+are missing the specified `topologyKey` label, it can lead to unintended behavior.

 #### Specifying `Workspaces` in `PipelineRuns`

diff --git a/examples/v1beta1/pipelineruns/pipeline-run-with-parallel-tasks-using-pvc.yaml b/examples/v1beta1/pipelineruns/pipeline-run-with-parallel-tasks-using-pvc.yaml
new file mode 100644
index 00000000000..c72bd8d427f
--- /dev/null
+++ b/examples/v1beta1/pipelineruns/pipeline-run-with-parallel-tasks-using-pvc.yaml
@@ -0,0 +1,205 @@
+# This example shows how both sequential and parallel Tasks can share data
+# using a PersistentVolumeClaim as a workspace. The TaskRun pods that share
+# a workspace will be scheduled to the same Node in your cluster by an
+# Affinity Assistant (unless it is disabled). The REPORTER task does not
+# use a workspace, so it does not get affinity to the Affinity Assistant
+# and can be scheduled to any Node. If multiple concurrent PipelineRuns are
+# executed, their Affinity Assistant pods will repel each other to different
+# Nodes in a best-effort fashion.
+#
+# A PipelineRun will pass a message parameter to the Pipeline in this example.
+# The STARTER task will write the message to a file in the workspace. The UPPER
+# and LOWER tasks will execute in parallel and process the message written by
+# the STARTER, transforming it to upper case and lower case. The REPORTER task
+# will use the Task Result from the UPPER task and print it - it is intended
+# to mimic a Task that sends data to an external service and shows a Task that
+# doesn't use a workspace. The VALIDATOR task will validate the results from
+# UPPER and LOWER.
+#
+# Use the runAfter property in a Pipeline to declare that a task depends on
+# another task. Output can be shared either via a Task Result (as the REPORTER task does)
+# or via files in a workspace.
+#
+#           -- (upper) -- (reporter)
+#          /                        \
+# (starter)                          (validator)
+#          \                        /
+#           -- (lower) ------------
+
+apiVersion: tekton.dev/v1beta1
+kind: Pipeline
+metadata:
+  name: parallel-pipeline
+spec:
+  params:
+  - name: message
+    type: string
+
+  workspaces:
+  - name: ws
+
+  tasks:
+  - name: starter          # A Task that does not declare a runAfter property
+    taskRef:               # will start execution immediately.
+      name: persist-param
+    params:
+    - name: message
+      value: $(params.message)
+    workspaces:
+    - name: task-ws
+      workspace: ws
+      subPath: init
+
+  - name: upper
+    runAfter:              # Note the use of runAfter here to declare that this task
+    - starter              # depends on a previous task.
+    taskRef:
+      name: to-upper
+    params:
+    - name: input-path
+      value: init/message
+    workspaces:
+    - name: w
+      workspace: ws
+
+  - name: lower
+    runAfter:
+    - starter
+    taskRef:
+      name: to-lower
+    params:
+    - name: input-path
+      value: init/message
+    workspaces:
+    - name: w
+      workspace: ws
+
+  - name: reporter         # This task does not use a workspace and may be scheduled to
+    runAfter:              # any Node in the cluster.
+    - upper
+    taskRef:
+      name: result-reporter
+    params:
+    - name: result-to-report
+      value: $(tasks.upper.results.message)  # A result from a previous task is used as a param.
+
+  - name: validator        # This task validates the output from the upper and lower Tasks.
+    runAfter:              # It does not strictly depend on the reporter Task,
+    - reporter             # but you may want to skip this task if the reporter Task fails.
+    - lower
+    taskRef:
+      name: validator
+    workspaces:
+    - name: files
+      workspace: ws
+---
+apiVersion: tekton.dev/v1beta1
+kind: Task
+metadata:
+  name: persist-param
+spec:
+  params:
+  - name: message
+    type: string
+  results:
+  - name: message
+    description: A result message
+  steps:
+  - name: write
+    image: ubuntu
+    script: echo $(params.message) | tee $(workspaces.task-ws.path)/message $(results.message.path)
+  workspaces:
+  - name: task-ws
+---
+apiVersion: tekton.dev/v1beta1
+kind: Task
+metadata:
+  name: to-upper
+spec:
+  description: |
+    This task reads and processes a file from the workspace and writes the result
+    both to a file in the workspace and as a Task Result.
+  params:
+  - name: input-path
+    type: string
+  results:
+  - name: message
+    description: Input message in upper case
+  steps:
+  - name: to-upper
+    image: ubuntu
+    script: cat $(workspaces.w.path)/$(params.input-path) | tr '[:lower:]' '[:upper:]' | tee $(workspaces.w.path)/upper $(results.message.path)
+  workspaces:
+  - name: w
+---
+apiVersion: tekton.dev/v1beta1
+kind: Task
+metadata:
+  name: to-lower
+spec:
+  description: |
+    This task reads and processes a file from the workspace and writes the result
+    both to a file in the workspace and as a Task Result.
+  params:
+  - name: input-path
+    type: string
+  results:
+  - name: message
+    description: Input message in lower case
+  steps:
+  - name: to-lower
+    image: ubuntu
+    script: cat $(workspaces.w.path)/$(params.input-path) | tr '[:upper:]' '[:lower:]' | tee $(workspaces.w.path)/lower $(results.message.path)
+  workspaces:
+  - name: w
+---
+apiVersion: tekton.dev/v1beta1
+kind: Task
+metadata:
+  name: result-reporter
+spec:
+  description: |
+    This task is supposed to mimic a service that posts data from the Pipeline,
+    e.g. to a remote HTTP service or a Slack notification.
+  params:
+  - name: result-to-report
+    type: string
+  steps:
+  - name: report-result
+    image: ubuntu
+    script: echo $(params.result-to-report)
+---
+apiVersion: tekton.dev/v1beta1
+kind: Task
+metadata:
+  name: validator
+spec:
+  steps:
+  - name: validate-upper
+    image: ubuntu
+    script: cat $(workspaces.files.path)/upper | grep HELLO\ TEKTON
+  - name: validate-lower
+    image: ubuntu
+    script: cat $(workspaces.files.path)/lower | grep hello\ tekton
+  workspaces:
+  - name: files
+---
+apiVersion: tekton.dev/v1beta1
+kind: PipelineRun
+metadata:
+  generateName: parallel-pipelinerun-
+spec:
+  params:
+  - name: message
+    value: Hello Tekton
+  pipelineRef:
+    name: parallel-pipeline
+  workspaces:
+  - name: ws
+    volumeClaimTemplate:
+      spec:
+        accessModes:
+        - ReadWriteOnce
+        resources:
+          requests:
+            storage: 1Gi
\ No newline at end of file
diff --git a/pkg/pod/pod.go b/pkg/pod/pod.go
index 98ffae2c62a..a2fbee38f1b 100644
--- a/pkg/pod/pod.go
+++ b/pkg/pod/pod.go
@@ -26,6 +26,7 @@ import (
 	"github.com/tektoncd/pipeline/pkg/names"
 	"github.com/tektoncd/pipeline/pkg/system"
 	"github.com/tektoncd/pipeline/pkg/version"
+	"github.com/tektoncd/pipeline/pkg/workspace"
 	corev1 "k8s.io/api/core/v1"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/runtime/schema"
@@ -217,6 +218,17 @@ func MakePod(images pipeline.Images, taskRun *v1beta1.TaskRun, taskSpec v1beta1.
 		return nil, err
 	}

+	// Using node affinity for TaskRuns that share a PVC workspace, via an Affinity Assistant,
+	// is mutually exclusive with other affinity on TaskRun pods. If other
+	// affinity is wanted, it should be added to the Affinity Assistant pod instead, unless the
+	// assistant is disabled. When the Affinity Assistant is disabled, an affinityAssistantName is not set.
+	var affinity *corev1.Affinity
+	if affinityAssistantName := taskRun.Annotations[workspace.AnnotationAffinityAssistantName]; affinityAssistantName != "" {
+		affinity = nodeAffinityUsingAffinityAssistant(affinityAssistantName)
+	} else {
+		affinity = podTemplate.Affinity
+	}
+
 	mergedPodContainers := stepContainers

 	// Merge sidecar containers with step containers.
@@ -263,7 +275,7 @@ func MakePod(images pipeline.Images, taskRun *v1beta1.TaskRun, taskSpec v1beta1.
 			Volumes:                      volumes,
 			NodeSelector:                 podTemplate.NodeSelector,
 			Tolerations:                  podTemplate.Tolerations,
-			Affinity:                     podTemplate.Affinity,
+			Affinity:                     affinity,
 			SecurityContext:              podTemplate.SecurityContext,
 			RuntimeClassName:             podTemplate.RuntimeClassName,
 			AutomountServiceAccountToken: podTemplate.AutomountServiceAccountToken,
@@ -294,6 +306,25 @@ func MakeLabels(s *v1beta1.TaskRun) map[string]string {
 	return labels
 }

+// nodeAffinityUsingAffinityAssistant achieves Node Affinity for TaskRun pods
+// sharing a PVC workspace by setting PodAffinity so that the TaskRun pods are
+// scheduled to the Node where the Affinity Assistant pod is scheduled.
+func nodeAffinityUsingAffinityAssistant(affinityAssistantName string) *corev1.Affinity {
+	return &corev1.Affinity{
+		PodAffinity: &corev1.PodAffinity{
+			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
+				LabelSelector: &metav1.LabelSelector{
+					MatchLabels: map[string]string{
+						workspace.LabelInstance:  affinityAssistantName,
+						workspace.LabelComponent: workspace.ComponentNameAffinityAssistant,
+					},
+				},
+				TopologyKey: "kubernetes.io/hostname",
+			}},
+		},
+	}
+}
+
 // getLimitRangeMinimum gets all LimitRanges in a namespace and
 // searches for if a container minimum is specified.
Due to // https://github.com/kubernetes/kubernetes/issues/79496, the diff --git a/pkg/pod/pod_test.go b/pkg/pod/pod_test.go index 1a41b6ac942..c94b5abf62a 100644 --- a/pkg/pod/pod_test.go +++ b/pkg/pod/pod_test.go @@ -75,6 +75,7 @@ func TestMakePod(t *testing.T) { for _, c := range []struct { desc string trs v1beta1.TaskRunSpec + trAnnotation map[string]string ts v1beta1.TaskSpec want *corev1.PodSpec wantAnnotations map[string]string @@ -775,7 +776,66 @@ script-heredoc-randomly-generated-78c5n TerminationMessagePath: "/tekton/termination", }}, }, - }} { + }, { + desc: "with a propagated Affinity Assistant name - expect proper affinity", + ts: v1beta1.TaskSpec{ + Steps: []v1beta1.Step{ + { + Container: corev1.Container{ + Name: "name", + Image: "image", + Command: []string{"cmd"}, // avoid entrypoint lookup. + }, + }, + }, + }, + trAnnotation: map[string]string{ + "pipeline.tekton.dev/affinity-assistant": "random-name-123", + }, + trs: v1beta1.TaskRunSpec{ + PodTemplate: &v1beta1.PodTemplate{}, + }, + want: &corev1.PodSpec{ + Affinity: &corev1.Affinity{ + PodAffinity: &corev1.PodAffinity{ + RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{ + LabelSelector: &metav1.LabelSelector{ + MatchLabels: map[string]string{ + "app.kubernetes.io/instance": "random-name-123", + "app.kubernetes.io/component": "affinity-assistant", + }, + }, + TopologyKey: "kubernetes.io/hostname", + }}, + }, + }, + RestartPolicy: corev1.RestartPolicyNever, + InitContainers: []corev1.Container{placeToolsInit}, + HostNetwork: false, + Volumes: append(implicitVolumes, toolsVolume, downwardVolume), + Containers: []corev1.Container{{ + Name: "step-name", + Image: "image", + Command: []string{"/tekton/tools/entrypoint"}, + Args: []string{ + "-wait_file", + "/tekton/downward/ready", + "-wait_file_content", + "-post_file", + "/tekton/tools/0", + "-termination_path", + "/tekton/termination", + "-entrypoint", + "cmd", + "--", + }, + Env: implicitEnvVars, + VolumeMounts: append([]corev1.VolumeMount{toolsMount, downwardMount}, implicitVolumeMounts...), + WorkingDir: pipeline.WorkspaceDir, + Resources: corev1.ResourceRequirements{Requests: allZeroQty()}, + TerminationMessagePath: "/tekton/termination", + }}, + }}} { t.Run(c.desc, func(t *testing.T) { names.TestingSeed() kubeclient := fakek8s.NewSimpleClientset( @@ -800,12 +860,19 @@ script-heredoc-randomly-generated-78c5n }, }, ) + var trAnnotations map[string]string + if c.trAnnotation == nil { + trAnnotations = map[string]string{ + ReleaseAnnotation: ReleaseAnnotationValue, + } + } else { + trAnnotations = c.trAnnotation + trAnnotations[ReleaseAnnotation] = ReleaseAnnotationValue + } tr := &v1beta1.TaskRun{ ObjectMeta: metav1.ObjectMeta{ - Name: "taskrun-name", - Annotations: map[string]string{ - ReleaseAnnotation: ReleaseAnnotationValue, - }, + Name: "taskrun-name", + Annotations: trAnnotations, }, Spec: c.trs, } diff --git a/pkg/reconciler/pipelinerun/affinity_assistant.go b/pkg/reconciler/pipelinerun/affinity_assistant.go new file mode 100644 index 00000000000..f984f3377c1 --- /dev/null +++ b/pkg/reconciler/pipelinerun/affinity_assistant.go @@ -0,0 +1,212 @@ +/* +Copyright 2020 The Tekton Authors + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package pipelinerun
+
+import (
+	"fmt"
+
+	"github.com/tektoncd/pipeline/pkg/apis/pipeline"
+	"github.com/tektoncd/pipeline/pkg/apis/pipeline/v1alpha1"
+	"github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1"
+	"github.com/tektoncd/pipeline/pkg/pod"
+	"github.com/tektoncd/pipeline/pkg/reconciler/volumeclaim"
+	"github.com/tektoncd/pipeline/pkg/system"
+	"github.com/tektoncd/pipeline/pkg/workspace"
+	appsv1 "k8s.io/api/apps/v1"
+	corev1 "k8s.io/api/core/v1"
+	apierrors "k8s.io/apimachinery/pkg/api/errors"
+	"k8s.io/apimachinery/pkg/api/resource"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	errorutils "k8s.io/apimachinery/pkg/util/errors"
+)
+
+const (
+	// ReasonCouldntCreateAffinityAssistantStatefulSet indicates that a PipelineRun uses workspaces with PersistentVolumeClaim
+	// as a volume source and expects an Affinity Assistant StatefulSet, but the StatefulSet couldn't be created.
+	ReasonCouldntCreateAffinityAssistantStatefulSet = "CouldntCreateAffinityAssistantStatefulSet"
+
+	featureFlagDisableAffinityAssistantKey = "disable-affinity-assistant"
+	affinityAssistantStatefulSetNamePrefix = "affinity-assistant-"
+)
+
+// createAffinityAssistants creates an Affinity Assistant StatefulSet for every workspace in the PipelineRun that
+// uses a PersistentVolumeClaim volume. This is done to achieve Node Affinity for all TaskRuns that
+// share the workspace volume and to make it possible for the tasks to execute in parallel while sharing a volume.
+func (c *Reconciler) createAffinityAssistants(wb []v1alpha1.WorkspaceBinding, pr *v1beta1.PipelineRun, namespace string) error { + var errs []error + for _, w := range wb { + if w.PersistentVolumeClaim != nil || w.VolumeClaimTemplate != nil { + affinityAssistantName := getAffinityAssistantName(w.Name, pr.GetOwnerReference()) + affinityAssistantStatefulSetName := affinityAssistantStatefulSetNamePrefix + affinityAssistantName + _, err := c.KubeClientSet.AppsV1().StatefulSets(namespace).Get(affinityAssistantStatefulSetName, metav1.GetOptions{}) + claimName := getClaimName(w, pr.GetOwnerReference()) + switch { + case apierrors.IsNotFound(err): + _, err := c.KubeClientSet.AppsV1().StatefulSets(namespace).Create(affinityAssistantStatefulSet(affinityAssistantName, pr, claimName)) + if err != nil { + errs = append(errs, fmt.Errorf("failed to create StatefulSet %s: %s", affinityAssistantName, err)) + } + if err == nil { + c.Logger.Infof("Created StatefulSet %s in namespace %s", affinityAssistantName, namespace) + } + case err != nil: + errs = append(errs, fmt.Errorf("failed to retrieve StatefulSet %s: %s", affinityAssistantName, err)) + } + } + } + return errorutils.NewAggregate(errs) +} + +func getClaimName(w v1beta1.WorkspaceBinding, ownerReference metav1.OwnerReference) string { + if w.PersistentVolumeClaim != nil { + return w.PersistentVolumeClaim.ClaimName + } else if w.VolumeClaimTemplate != nil { + return volumeclaim.GetPersistentVolumeClaimName(w.VolumeClaimTemplate, w, ownerReference) + } + + return "" +} + +func (c *Reconciler) cleanupAffinityAssistants(pr *v1beta1.PipelineRun) error { + var errs []error + for _, w := range pr.Spec.Workspaces { + if w.PersistentVolumeClaim != nil || w.VolumeClaimTemplate != nil { + affinityAssistantStsName := affinityAssistantStatefulSetNamePrefix + getAffinityAssistantName(w.Name, pr.GetOwnerReference()) + if err := c.KubeClientSet.AppsV1().StatefulSets(pr.Namespace).Delete(affinityAssistantStsName, &metav1.DeleteOptions{}); err != nil { + errs = append(errs, fmt.Errorf("failed to delete StatefulSet %s: %s", affinityAssistantStsName, err)) + } + } + } + return errorutils.NewAggregate(errs) +} + +func getAffinityAssistantName(pipelineWorkspaceName string, owner metav1.OwnerReference) string { + return fmt.Sprintf("%s-%s", pipelineWorkspaceName, owner.Name) +} + +func getStatefulSetLabels(pr *v1beta1.PipelineRun, affinityAssistantName string) map[string]string { + // Propagate labels from PipelineRun to StatefulSet. + labels := make(map[string]string, len(pr.ObjectMeta.Labels)+1) + for key, val := range pr.ObjectMeta.Labels { + labels[key] = val + } + labels[pipeline.GroupName+pipeline.PipelineRunLabelKey] = pr.Name + + // LabelInstance is used to configure PodAffinity for all TaskRuns belonging to this Affinity Assistant + // LabelComponent is used to configure PodAntiAffinity to other Affinity Assistants + labels[workspace.LabelInstance] = affinityAssistantName + labels[workspace.LabelComponent] = workspace.ComponentNameAffinityAssistant + return labels +} + +func affinityAssistantStatefulSet(name string, pr *v1beta1.PipelineRun, claimName string) *appsv1.StatefulSet { + // We want a singleton pod + replicas := int32(1) + + containers := []corev1.Container{{ + Name: "affinity-assistant", + + //TODO(#2640) We may want to create a custom, minimal binary + Image: "nginx", + + // Set requests == limits to get QoS class _Guaranteed_. 
+		// See https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#create-a-pod-that-gets-assigned-a-qos-class-of-guaranteed
+		// The Affinity Assistant pod is a placeholder, so it requests only minimal resources.
+		Resources: corev1.ResourceRequirements{
+			Limits: corev1.ResourceList{
+				"cpu":    resource.MustParse("100m"),
+				"memory": resource.MustParse("100Mi"),
+			},
+			Requests: corev1.ResourceList{
+				"cpu":    resource.MustParse("100m"),
+				"memory": resource.MustParse("100Mi"),
+			},
+		},
+	}}
+
+	// Use PodAntiAffinity to repel other Affinity Assistants.
+	repelOtherAffinityAssistantsPodAffinityTerm := corev1.WeightedPodAffinityTerm{
+		Weight: 100,
+		PodAffinityTerm: corev1.PodAffinityTerm{
+			LabelSelector: &metav1.LabelSelector{
+				MatchLabels: map[string]string{
+					workspace.LabelComponent: workspace.ComponentNameAffinityAssistant,
+				},
+			},
+			TopologyKey: "kubernetes.io/hostname",
+		},
+	}
+
+	return &appsv1.StatefulSet{
+		TypeMeta: metav1.TypeMeta{
+			Kind:       "StatefulSet",
+			APIVersion: "apps/v1",
+		},
+		ObjectMeta: metav1.ObjectMeta{
+			Name:            affinityAssistantStatefulSetNamePrefix + name,
+			Labels:          getStatefulSetLabels(pr, name),
+			OwnerReferences: []metav1.OwnerReference{pr.GetOwnerReference()},
+		},
+		Spec: appsv1.StatefulSetSpec{
+			Replicas: &replicas,
+			Selector: &metav1.LabelSelector{
+				MatchLabels: getStatefulSetLabels(pr, name),
+			},
+			Template: corev1.PodTemplateSpec{
+				ObjectMeta: metav1.ObjectMeta{
+					Labels: getStatefulSetLabels(pr, name),
+				},
+				Spec: corev1.PodSpec{
+					Containers: containers,
+					Affinity: &corev1.Affinity{
+						PodAntiAffinity: &corev1.PodAntiAffinity{
+							PreferredDuringSchedulingIgnoredDuringExecution: []corev1.WeightedPodAffinityTerm{repelOtherAffinityAssistantsPodAffinityTerm},
+						},
+					},
+					Volumes: []corev1.Volume{{
+						Name: "workspace",
+						VolumeSource: corev1.VolumeSource{
+							PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
+
+								// For a Pod mounting a PersistentVolumeClaim whose StorageClass has
+								// volumeBindingMode: Immediate,
+								// the PV is allocated on a Node first, and the Pod then needs to be
+								// scheduled to that node.
+								// To support those PVCs, the Affinity Assistant must also mount the
+								// same PersistentVolumeClaim - to be sure that the Affinity Assistant
+								// pod is scheduled to the same Availability Zone as the PV, when using
+								// a regional cluster. This is called VolumeScheduling.
+								ClaimName: claimName,
+							}},
+					}},
+				},
+			},
+		},
+	}
+}
+
+// disableAffinityAssistant returns a bool indicating whether an Affinity Assistant should
+// be created for each PipelineRun that uses workspaces with PersistentVolumeClaims
+// as a volume source. The default behaviour is to enable the Affinity Assistant to
+// provide Node Affinity for TaskRuns that share a PVC workspace.
+func (c *Reconciler) disableAffinityAssistant() bool { + configMap, err := c.KubeClientSet.CoreV1().ConfigMaps(system.GetNamespace()).Get(pod.GetFeatureFlagsConfigName(), metav1.GetOptions{}) + if err == nil && configMap != nil && configMap.Data != nil && configMap.Data[featureFlagDisableAffinityAssistantKey] == "true" { + return true + } + return false +} diff --git a/pkg/reconciler/pipelinerun/affinity_assistant_test.go b/pkg/reconciler/pipelinerun/affinity_assistant_test.go new file mode 100644 index 00000000000..e3e7b7c374e --- /dev/null +++ b/pkg/reconciler/pipelinerun/affinity_assistant_test.go @@ -0,0 +1,131 @@ +/* +Copyright 2020 The Tekton Authors + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package pipelinerun + +import ( + "fmt" + "testing" + + "github.com/tektoncd/pipeline/pkg/apis/pipeline" + "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1" + "github.com/tektoncd/pipeline/pkg/pod" + "github.com/tektoncd/pipeline/pkg/reconciler" + "github.com/tektoncd/pipeline/pkg/system" + "go.uber.org/zap" + corev1 "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + fakek8s "k8s.io/client-go/kubernetes/fake" +) + +// TestCreateAndDeleteOfAffinityAssistant tests to create and delete an Affinity Assistant +// for a given PipelineRun with a PVC workspace +func TestCreateAndDeleteOfAffinityAssistant(t *testing.T) { + c := Reconciler{ + Base: &reconciler.Base{ + KubeClientSet: fakek8s.NewSimpleClientset(), + Images: pipeline.Images{}, + Logger: zap.NewExample().Sugar(), + }, + } + + workspaceName := "testws" + pipelineRunName := "pipelinerun-1" + testPipelineRun := &v1beta1.PipelineRun{ + TypeMeta: metav1.TypeMeta{Kind: "PipelineRun"}, + ObjectMeta: metav1.ObjectMeta{ + Name: pipelineRunName, + }, + Spec: v1beta1.PipelineRunSpec{ + Workspaces: []v1beta1.WorkspaceBinding{{ + Name: workspaceName, + PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ + ClaimName: "myclaim", + }, + }}, + }, + } + + err := c.createAffinityAssistants(testPipelineRun.Spec.Workspaces, testPipelineRun, testPipelineRun.Namespace) + if err != nil { + t.Errorf("unexpected error from createAffinityAssistants: %v", err) + } + + expectedAffinityAssistantName := affinityAssistantStatefulSetNamePrefix + fmt.Sprintf("%s-%s", workspaceName, pipelineRunName) + _, err = c.KubeClientSet.AppsV1().StatefulSets(testPipelineRun.Namespace).Get(expectedAffinityAssistantName, metav1.GetOptions{}) + if err != nil { + t.Errorf("unexpected error when retrieving StatefulSet: %v", err) + } + + err = c.cleanupAffinityAssistants(testPipelineRun) + if err != nil { + t.Errorf("unexpected error from cleanupAffinityAssistants: %v", err) + } + + _, err = c.KubeClientSet.AppsV1().StatefulSets(testPipelineRun.Namespace).Get(expectedAffinityAssistantName, metav1.GetOptions{}) + if !apierrors.IsNotFound(err) { + t.Errorf("expected a NotFound response, got: %v", err) + } +} + +func TestDisableAffinityAssistant(t *testing.T) { + for _, tc := range []struct { + description string + 
configMap *corev1.ConfigMap + expected bool + }{{ + description: "Default behaviour: A missing disable-affinity-assistant flag should result in false", + configMap: &corev1.ConfigMap{ + ObjectMeta: metav1.ObjectMeta{Name: pod.GetFeatureFlagsConfigName(), Namespace: system.GetNamespace()}, + Data: map[string]string{}, + }, + expected: false, + }, { + description: "Setting disable-affinity-assistant to false should result in false", + configMap: &corev1.ConfigMap{ + ObjectMeta: metav1.ObjectMeta{Name: pod.GetFeatureFlagsConfigName(), Namespace: system.GetNamespace()}, + Data: map[string]string{ + featureFlagDisableAffinityAssistantKey: "false", + }, + }, + expected: false, + }, { + description: "Setting disable-affinity-assistant to true should result in true", + configMap: &corev1.ConfigMap{ + ObjectMeta: metav1.ObjectMeta{Name: pod.GetFeatureFlagsConfigName(), Namespace: system.GetNamespace()}, + Data: map[string]string{ + featureFlagDisableAffinityAssistantKey: "true", + }, + }, + expected: true, + }} { + t.Run(tc.description, func(t *testing.T) { + c := Reconciler{ + Base: &reconciler.Base{ + KubeClientSet: fakek8s.NewSimpleClientset( + tc.configMap, + ), + Images: pipeline.Images{}, + Logger: zap.NewExample().Sugar(), + }, + } + if result := c.disableAffinityAssistant(); result != tc.expected { + t.Errorf("Expected %t Received %t", tc.expected, result) + } + }) + } +} diff --git a/pkg/reconciler/pipelinerun/pipelinerun.go b/pkg/reconciler/pipelinerun/pipelinerun.go index 43d9d621683..ccdd9af648d 100644 --- a/pkg/reconciler/pipelinerun/pipelinerun.go +++ b/pkg/reconciler/pipelinerun/pipelinerun.go @@ -42,6 +42,7 @@ import ( "github.com/tektoncd/pipeline/pkg/reconciler/pipelinerun/resources" "github.com/tektoncd/pipeline/pkg/reconciler/taskrun" "github.com/tektoncd/pipeline/pkg/reconciler/volumeclaim" + "github.com/tektoncd/pipeline/pkg/workspace" "go.uber.org/zap" corev1 "k8s.io/api/core/v1" "k8s.io/apimachinery/pkg/api/equality" @@ -183,6 +184,10 @@ func (c *Reconciler) Reconcile(ctx context.Context, key string) error { c.Logger.Errorf("Failed to delete PVC for PipelineRun %s: %v", pr.Name, err) return err } + if err := c.cleanupAffinityAssistants(pr); err != nil { + c.Logger.Errorf("Failed to delete StatefulSet for PipelineRun %s: %v", pr.Name, err) + return err + } c.timeoutHandler.Release(pr) if err := c.updateTaskRunsStatusDirectly(pr); err != nil { c.Logger.Errorf("Failed to update TaskRun status for PipelineRun %s: %v", pr.Name, err) @@ -552,18 +557,35 @@ func (c *Reconciler) reconcile(ctx context.Context, pr *v1beta1.PipelineRun) err } } - if pipelineState.IsBeforeFirstTaskRun() && pr.HasVolumeClaimTemplate() { - // create workspace PVC from template - if err = c.pvcHandler.CreatePersistentVolumeClaimsForWorkspaces(pr.Spec.Workspaces, pr.GetOwnerReference(), pr.Namespace); err != nil { - c.Logger.Errorf("Failed to create PVC for PipelineRun %s: %v", pr.Name, err) - pr.Status.SetCondition(&apis.Condition{ - Type: apis.ConditionSucceeded, - Status: corev1.ConditionFalse, - Reason: volumeclaim.ReasonCouldntCreateWorkspacePVC, - Message: fmt.Sprintf("Failed to create PVC for PipelineRun %s Workspaces correctly: %s", - fmt.Sprintf("%s/%s", pr.Namespace, pr.Name), err), - }) - return nil + if pipelineState.IsBeforeFirstTaskRun() { + if pr.HasVolumeClaimTemplate() { + // create workspace PVC from template + if err = c.pvcHandler.CreatePersistentVolumeClaimsForWorkspaces(pr.Spec.Workspaces, pr.GetOwnerReference(), pr.Namespace); err != nil { + c.Logger.Errorf("Failed to create PVC for 
PipelineRun %s: %v", pr.Name, err) + pr.Status.SetCondition(&apis.Condition{ + Type: apis.ConditionSucceeded, + Status: corev1.ConditionFalse, + Reason: volumeclaim.ReasonCouldntCreateWorkspacePVC, + Message: fmt.Sprintf("Failed to create PVC for PipelineRun %s Workspaces correctly: %s", + fmt.Sprintf("%s/%s", pr.Namespace, pr.Name), err), + }) + return nil + } + } + + if !c.disableAffinityAssistant() { + // create Affinity Assistant (StatefulSet) so that taskRun pods that share workspace PVC achieve Node Affinity + if err = c.createAffinityAssistants(pr.Spec.Workspaces, pr, pr.Namespace); err != nil { + c.Logger.Errorf("Failed to create affinity assistant StatefulSet for PipelineRun %s: %v", pr.Name, err) + pr.Status.SetCondition(&apis.Condition{ + Type: apis.ConditionSucceeded, + Status: corev1.ConditionFalse, + Reason: ReasonCouldntCreateAffinityAssistantStatefulSet, + Message: fmt.Sprintf("Failed to create StatefulSet for PipelineRun %s correctly: %s", + fmt.Sprintf("%s/%s", pr.Namespace, pr.Name), err), + }) + return nil + } } } @@ -747,6 +769,7 @@ func (c *Reconciler) createTaskRun(rprt *resources.ResolvedPipelineRunTask, pr * tr.Spec.TaskSpec = rprt.ResolvedTaskResources.TaskSpec } + var pipelinePVCWorkspaceName string pipelineRunWorkspaces := make(map[string]v1beta1.WorkspaceBinding) for _, binding := range pr.Spec.Workspaces { pipelineRunWorkspaces[binding.Name] = binding @@ -754,12 +777,19 @@ func (c *Reconciler) createTaskRun(rprt *resources.ResolvedPipelineRunTask, pr * for _, ws := range rprt.PipelineTask.Workspaces { taskWorkspaceName, pipelineTaskSubPath, pipelineWorkspaceName := ws.Name, ws.SubPath, ws.Workspace if b, hasBinding := pipelineRunWorkspaces[pipelineWorkspaceName]; hasBinding { + if b.PersistentVolumeClaim != nil || b.VolumeClaimTemplate != nil { + pipelinePVCWorkspaceName = pipelineWorkspaceName + } tr.Spec.Workspaces = append(tr.Spec.Workspaces, taskWorkspaceByWorkspaceVolumeSource(b, taskWorkspaceName, pipelineTaskSubPath, pr.GetOwnerReference())) } else { return nil, fmt.Errorf("expected workspace %q to be provided by pipelinerun for pipeline task %q", pipelineWorkspaceName, rprt.PipelineTask.Name) } } + if !c.disableAffinityAssistant() && pipelinePVCWorkspaceName != "" { + tr.Annotations[workspace.AnnotationAffinityAssistantName] = getAffinityAssistantName(pipelinePVCWorkspaceName, pr.GetOwnerReference()) + } + resources.WrapSteps(&tr.Spec, rprt.PipelineTask, rprt.ResolvedTaskResources.Inputs, rprt.ResolvedTaskResources.Outputs, storageBasePath) c.Logger.Infof("Creating a new TaskRun object %s", rprt.TaskRunName) return c.PipelineClientSet.TektonV1beta1().TaskRuns(pr.Namespace).Create(tr) diff --git a/pkg/reconciler/pipelinerun/pipelinerun_test.go b/pkg/reconciler/pipelinerun/pipelinerun_test.go index ad5910b7473..ad996419d0e 100644 --- a/pkg/reconciler/pipelinerun/pipelinerun_test.go +++ b/pkg/reconciler/pipelinerun/pipelinerun_test.go @@ -41,6 +41,7 @@ import ( "github.com/tektoncd/pipeline/test/names" "go.uber.org/zap" "go.uber.org/zap/zaptest/observer" + appsv1 "k8s.io/api/apps/v1" corev1 "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/runtime" @@ -1778,6 +1779,119 @@ func ensurePVCCreated(t *testing.T, clients test.Clients, name, namespace string } } +// TestReconcileWithAffinityAssistantStatefulSet tests that given a pipelineRun with workspaces, +// an Affinity Assistant StatefulSet is created for each PVC workspace and +// that the Affinity Assistant names is propagated to TaskRuns. 
+func TestReconcileWithAffinityAssistantStatefulSet(t *testing.T) { + workspaceName := "ws1" + workspaceName2 := "ws2" + emptyDirWorkspace := "emptyDirWorkspace" + pipelineRunName := "test-pipeline-run" + ps := []*v1beta1.Pipeline{tb.Pipeline("test-pipeline", tb.PipelineNamespace("foo"), tb.PipelineSpec( + tb.PipelineTask("hello-world-1", "hello-world", tb.PipelineTaskWorkspaceBinding("taskWorkspaceName", workspaceName, "")), + tb.PipelineTask("hello-world-2", "hello-world", tb.PipelineTaskWorkspaceBinding("taskWorkspaceName", workspaceName2, "")), + tb.PipelineTask("hello-world-3", "hello-world", tb.PipelineTaskWorkspaceBinding("taskWorkspaceName", emptyDirWorkspace, "")), + tb.PipelineWorkspaceDeclaration(workspaceName, workspaceName2, emptyDirWorkspace), + ))} + + prs := []*v1beta1.PipelineRun{tb.PipelineRun(pipelineRunName, tb.PipelineRunNamespace("foo"), + tb.PipelineRunSpec("test-pipeline", + tb.PipelineRunWorkspaceBindingVolumeClaimTemplate(workspaceName, "myclaim", ""), + tb.PipelineRunWorkspaceBindingVolumeClaimTemplate(workspaceName2, "myclaim2", ""), + tb.PipelineRunWorkspaceBindingEmptyDir(emptyDirWorkspace), + )), + } + ts := []*v1beta1.Task{tb.Task("hello-world", tb.TaskNamespace("foo"))} + + d := test.Data{ + PipelineRuns: prs, + Pipelines: ps, + Tasks: ts, + } + + testAssets, cancel := getPipelineRunController(t, d) + defer cancel() + c := testAssets.Controller + clients := testAssets.Clients + + err := c.Reconciler.Reconcile(context.Background(), "foo/test-pipeline-run") + if err != nil { + t.Errorf("Did not expect to see error when reconciling PipelineRun but saw %s", err) + } + + // Check that the PipelineRun was reconciled correctly + reconciledRun, err := clients.Pipeline.TektonV1beta1().PipelineRuns("foo").Get(pipelineRunName, metav1.GetOptions{}) + if err != nil { + t.Fatalf("Somehow had error getting reconciled run out of fake client: %s", err) + } + + // Check that the expected StatefulSet was created + stsNames := make([]string, 0) + for _, a := range clients.Kube.Actions() { + if ca, ok := a.(ktesting.CreateAction); ok { + obj := ca.GetObject() + if sts, ok := obj.(*appsv1.StatefulSet); ok { + stsNames = append(stsNames, sts.Name) + } + } + } + + if len(stsNames) != 2 { + t.Fatalf("expected one StatefulSet created. %d was created", len(stsNames)) + } + + expectedAffinityAssistantName1 := fmt.Sprintf("%s-%s", workspaceName, pipelineRunName) + expectedAffinityAssistantName2 := fmt.Sprintf("%s-%s", workspaceName2, pipelineRunName) + expectedStsName1 := affinityAssistantStatefulSetNamePrefix + expectedAffinityAssistantName1 + expectedStsName2 := affinityAssistantStatefulSetNamePrefix + expectedAffinityAssistantName2 + expectedAffinityAssistantStsNames := make(map[string]bool) + expectedAffinityAssistantStsNames[expectedStsName1] = true + expectedAffinityAssistantStsNames[expectedStsName2] = true + for _, stsName := range stsNames { + _, found := expectedAffinityAssistantStsNames[stsName] + if !found { + t.Errorf("unexpected StatefulSet created, named %s", stsName) + } + } + + taskRuns, err := clients.Pipeline.TektonV1beta1().TaskRuns("foo").List(metav1.ListOptions{}) + if err != nil { + t.Fatalf("unexpected error when listing TaskRuns: %v", err) + } + + if len(taskRuns.Items) != 3 { + t.Errorf("expected two TaskRuns created. 
%d was created", len(taskRuns.Items)) + } + + taskRunsWithPropagatedAffinityAssistantName := 0 + for _, tr := range taskRuns.Items { + for _, ws := range tr.Spec.Workspaces { + propagatedAffinityAssistantName := tr.Annotations["pipeline.tekton.dev/affinity-assistant"] + if ws.PersistentVolumeClaim != nil { + + if propagatedAffinityAssistantName != expectedAffinityAssistantName1 && propagatedAffinityAssistantName != expectedAffinityAssistantName2 { + t.Fatalf("found taskRun with PVC workspace, but with unexpected AffinityAssistantAnnotation value; expected %s or %s, got %s", expectedAffinityAssistantName1, expectedAffinityAssistantName2, propagatedAffinityAssistantName) + } + taskRunsWithPropagatedAffinityAssistantName++ + } + + if ws.PersistentVolumeClaim == nil { + if propagatedAffinityAssistantName != "" { + t.Fatalf("found taskRun workspace that is not PVC workspace, but with unexpected AffinityAssistantAnnotation; expected NO AffinityAssistantAnnotation, got %s", propagatedAffinityAssistantName) + } + } + } + } + + if taskRunsWithPropagatedAffinityAssistantName != 2 { + t.Errorf("expected only one of two TaskRuns to have Affinity Assistant affinity. %d was detected", taskRunsWithPropagatedAffinityAssistantName) + } + + if !reconciledRun.Status.GetCondition(apis.ConditionSucceeded).IsUnknown() { + t.Errorf("Expected PipelineRun to be running, but condition status is %s", reconciledRun.Status.GetCondition(apis.ConditionSucceeded)) + } +} + // TestReconcileWithVolumeClaimTemplateWorkspace tests that given a pipeline with volumeClaimTemplate workspace, // a PVC is created and that the workspace appears as a PersistentVolumeClaim workspace for TaskRuns. func TestReconcileWithVolumeClaimTemplateWorkspace(t *testing.T) { diff --git a/pkg/workspace/affinity_assistant_names.go b/pkg/workspace/affinity_assistant_names.go new file mode 100644 index 00000000000..cbe7464aca9 --- /dev/null +++ b/pkg/workspace/affinity_assistant_names.go @@ -0,0 +1,29 @@ +/* +Copyright 2020 The Tekton Authors + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package workspace + +const ( + // LabelInstance is used in combination with LabelComponent to configure PodAffinity for TaskRun pods + LabelInstance = "app.kubernetes.io/instance" + + // LabelComponent is used to configure PodAntiAffinity to other Affinity Assistants + LabelComponent = "app.kubernetes.io/component" + ComponentNameAffinityAssistant = "affinity-assistant" + + // AnnotationAffinityAssistantName is used to pass the instance name of an Affinity Assistant to TaskRun pods + AnnotationAffinityAssistantName = "pipeline.tekton.dev/affinity-assistant" +)