diff --git a/install_config/customizing-configurations-in-the-tektonconfig-cr.adoc b/install_config/customizing-configurations-in-the-tektonconfig-cr.adoc index e6ff8d04f0b8..b337d4b2b144 100644 --- a/install_config/customizing-configurations-in-the-tektonconfig-cr.adoc +++ b/install_config/customizing-configurations-in-the-tektonconfig-cr.adoc @@ -13,9 +13,15 @@ In {pipelines-title}, you can customize the following configurations by using th * Changing the default service account * Disabling the service monitor * Configuring pipeline resolvers +* Configuring pipeline resolver timeouts +* Configuring resolver caching * Disabling pipeline templates * Disabling the integration of {tekton-hub} * Disabling the automatic creation of RBAC resources +* Customizing {tekton-results} deployments +* Configuring fine-grained retention policies for {tekton-results} +* Generating cosign key pairs for {tekton-chains} +* Configuring automatic cancellation for {pac} * Pruning of task runs and pipeline runs [id="prerequisites_customizing-configurations-in-the-tektonconfig-cr"] @@ -41,6 +47,10 @@ include::modules/op-disabling-the-service-monitor.adoc[leveloffset=+1] include::modules/op-configuring-pipeline-resolvers.adoc[leveloffset=+1] +include::modules/op-configuring-pipeline-resolver-timeouts.adoc[leveloffset=+1] + +include::modules/op-configuring-resolver-caching.adoc[leveloffset=+1] + include::modules/op-disabling-pipeline-templates.adoc[leveloffset=+1] include::modules/op-disabling-pipeline-triggers.adoc[leveloffset=+1] @@ -58,6 +68,14 @@ include::modules/op-disabling-inline-spec.adoc[leveloffset=+1] include::modules/op-configuration-rbac-trusted-ca-flags.adoc[leveloffset=+1] +include::modules/op-customizing-tekton-results-deployments.adoc[leveloffset=+1] + +include::modules/op-configuring-tekton-results-retention-policies.adoc[leveloffset=+1] + +include::modules/op-generating-cosign-key-pairs.adoc[leveloffset=+1] + 
+include::modules/op-configuring-pac-cancel-in-progress.adoc[leveloffset=+1] + include::modules/op-automatic-pruning-taskrun-pipelinerun.adoc[leveloffset=+1] include::modules/op-default-pruner-configuration.adoc[leveloffset=+2] diff --git a/modules/op-configuring-pac-cancel-in-progress.adoc b/modules/op-configuring-pac-cancel-in-progress.adoc new file mode 100644 index 000000000000..5e08e0c062b4 --- /dev/null +++ b/modules/op-configuring-pac-cancel-in-progress.adoc @@ -0,0 +1,90 @@ +// Module included in the following assemblies: +// +// * install_config/customizing-configurations-in-the-tektonconfig-cr.adoc + +:_mod-docs-content-type: PROCEDURE +[id="op-configuring-pac-cancel-in-progress_{context}"] += Configuring automatic cancellation for Pipelines as Code + +[role="_abstract"] +You can configure {pac} to automatically cancel in-progress pipeline runs when new commits are pushed to a pull request or branch. This helps conserve resources and ensures that only the most recent code changes are being tested. + +.Prerequisites + +* You have access to an {OCP} cluster with cluster administrator permissions. +* You have installed the {pipelines-title} Operator. +* {pac} is enabled in your {pipelines-shortname} installation. + +.Procedure + +. In your `TektonConfig` custom resource, configure cancel-in-progress settings in the `spec.platforms.openshift.pipelinesAsCode.settings` section: ++ +[source,yaml] +---- +apiVersion: operator.tekton.dev/v1alpha1 +kind: TektonConfig +metadata: + name: config +spec: + platforms: + openshift: + pipelinesAsCode: + enable: true + settings: + enable-cancel-in-progress-on-pull-requests: "true" + enable-cancel-in-progress-on-push: "true" +# ... +---- ++ +where: ++ +`enable-cancel-in-progress-on-pull-requests`:: Specifies whether to automatically cancel in-progress pipeline runs when new commits are pushed to a pull request. Set to `true` to enable automatic cancellation. The default value is `false`. 
+`enable-cancel-in-progress-on-push`:: Specifies whether to automatically cancel in-progress pipeline runs when new commits are pushed to a branch. Set to `true` to enable automatic cancellation. The default value is `false`. + +. Save the changes and exit the editor. + +.Verification + +. Verify that the {pac} configuration is updated: ++ +[source,terminal] +---- +$ oc get configmap pipelines-as-code -n openshift-pipelines -o yaml +---- ++ +[source,yaml] +---- +apiVersion: v1 +data: + enable-cancel-in-progress-on-pull-requests: "true" + enable-cancel-in-progress-on-push: "true" +kind: ConfigMap +# ... +---- + +. Test the configuration by pushing multiple commits to a pull request or branch: +.. Create a pull request or push to a branch that triggers a pipeline run. +.. Before the pipeline run completes, push another commit to the same pull request or branch. +.. Verify that the first pipeline run is automatically canceled: ++ +[source,terminal] +---- +$ oc get pipelinerun -n <namespace> --sort-by=.metadata.creationTimestamp +---- ++ +[source,terminal] +---- +NAME STATUS AGE +pipeline-run-abc Cancelled 5m +pipeline-run-xyz Running 1m +---- + +[IMPORTANT] +==== +Individual `PipelineRun` resources can override these global settings by using the `pipelinesascode.tekton.dev/cancel-in-progress` annotation. If this annotation is present on a `PipelineRun`, it takes precedence over the global `TektonConfig` settings. +==== + +[NOTE] +==== +When cancel-in-progress is enabled, older pipeline runs are canceled as soon as a new commit triggers a new pipeline run. This helps prevent wasting resources on testing outdated code but means that you might not have complete test results for every commit in a pull request.
+==== diff --git a/modules/op-configuring-pipeline-resolver-timeouts.adoc b/modules/op-configuring-pipeline-resolver-timeouts.adoc new file mode 100644 index 000000000000..fea3abf9f949 --- /dev/null +++ b/modules/op-configuring-pipeline-resolver-timeouts.adoc @@ -0,0 +1,79 @@ +// Module included in the following assemblies: +// +// * install_config/customizing-configurations-in-the-tektonconfig-cr.adoc + +:_mod-docs-content-type: PROCEDURE +[id="op-configuring-pipeline-resolver-timeouts_{context}"] += Configuring pipeline resolver timeouts + +[role="_abstract"] +You can configure resolution timeout settings for pipeline resolvers to gain greater flexibility and control when running a pipeline. This enables you to set a global maximum timeout for resolution requests and configure resolver-specific timeouts. + +.Prerequisites + +* You have access to an {OCP} cluster with cluster administrator permissions. +* You have installed the {pipelines-shortname} Operator. + +.Procedure + +. In your `TektonConfig` custom resource, add or update the timeout settings in the `spec.pipeline.options.configMaps` section: ++ +[source,yaml] +---- +apiVersion: operator.tekton.dev/v1alpha1 +kind: TektonConfig +metadata: + name: config +spec: + pipeline: + options: + configMaps: + config-defaults: + data: + default-maximum-resolution-timeout: 5m + bundleresolver-config: + data: + fetch-timeout: 1m +# ... +---- ++ +where: ++ +`default-maximum-resolution-timeout`:: Specifies the global maximum timeout for resolution requests. The default value is `1m`. +`fetch-timeout`:: Specifies the timeout for bundle resolution requests. + +. Save the changes and exit the editor. + +.Verification + +. Verify that the timeout settings are applied: ++ +[source,terminal] +---- +$ oc get configmap config-defaults -n openshift-pipelines -o yaml +---- ++ +[source,terminal] +---- +apiVersion: v1 +data: + default-maximum-resolution-timeout: 5m +kind: ConfigMap +# ... +---- + +. 
Verify the bundle resolver configuration: ++ +[source,terminal] +---- +$ oc get configmap bundleresolver-config -n openshift-pipelines -o yaml +---- ++ +[source,terminal] +---- +apiVersion: v1 +data: + fetch-timeout: 1m +kind: ConfigMap +# ... +---- diff --git a/modules/op-configuring-resolver-caching.adoc b/modules/op-configuring-resolver-caching.adoc new file mode 100644 index 000000000000..9f6e85424b9f --- /dev/null +++ b/modules/op-configuring-resolver-caching.adoc @@ -0,0 +1,133 @@ +// Module included in the following assemblies: +// +// * install_config/customizing-configurations-in-the-tektonconfig-cr.adoc + +:_mod-docs-content-type: PROCEDURE +[id="op-configuring-resolver-caching_{context}"] += Configuring resolver caching + +[role="_abstract"] +You can configure resolver caching for bundle, Git, and cluster resolvers to reduce redundant fetches, minimize external API calls, and improve pipeline execution reliability. Caching is particularly useful when external services impose rate limits or are temporarily unavailable. + +.Prerequisites + +* You have access to an {OCP} cluster with cluster administrator permissions. +* You have installed the {pipelines-title} Operator. + +.Procedure + +. In your `TektonConfig` custom resource, configure global cache settings in the `spec.pipeline.options.configMaps.resolver-cache-config` section: ++ +[source,yaml] +---- +apiVersion: operator.tekton.dev/v1alpha1 +kind: TektonConfig +metadata: + name: config +spec: + pipeline: + options: + configMaps: + resolver-cache-config: + data: + max-size: "1000" + ttl: "5m" +# ... +---- ++ +where: ++ +`max-size`:: Specifies the maximum number of cached entries. The default value is `"1000"`. +`ttl`:: Specifies the time to live (TTL) of cache entries. The default value is `"5m"`. + +. 
Optional: Configure the default caching mode for specific resolvers by adding the `cache` parameter to resolver-specific config maps: ++ +[source,yaml] +---- +apiVersion: operator.tekton.dev/v1alpha1 +kind: TektonConfig +metadata: + name: config +spec: + pipeline: + options: + configMaps: + bundleresolver-config: + data: + cache: "auto" + git-resolver-config: + data: + cache: "auto" + cluster-resolver-config: + data: + cache: "auto" +# ... +---- ++ +where: ++ +`cache`:: Specifies the caching mode for the resolver. Valid values are: ++ +-- +* `auto`: Cache only immutable references, such as specific commit SHAs or image digests (default) +* `always`: Cache all resolved resources regardless of mutability +* `never`: Disable caching entirely +-- + +. Optional: Override the default caching mode for individual pipeline runs or task runs by adding the `cache` parameter to the run specification: ++ +[source,yaml] +---- +apiVersion: tekton.dev/v1 +kind: PipelineRun +metadata: + name: example-pipelinerun +spec: + pipelineRef: + resolver: git + params: + - name: url + value: https://github.com/example/repo.git + - name: revision + value: main + - name: pathInRepo + value: pipeline.yaml + - name: cache + value: "always" +# ... +---- + +. Save the changes and exit the editor. + +.Verification + +. Verify that the resolver cache configuration is applied: ++ +[source,terminal] +---- +$ oc get configmap resolver-cache-config -n openshift-pipelines -o yaml +---- ++ +[source,terminal] +---- +apiVersion: v1 +data: + max-size: "1000" + ttl: "5m" +kind: ConfigMap +# ... +---- + +. Check cache annotations on a resolved resource: ++ +[source,terminal] +---- +$ oc get pipelinerun -o yaml | grep -A 5 "resolution.tekton.dev" +---- ++ +[source,terminal] +---- +annotations: + resolution.tekton.dev/cache-hit: "true" + resolution.tekton.dev/cache-timestamp: "2024-01-15T10:30:00Z" +---- + +[NOTE] +==== +Resolver caching improves reliability by reducing external API calls and latency for frequently accessed resources.
Cache hits, misses, and timestamps are recorded in resource annotations for observability. +==== diff --git a/modules/op-configuring-tekton-results-retention-policies.adoc b/modules/op-configuring-tekton-results-retention-policies.adoc new file mode 100644 index 000000000000..c1c02f8c5f2a --- /dev/null +++ b/modules/op-configuring-tekton-results-retention-policies.adoc @@ -0,0 +1,128 @@ +// Module included in the following assemblies: +// +// * install_config/customizing-configurations-in-the-tektonconfig-cr.adoc + +:_mod-docs-content-type: PROCEDURE +[id="op-configuring-tekton-results-retention-policies_{context}"] += Configuring fine-grained retention policies for Tekton Results + +[role="_abstract"] +You can configure fine-grained retention policies for {tekton-results} to set different retention periods for `PipelineRun` and `TaskRun` results based on namespace, labels, annotations, and execution status. This enables you to retain critical results longer while pruning less important data more aggressively. + +.Prerequisites + +* You have access to an {OCP} cluster with cluster administrator permissions. +* You have installed the {pipelines-title} Operator. +* {tekton-results} is enabled in your {pipelines-shortname} installation. + +.Procedure + +. 
In your `TektonConfig` custom resource, configure retention policies in the `spec.result.options.configMaps.tekton-results-config-results-retention-policy` section: ++ +[source,yaml] +---- +apiVersion: operator.tekton.dev/v1alpha1 +kind: TektonConfig +metadata: + name: config +spec: + result: + disabled: false + options: + configMaps: + tekton-results-config-results-retention-policy: + data: + defaultRetention: "30d" + policies: + - name: "retain-critical-failures-long-term" + selector: + matchNamespaces: + - "production" + - "prod-east" + matchLabels: + "criticality": ["high"] + matchStatuses: + - "Failed" + retention: "180d" + - name: "retain-annotated-for-debug" + selector: + matchAnnotations: + "debug/retain": ["true"] + retention: "14d" + - name: "default-production-policy" + selector: + matchNamespaces: + - "production" + - "prod-east" + retention: "60d" + - name: "short-term-ci-retention" + selector: + matchNamespaces: + - "ci" + retention: "10h" + runAt: "0 2 * * *" +# ... +---- ++ +where: ++ +`defaultRetention`:: Specifies the default retention period for results that do not match any policy. Common values include `30d` for 30 days or `7d` for 7 days. +`policies`:: Defines a list of retention policies. The first matching policy is applied to each result. +`policies.name`:: Specifies a descriptive name for the policy. +`policies.selector`:: Defines the criteria for matching `PipelineRun` and `TaskRun` resources: ++ +-- +* `matchNamespaces`: List of namespace names to match +* `matchLabels`: Key-value pairs of labels to match +* `matchAnnotations`: Key-value pairs of annotations to match +* `matchStatuses`: List of execution statuses to match (for example, `Failed`, `Succeeded`) +-- ++ +`policies.retention`:: Specifies the retention period for matching results. +`runAt`:: Specifies the cron schedule for running the pruning job. The default value is `"0 2 * * *"` (daily at 2:00 AM). + +. Save the changes and exit the editor. + +.Verification + +. 
Verify that the retention policy configuration is applied: ++ +[source,terminal] +---- +$ oc get configmap tekton-results-config-results-retention-policy -n openshift-pipelines -o yaml +---- ++ +[source,yaml] +---- +apiVersion: v1 +data: + defaultRetention: 30d + policies: + - name: retain-critical-failures-long-term + # ... + runAt: 0 2 * * * +kind: ConfigMap +# ... +---- + +. Check the pruning job schedule: ++ +[source,terminal] +---- +$ oc get cronjob -n openshift-pipelines | grep retention +---- ++ +[source,terminal] +---- +tekton-results-retention 0 2 * * * False 0 5d +---- + +[IMPORTANT] +==== +Policies are evaluated in order. The first matching policy determines the retention period for a result. If no policies match, the `defaultRetention` period is used. Structure your policies from most specific to least specific to ensure correct matching behavior. +==== + +[NOTE] +==== +Retention policies are applied to completed `PipelineRun` and `TaskRun` results. Running or pending resources are not affected by retention policies until they reach a terminal state. +==== diff --git a/modules/op-customizing-tekton-results-deployments.adoc b/modules/op-customizing-tekton-results-deployments.adoc new file mode 100644 index 000000000000..c825e7bb7a28 --- /dev/null +++ b/modules/op-customizing-tekton-results-deployments.adoc @@ -0,0 +1,83 @@ +// Module included in the following assemblies: +// +// * install_config/customizing-configurations-in-the-tektonconfig-cr.adoc + +:_mod-docs-content-type: PROCEDURE +[id="op-customizing-tekton-results-deployments_{context}"] += Customizing Tekton Results deployments + +[role="_abstract"] +You can customize {tekton-results} deployments by modifying deployment specifications in the `TektonConfig` custom resource. This enables you to add sidecar containers, modify resource limits, or configure additional deployment settings for {tekton-results} components. 
.Prerequisites + +* You have access to an {OCP} cluster with cluster administrator permissions. +* You have installed the {pipelines-shortname} Operator. +* {tekton-results} is enabled in your {pipelines-shortname} installation. + +.Procedure + +. In your `TektonConfig` custom resource, add or update deployment customizations in the `spec.result.options.deployments` section. ++ +For example: ++ +[source,yaml] +---- +apiVersion: operator.tekton.dev/v1alpha1 +kind: TektonConfig +metadata: + name: config +spec: + result: + options: + deployments: + tekton-results-watcher: + spec: + template: + spec: + containers: + - name: kube-rbac-proxy + args: + - --secure-listen-address=0.0.0.0:8443 + - --upstream=http://127.0.0.1:9090/ + - --logtostderr=true + image: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.12 +# ... +---- ++ +where: ++ +`spec.result.options.deployments`:: Specifies the deployment customizations for {tekton-results} components. +`tekton-results-watcher`:: Specifies the name of the {tekton-results} deployment to customize. Other available deployments include `tekton-results-api` and `tekton-results-postgres`. +`spec.template.spec.containers`:: Specifies additional containers to add to the deployment, such as sidecar containers for monitoring, security, or logging. + +. Save the changes and exit the editor. + +.Verification + +. Verify that the deployment is updated: ++ +[source,terminal] +---- +$ oc get deployment tekton-results-watcher -n openshift-pipelines -o yaml +---- + +. Check that the custom container is running: ++ +[source,terminal] +---- +$ oc get pods -n openshift-pipelines -l app.kubernetes.io/name=tekton-results-watcher +---- ++ +[source,terminal] +---- +NAME READY STATUS RESTARTS AGE +tekton-results-watcher-xxxxxxxxx-xxxxx 2/2 Running 0 5m +---- ++ +The `READY` column shows `2/2`, indicating that both the main container and the custom sidecar container are running.
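The same `spec.result.options.deployments` override mechanism can also be used to adjust compute resources rather than add sidecar containers. The following sketch is illustrative only: it assumes that the `tekton-results-api` deployment exposes a container named `api`, and the resource values are placeholder assumptions, not tested recommendations.

```yaml
# Hedged sketch: tune resource requests and limits for the tekton-results-api
# deployment through the same spec.result.options.deployments override path.
# The container name "api" and all resource values are assumptions; verify the
# container name in your cluster with:
#   oc get deployment tekton-results-api -n openshift-pipelines -o yaml
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  result:
    options:
      deployments:
        tekton-results-api:
          spec:
            template:
              spec:
                containers:
                - name: api
                  resources:
                    requests:
                      cpu: 100m
                      memory: 256Mi
                    limits:
                      cpu: 500m
                      memory: 512Mi
# ...
```

As with the sidecar example, the Operator merges this fragment into the generated deployment when it reconciles the `TektonConfig` custom resource.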
+ +[NOTE] +==== +Customizations to {tekton-results} deployments are applied when the {pipelines-title} Operator reconciles the `TektonConfig` custom resource. The operator automatically updates the deployment and restarts the pods to apply the changes. +==== diff --git a/modules/op-generating-cosign-key-pairs.adoc b/modules/op-generating-cosign-key-pairs.adoc new file mode 100644 index 000000000000..c11567a3c8d5 --- /dev/null +++ b/modules/op-generating-cosign-key-pairs.adoc @@ -0,0 +1,97 @@ +// Module included in the following assemblies: +// +// * install_config/customizing-configurations-in-the-tektonconfig-cr.adoc + +:_mod-docs-content-type: PROCEDURE +[id="op-generating-cosign-key-pairs_{context}"] += Generating cosign key pairs for Tekton Chains + +[role="_abstract"] +You can configure the {pipelines-title} Operator to automatically generate a `cosign` key pair for signing artifacts with {tekton-chains}. The operator generates both a private key (`cosign.key`) and a public key (`cosign.pub`) that can be used for image signing and verification. + +.Prerequisites + +* You have access to an {OCP} cluster with cluster administrator permissions. +* You have installed the {pipelines-title} Operator. + +.Procedure + +. Edit the `TektonConfig` custom resource: ++ +[source,terminal] +---- +$ oc edit tektonconfig config +---- + +. Enable automatic `cosign` key pair generation in the `spec.chain` section: ++ +[source,yaml] +---- +apiVersion: operator.tekton.dev/v1alpha1 +kind: TektonConfig +metadata: + name: config +spec: + chain: + disabled: false + generateSigningSecret: true +# ... +---- ++ +where: ++ +`chain.disabled`:: Specifies whether {tekton-chains} is enabled. Set to `false` to enable {tekton-chains}. +`chain.generateSigningSecret`:: Specifies whether to automatically generate a `cosign` key pair. Set to `true` to generate the key pair. + +. Save the changes and exit the editor. + +.Verification + +. 
Verify that the signing secret is created: ++ +[source,terminal] +---- +$ oc get secret signing-secrets -n openshift-pipelines +---- ++ +.Example output ++ +[source,terminal] +---- +NAME TYPE DATA AGE +signing-secrets Opaque 2 5m +---- + +. Check the contents of the signing secret: ++ +[source,terminal] +---- +$ oc get secret signing-secrets -n openshift-pipelines -o jsonpath='{.data}' | jq +---- ++ +.Example output ++ +[source,json] +---- +{ + "cosign.key": "", + "cosign.pub": "" +} +---- + +. Extract the public key for verification purposes: ++ +[source,terminal] +---- +$ oc get secret signing-secrets -n openshift-pipelines -o jsonpath='{.data.cosign\.pub}' | base64 -d > cosign.pub +---- + +[IMPORTANT] +==== +The generated private key (`cosign.key`) should be treated as sensitive data. Ensure that appropriate access controls are in place to protect the signing secret in the `openshift-pipelines` namespace. +==== + +[NOTE] +==== +If a signing secret already exists when you enable `generateSigningSecret`, the operator does not overwrite the existing secret. To regenerate the key pair, you must first delete the existing `signing-secrets` secret. +==== diff --git a/modules/op-performance-tuning-using-tektonconfig-cr.adoc b/modules/op-performance-tuning-using-tektonconfig-cr.adoc index 0a3ebe260e53..962f5493b87d 100644 --- a/modules/op-performance-tuning-using-tektonconfig-cr.adoc +++ b/modules/op-performance-tuning-using-tektonconfig-cr.adoc @@ -23,6 +23,7 @@ spec: threads-per-controller: 2 kube-api-qps: 5.0 kube-api-burst: 10 + statefulset-ordinals: false ---- All fields are optional. If you set them, the {pipelines-title} Operator includes most of the fields as arguments in the `openshift-pipelines-controller` deployment under the `openshift-pipelines-controller` container. The {pipelines-shortname} Operator also updates the `buckets` field in the `config-leader-election` config map under the `openshift-pipelines` namespace. 
@@ -40,6 +41,20 @@ In HA mode, {pipelines-shortname} uses several pods (replicas) to run these oper HA mode does not affect execution of task runs after creating the pods. +[id="ha-mode-options_{context}"] +== High availability mode options + +The {pipelines-shortname} controller supports two approaches for distributing workload across replicas in HA mode: + +Leader election (default):: Offers failover capabilities but might cause hot-spotting, where one replica handles a disproportionate amount of work. + +StatefulSet ordinals:: Ensures keys are evenly spread across replicas for a more balanced workload distribution. This approach uses StatefulSet pod ordinals to assign work consistently to specific replicas. ++ +-- +:FeatureName: Using StatefulSet ordinals for high availability +include::snippets/technology-preview.adoc[] +-- + .Modifiable fields for tuning {pipelines-shortname} performance [options="header"] |=== @@ -58,6 +73,8 @@ HA mode does not affect execution of task runs after creating the pods. | `kube-api-burst` | The maximum burst for a throttle. | `10` +| `statefulset-ordinals` | Enable StatefulSet ordinals for workload distribution as an alternative to leader election. When enabled, workload is distributed evenly across replicas using pod ordinals. | `false` + |=== [NOTE]