From 45390967656935f72a6ba04978bd07ea15a929f8 Mon Sep 17 00:00:00 2001 From: John Wilkins Date: Wed, 19 Nov 2025 17:10:21 -0800 Subject: [PATCH] Initial vale compatibility assessment for DITA conversion. Signed-off-by: John Wilkins --- _attributes/common-attributes.adoc | 2 + modules/op-about-finally_tasks.adoc | 52 ++++++---- modules/op-about-pipelinerun.adoc | 38 ++++--- modules/op-about-pipelines.adoc | 64 +++++++----- modules/op-about-podtemplate.adoc | 12 ++- modules/op-about-stepactions.adoc | 10 +- modules/op-about-taskrun.adoc | 40 +++++--- modules/op-about-tasks.adoc | 31 ++++-- modules/op-about-triggers.adoc | 148 ++++++++++++++++----------- modules/op-about-whenexpression.adoc | 49 +++++---- modules/op-about-workspace.adoc | 63 +++++++----- 11 files changed, 311 insertions(+), 198 deletions(-) diff --git a/_attributes/common-attributes.adoc b/_attributes/common-attributes.adoc index 096beb152916..eeb882b2f408 100644 --- a/_attributes/common-attributes.adoc +++ b/_attributes/common-attributes.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: SNIPPET + // The {product-title} attribute provides the context-sensitive name of the relevant OpenShift distribution, for example, "OpenShift Container Platform" or "OKD". The {product-version} attribute provides the product version relative to the distribution, for example "4.9". // {product-title} and {product-version} are parsed when AsciiBinder queries the _distro_map.yml file in relation to the base branch of a pull request. // See https://github.com/openshift/openshift-docs/blob/main/contributing_to_docs/doc_guidelines.adoc#product-name-and-version for more information on this topic. 
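The `:_mod-docs-content-type: SNIPPET` line added above follows the Red Hat modular-docs convention that the rest of this patch applies; this patch uses the `SNIPPET` and `CONCEPT` values, and the other standard values are `PROCEDURE`, `REFERENCE`, and `ASSEMBLY`. As an illustrative sketch only (the filename and id below are invented, not taken from this patch), a module header with the attribute looks like this:

[source,asciidoc]
----
// modules/op-example-concept.adoc (illustrative filename)
:_mod-docs-content-type: CONCEPT

[id="about-example_{context}"]
= Example concept

[role="_abstract"]
A one-sentence abstract of the module.
----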
diff --git a/modules/op-about-finally_tasks.adoc b/modules/op-about-finally_tasks.adoc index bca1241e7707..42d519b05d5c 100644 --- a/modules/op-about-finally_tasks.adoc +++ b/modules/op-about-finally_tasks.adoc @@ -1,48 +1,52 @@ // This module is included in the following assemblies: // * about/understanding-openshift-pipelines.adoc +:_mod-docs-content-type: CONCEPT [id="about-finally_tasks_{context}"] = Finally tasks -The `finally` tasks are the final set of tasks specified using the `finally` field in the pipeline YAML file. A `finally` task always executes the tasks within the pipeline, irrespective of whether the pipeline runs are executed successfully. The `finally` tasks are executed in parallel after all the pipeline tasks are run, before the corresponding pipeline exits. +[role="_abstract"] +You can use `finally` tasks to execute a final set of tasks in your pipeline regardless of whether the previous tasks succeed or fail. These tasks run in parallel after all other pipeline tasks finish, allowing you to perform cleanup or notification actions before the pipeline exits. -You can configure a `finally` task to consume the results of any task within the same pipeline. This approach does not change the order in which this final task is run. It is executed in parallel with other final tasks after all the non-final tasks are executed. +The `finally` tasks are the final set of tasks specified by using the `finally` field in the pipeline YAML file. The `finally` tasks always run, irrespective of whether the pipeline run succeeds. -The following example shows a code snippet of the `clone-cleanup-workspace` pipeline. This code clones the repository into a shared workspace and cleans up the workspace. After executing the pipeline tasks, the `cleanup` task specified in the `finally` section of the pipeline YAML file cleans up the workspace. +You can configure a `finally` task to consume the results of any task within the same pipeline. 
This approach does not change the order in which this final task runs. + +The following example shows a code snippet of the `clone-cleanup-workspace` pipeline. This code clones the repository into a shared workspace and cleans up the workspace. After the pipeline tasks finish, the `cleanup` task specified in the `finally` section of the pipeline YAML file cleans up the workspace. [source,yaml] ---- apiVersion: tekton.dev/v1 kind: Pipeline metadata: - name: clone-cleanup-workspace <1> + name: clone-cleanup-workspace spec: workspaces: - - name: git-source <2> + - name: git-source tasks: - - name: clone-app-repo <3> + - name: clone-app-repo taskRef: name: git-clone-from-catalog params: - name: url - value: https://github.com/tektoncd/community.git + value: https://git.example.com/tektoncd/community.git - name: subdirectory value: application workspaces: - name: output workspace: git-source finally: - - name: cleanup <4> - taskRef: <5> + - name: cleanup + taskRef: name: cleanup-workspace - workspaces: <6> + workspaces: - name: source workspace: git-source - name: check-git-commit - params: <7> + params: - name: commit value: $(tasks.clone-app-repo.results.commit) - taskSpec: <8> + taskSpec: params: - name: commit steps: @@ -53,11 +57,19 @@ spec: exit 1 fi ---- -<1> Unique name of the pipeline. -<2> The shared workspace where the git repository is cloned. -<3> The task to clone the application repository to the shared workspace. -<4> The task to clean-up the shared workspace. -<5> A reference to the task that is to be executed in the task run. -<6> A shared storage volume that a task in a pipeline needs at runtime to receive input or provide output. -<7> A list of parameters required for a task. If a parameter does not have an implicit default value, you must explicitly set its value. -<8> Embedded task definition. + +`metadata.name`:: Unique name of the pipeline. + +`spec.workspaces[0].name`:: The shared workspace where the Git repository is cloned. 
+ +`spec.tasks[0].name`:: The task to clone the application repository to the shared workspace. + +`spec.finally[0].name`:: The task to clean up the shared workspace. + +`spec.finally[0].taskRef`:: A reference to the task that runs in the task run. + +`spec.finally[0].workspaces`:: A shared storage volume that a task in a pipeline needs at runtime to receive input or provide output. + +`spec.finally[1].params`:: A list of parameters required for a task. If a parameter does not have an implicit default value, you must explicitly set its value. + +`spec.finally[1].taskSpec`:: Embedded task definition. \ No newline at end of file diff --git a/modules/op-about-pipelinerun.adoc b/modules/op-about-pipelinerun.adoc index 2005451b10ca..3d4989d4ef1b 100644 --- a/modules/op-about-pipelinerun.adoc +++ b/modules/op-about-pipelinerun.adoc @@ -1,33 +1,37 @@ // This module is included in the following assemblies: // * about/understanding-openshift-pipelines.adoc +:_mod-docs-content-type: CONCEPT [id="about-pipelinerun_{context}"] -= PipelineRun += Pipeline run + +[role="_abstract"] +You can use a `PipelineRun` resource to instantiate and execute a pipeline with specific inputs, outputs, and credentials. This resource binds a pipeline to a workspace and parameter values, enabling you to run your CI/CD workflow for a specific scenario. A `PipelineRun` is a type of resource that binds a pipeline, workspaces, credentials, and a set of parameter values specific to a scenario to run the CI/CD workflow. -A pipeline run is the running instance of a pipeline. It instantiates a pipeline for execution with specific inputs, outputs, and execution parameters on a cluster. It also creates a task run for each task in the pipeline run. +A pipeline run is the running instance of a pipeline. It also creates a task run for each task in the pipeline run. The pipeline runs the tasks sequentially until they are complete or a task fails. 
The `status` field tracks the progress of each task run and stores it for monitoring and auditing purposes. The following example runs the `build-and-deploy` pipeline with relevant resources and parameters: [source,yaml] ---- -apiVersion: tekton.dev/v1 <1> -kind: PipelineRun <2> +apiVersion: tekton.dev/v1 +kind: PipelineRun metadata: - name: build-deploy-api-pipelinerun <3> + name: build-deploy-api-pipelinerun spec: pipelineRef: - name: build-and-deploy <4> - params: <5> + name: build-and-deploy + params: - name: deployment-name value: vote-api - name: git-url value: https://github.com/openshift-pipelines/vote-api.git - name: IMAGE value: image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/vote-api - workspaces: <6> + workspaces: - name: shared-workspace volumeClaimTemplate: spec: @@ -37,9 +41,15 @@ spec: requests: storage: 500Mi ---- -<1> Pipeline run API version `v1`. -<2> The type of Kubernetes object. In this example, `PipelineRun`. -<3> Unique name to identify this pipeline run. -<4> Name of the pipeline to be run. In this example, `build-and-deploy`. -<5> The list of parameters required to run the pipeline. -<6> Workspace used by the pipeline run. + +`apiVersion`:: Pipeline run API version `v1`. + +`kind`:: The type of Kubernetes object. In this example, `PipelineRun`. + +`metadata.name`:: Unique name to identify this pipeline run. + +`spec.pipelineRef.name`:: Name of the pipeline to run. In this example, `build-and-deploy`. + +`spec.params`:: The list of parameters required to run the pipeline. + +`spec.workspaces`:: Workspace used by the pipeline run. 
\ No newline at end of file diff --git a/modules/op-about-pipelines.adoc b/modules/op-about-pipelines.adoc index 9afbd96a26e9..99adff82dd58 100644 --- a/modules/op-about-pipelines.adoc +++ b/modules/op-about-pipelines.adoc @@ -1,25 +1,29 @@ // This module is included in the following assemblies: // * about/understanding-openshift-pipelines.adoc +:_mod-docs-content-type: CONCEPT [id="about-pipelines_{context}"] = Pipelines -A `Pipeline` is a collection of `Task` resources arranged in a specific order of execution. They are executed to construct complex workflows that automate the build, deployment and delivery of applications. You can define a CI/CD workflow for your application using pipelines containing one or more tasks. +[role="_abstract"] +You can use a `Pipeline` resource to arrange a collection of tasks in a specific order of execution. By defining a pipeline, you construct complex workflows that automate the build, deployment, and delivery of your applications. -A `Pipeline` resource definition consists of a number of fields or attributes, which together enable the pipeline to accomplish a specific goal. Each `Pipeline` resource definition must contain at least one `Task` resource, which ingests specific inputs and produces specific outputs. The pipeline definition can also optionally include _Conditions_, _Workspaces_, _Parameters_, or _Resources_ depending on the application requirements. +A `Pipeline` is a collection of `Task` resources arranged in a specific order of execution. You run them to construct complex workflows that automate the build, deployment, and delivery of applications. You can define a CI/CD workflow for your application by using one or more tasks. 
-The following example shows the `build-and-deploy` pipeline, which builds an application image from a Git repository using the `buildah` task provided in the `openshift-pipelines` namespace: +A `Pipeline` resource definition consists of several fields or attributes, which together enable the pipeline to achieve a specific goal. Each `Pipeline` resource definition must contain at least one `Task` resource, which obtains specific inputs and produces specific outputs. The pipeline definition can also optionally include _Conditions_, _Workspaces_, _Parameters_, or _Resources_ depending on the application requirements. + +The following example shows the `build-and-deploy` pipeline, which builds an application image from a Git repository by using the `buildah` task provided in the `openshift-pipelines` namespace: [source,yaml,subs="attributes+"] ---- -apiVersion: tekton.dev/v1 <1> -kind: Pipeline <2> +apiVersion: tekton.dev/v1 +kind: Pipeline metadata: - name: build-and-deploy <3> -spec: <4> - workspaces: <5> + name: build-and-deploy +spec: + workspaces: - name: shared-workspace - params: <6> + params: - name: deployment-name type: string description: name of the deployment to be patched @@ -33,7 +37,7 @@ spec: <4> - name: IMAGE type: string description: image to be built from the code - tasks: <7> + tasks: - name: fetch-repository taskRef: resolver: cluster @@ -56,7 +60,7 @@ spec: <4> value: "true" - name: REVISION value: $(params.git-revision) - - name: build-image <8> + - name: build-image taskRef: resolver: cluster params: @@ -76,13 +80,13 @@ spec: <4> value: $(params.IMAGE) runAfter: - fetch-repository - - name: apply-manifests <9> + - name: apply-manifests taskRef: name: apply-manifests workspaces: - name: source workspace: shared-workspace - runAfter: <10> + runAfter: - build-image - name: update-deployment taskRef: @@ -98,18 +102,28 @@ spec: <4> runAfter: - apply-manifests ---- -<1> Pipeline API version `v1`. -<2> Specifies the type of Kubernetes object. 
In this example, `Pipeline`. -<3> Unique name of this pipeline. -<4> Specifies the definition and structure of the pipeline. -<5> Workspaces used across all the tasks in the pipeline. -<6> Parameters used across all the tasks in the pipeline. -<7> Specifies the list of tasks used in the pipeline. -<8> Task `build-image`, which uses the `buildah` task provided in the `openshift-pipelines` namespace to build application images from a given Git repository. -<9> Task `apply-manifests`, which uses a user-defined task with the same name. -<10> Specifies the sequence in which tasks are run in a pipeline. In this example, the `apply-manifests` task is run only after the `build-image` task is completed. + +`apiVersion`:: Pipeline API version `v1`. + +`kind`:: Specifies the type of Kubernetes object. In this example, `Pipeline`. + +`metadata.name`:: Unique name of this pipeline. + +`spec`:: Specifies the definition and structure of the pipeline. + +`spec.workspaces`:: Workspaces used across all the tasks in the pipeline. + +`spec.params`:: Parameters used across all the tasks in the pipeline. + +`spec.tasks`:: Specifies the list of tasks used in the pipeline. + +`tasks[1].name`:: Task `build-image`, which uses the `buildah` task provided in the `openshift-pipelines` namespace to build application images from a given Git repository. + +`tasks[2].name`:: Task `apply-manifests`, which uses a user-defined task with the same name. + +`tasks[2].runAfter`:: Specifies the sequence in which tasks run in a pipeline. In this example, the `apply-manifests` task runs only after the `build-image` task finishes. [NOTE] ==== -The {pipelines-title} Operator installs the Buildah task in the `openshift-pipelines` namespace and creates the `pipeline` service account with sufficient permission to build and push an image. The Buildah task can fail when associated with a different service account with insufficient permissions. 
-==== +The {pipelines-title} Operator installs the Buildah task in the `openshift-pipelines` namespace and creates the `pipeline` service account with enough permissions to build and push an image. The Buildah task can fail when associated with a different service account with insufficient permissions. +==== \ No newline at end of file diff --git a/modules/op-about-podtemplate.adoc b/modules/op-about-podtemplate.adoc index 46bcd761bd3f..802e4e3b74b2 100644 --- a/modules/op-about-podtemplate.adoc +++ b/modules/op-about-podtemplate.adoc @@ -1,12 +1,16 @@ // This module is included in the following assemblies: // * about/understanding-openshift-pipelines.adoc +:_mod-docs-content-type: CONCEPT [id="about-podtemplate_{context}"] = Pod templates -Optionally, you can define a _pod template_ in a `PipelineRun` or `TaskRun` custom resource (CR). You can use any parameters available for a `Pod` CR in the pod template. When creating pods for executing the pipeline or task, {pipelines-shortname} sets these parameters for every pod. +[role="_abstract"] +You can define a pod template in a `PipelineRun` or `TaskRun` custom resource (CR) to configure the pods that execute your tasks. This allows you to set specific parameters, such as security contexts or user IDs, for every pod created during the pipeline or task run. -For example, you can use a pod template to make the pod execute as a user and not as root. +Optionally, you can define a _pod template_ in a `PipelineRun` or `TaskRun` custom resource (CR). You can use any parameters available for a `Pod` CR in the pod template. When creating pods for running the pipeline or task, {pipelines-shortname} sets these parameters for every pod. + +For example, you can use a pod template to make the pod run as a user and not as root. 
For a pipeline run, you can define a pod template in the `pipelineRunTemplate.podTemplate` spec, as in the following example: @@ -29,7 +33,7 @@ spec: [NOTE] ==== -In the earlier API version `v1beta1`, the pod template for a `PipelineRun` CR was specified as `podTemplate` directly in the `spec:` section. This format is not supported in the `v1` API. +In the earlier API version `v1beta1`, you specified the pod template for a `PipelineRun` CR as `podTemplate` directly in the `spec:` section. This format is not supported in the `v1` API. ==== For a task run, you can define a pod template in the `podTemplate` spec, as in the following example: @@ -50,4 +54,4 @@ spec: securityContext: runAsNonRoot: true runAsUser: 1001 ----- +---- \ No newline at end of file diff --git a/modules/op-about-stepactions.adoc b/modules/op-about-stepactions.adoc index 386a9282c364..c50a7b4aa5b8 100644 @@ -5,9 +5,12 @@ [id="about-stepactions_{context}"] = Step actions +[role="_abstract"] +You can use a `StepAction` custom resource (CR) to define a reusable action that a step performs. By referencing a `StepAction` object from a step, you can share and reuse action definitions across multiple tasks or reference actions from external sources. + A step is a part of a task. If you define a step in a task, you cannot reference this step from another task. -However, you can optionally define a _step action_ in a `StepAction` custom resource (CR). This CR contains the action that a step performs. You can reference a `StepAction` object from a step to create a step that performs the action. +The `StepAction` CR contains the action that a step performs. You can reference a `StepAction` object from a step to create a step that performs the action. 
You can also use resolvers to reference a `StepAction` definition that is available from an external source. The following example shows a `StepAction` CR named `apply-manifests-action`. This step action applies manifests from a source tree to your {OCP} environment: @@ -21,7 +24,7 @@ spec: params: - name: working_dir description: The working directory where the source is located - type: string # <1> + type: string default: "/workspace/source" - name: manifest_dir description: The directory in source that contains yaml manifests @@ -38,7 +41,8 @@ spec: #!/usr/bin/env bash oc apply -f "$MANIFEST_DIR" | tee $(results.output) ---- -<1> The `type` specification for a parameter is optional. + +`spec.params[0].type`:: The `type` specification for a parameter is optional. The `StepAction` CR does not include definitions of workspaces. Instead, the step action expects that the task that includes the action also provides the mounted source tree, typically using a workspace. diff --git a/modules/op-about-taskrun.adoc b/modules/op-about-taskrun.adoc index 760023a95d78..05f2e1b49e8c 100644 --- a/modules/op-about-taskrun.adoc +++ b/modules/op-about-taskrun.adoc @@ -1,34 +1,44 @@ // This module is included in the following assemblies: // * about/understanding-openshift-pipelines.adoc +:_mod-docs-content-type: CONCEPT [id="about-taskrun_{context}"] -= TaskRun += Task run -A `TaskRun` instantiates a task for execution with specific inputs, outputs, and execution parameters on a cluster. It can be invoked on its own or as part of a pipeline run for each task in a pipeline. +[role="_abstract"] +You can use a `TaskRun` resource to instantiate and execute a task with specific inputs, outputs, and execution parameters on a cluster. You can initiate a task run independently or as part of a pipeline run to execute the steps defined in a task. -A task consists of one or more steps that execute container images, and each container image performs a specific piece of build work. 
A task run executes the steps in a task in the specified order, until all steps execute successfully or a failure occurs. A `TaskRun` is automatically created by a `PipelineRun` for each task in a pipeline. +A `TaskRun` instantiates a task to run with specific inputs, outputs, and execution parameters on a cluster. You can start it on its own or as part of a pipeline run for each task in a pipeline. + +A task consists of one or more steps that run container images, and each container image performs a specific piece of build work. A task run starts the steps in a task in the specified order, until all steps run successfully or a failure occurs. A `TaskRun` is automatically created by a `PipelineRun` for each task in a pipeline. The following example shows a task run that runs the `apply-manifests` task with the relevant input parameters: [source,yaml] ---- -apiVersion: tekton.dev/v1 <1> -kind: TaskRun <2> +apiVersion: tekton.dev/v1 +kind: TaskRun metadata: - name: apply-manifests-taskrun <3> -spec: <4> + name: apply-manifests-taskrun +spec: taskRunTemplate: serviceAccountName: pipeline - taskRef: <5> + taskRef: kind: Task name: apply-manifests - workspaces: <6> + workspaces: - name: source persistentVolumeClaim: claimName: source-pvc ---- -<1> The task run API version `v1`. -<2> Specifies the type of Kubernetes object. In this example, `TaskRun`. -<3> Unique name to identify this task run. -<4> Definition of the task run. For this task run, the task and the required workspace are specified. -<5> Name of the task reference used for this task run. This task run executes the `apply-manifests` task. -<6> Workspace used by the task run. + +`apiVersion`:: The task run API version `v1`. + +`kind`:: Specifies the type of Kubernetes object. In this example, `TaskRun`. + +`metadata.name`:: Unique name to identify this task run. + +`spec`:: Definition of the task run. For this task run, you define the task and the required workspace. 
+ +`spec.taskRef`:: Name of the task reference used for this task run. This task run uses the `apply-manifests` task. + +`spec.workspaces`:: Workspace used by the task run. \ No newline at end of file diff --git a/modules/op-about-tasks.adoc b/modules/op-about-tasks.adoc index f6370a5626a6..9c29dab4926e 100644 --- a/modules/op-about-tasks.adoc +++ b/modules/op-about-tasks.adoc @@ -1,10 +1,14 @@ // This module is included in the following assemblies: // * about/understanding-openshift-pipelines.adoc +:_mod-docs-content-type: CONCEPT [id="about-tasks_{context}"] = Tasks -`Task` resources are the building blocks of a pipeline and consist of sequentially executed steps. It is essentially a function of inputs and outputs. A task can run individually or as a part of the pipeline. Tasks are reusable and can be used in multiple pipelines. +[role="_abstract"] +You can use `Task` resources as the building blocks of a pipeline to define a set of sequentially executed steps. Each task functions as a reusable unit of work with specific inputs and outputs, capable of running individually or as part of a larger pipeline. + +`Task` resources are the building blocks of a pipeline and consist of sequentially executed steps. A task is essentially a function of inputs and outputs. A task can run individually or as a part of the pipeline. You can reuse tasks in many pipelines. _Steps_ are a series of commands that are sequentially executed by the task and achieve a specific goal, such as building an image. Every task runs as a pod, and each step runs as a container within that pod. Because steps run within the same pod, they can access the same volumes for caching files, config maps, and secrets. The following example shows the `apply-manifests` task. 
[source,yaml] ---- -apiVersion: tekton.dev/v1 <1> -kind: Task <2> +apiVersion: tekton.dev/v1 +kind: Task metadata: - name: apply-manifests <3> -spec: <4> + name: apply-manifests +spec: workspaces: - name: source params: @@ -35,16 +39,20 @@ spec: <4> oc apply -f $(params.manifest_dir) echo ----------------------------------- ---- -<1> The task API version, `v1`. -<2> The type of Kubernetes object, `Task`. -<3> The unique name of this task. -<4> The list of parameters and steps in the task and the workspace used by the task. -This task starts the pod and runs a container inside that pod using the specified image to run the specified commands. +`apiVersion`:: The task API version, `v1`. + +`kind`:: The type of Kubernetes object, `Task`. + +`metadata.name`:: The unique name of this task. + +`spec`:: The list of parameters and steps in the task and the workspace used by the task. + +This task starts the pod and runs a container inside that pod by using the specified image to run the specified commands. [NOTE] ==== -Starting with {pipelines-shortname} 1.6, the following defaults from the step YAML file are removed: +Starting with {pipelines-shortname} 1.6, the step YAML file no longer has the following defaults: * The `HOME` environment variable does not default to the `/tekton/home` directory * The `workingDir` field does not default to the `/workspace` directory @@ -52,6 +60,7 @@ Starting with {pipelines-shortname} 1.6, the following defaults from the step YA Instead, the container for the step defines the `HOME` environment variable and the `workingDir` field. However, you can override the default values by specifying the custom values in the YAML file for the step. 
As a temporary measure, to maintain backward compatibility with the older {pipelines-shortname} versions, you can set the following fields in the `TektonConfig` custom resource definition to `false`: + ``` spec: pipeline: diff --git a/modules/op-about-triggers.adoc b/modules/op-about-triggers.adoc index e64a0ee593fa..cce09a866404 100644 --- a/modules/op-about-triggers.adoc +++ b/modules/op-about-triggers.adoc @@ -1,28 +1,29 @@ // This module is included in the following assemblies: // * about/understanding-openshift-pipelines.adoc +:_mod-docs-content-type: CONCEPT [id="about-triggers_{context}"] = Triggers -Use _Triggers_ in conjunction with pipelines to create a full-fledged CI/CD system where Kubernetes resources define the entire CI/CD execution. Triggers capture the external events, such as a Git pull request, and process them to extract key pieces of information. Mapping this event data to a set of predefined parameters triggers a series of tasks that can then create and deploy Kubernetes resources and instantiate the pipeline. +[role="_abstract"] +You can use Triggers in conjunction with pipelines to create a comprehensive CI/CD system driven by Kubernetes resources. Triggers capture external events, such as Git pull requests, and process them to extract information, enabling you to automatically instantiate pipelines and deploy resources based on event data. -For example, you define a CI/CD workflow using {pipelines-title} for your application. The pipeline must start for any new changes to take effect in the application repository. Triggers automate this process by capturing and processing any change event and by triggering a pipeline run that deploys the new image with the latest changes. +For example, you define a CI/CD workflow by using {pipelines-title} for your application. The pipeline must start for any new changes to take effect in the application repository. 
Triggers automate this process by capturing and processing any change event and by triggering a pipeline run that deploys the new image with the latest changes. Triggers consist of the following main resources that work together to form a reusable, decoupled, and self-sustaining CI/CD system: -- * The `TriggerBinding` resource extracts the fields from an event payload and stores them as parameters. + -The following example shows a code snippet of the `TriggerBinding` resource, which extracts the Git repository information from the received event payload: +The following example shows a code snippet of the `TriggerBinding` resource, which extracts the Git repository information from the received event payload: + [source,yaml] ---- -apiVersion: triggers.tekton.dev/v1beta1 <1> -kind: TriggerBinding <2> +apiVersion: triggers.tekton.dev/v1beta1 +kind: TriggerBinding metadata: - name: vote-app <3> + name: vote-app spec: - params: <4> + params: - name: git-repo-url value: $(body.repository.url) - name: git-repo-name @@ -30,25 +31,27 @@ spec: - name: git-revision value: $(body.head_commit.id) ---- -+ -<1> The API version of the `TriggerBinding` resource. In this example, `v1beta1`. -<2> Specifies the type of Kubernetes object. In this example, `TriggerBinding`. -<3> Unique name to identify the `TriggerBinding` resource. -<4> List of parameters which will be extracted from the received event payload and passed to the `TriggerTemplate` resource. In this example, the Git repository URL, name, and revision are extracted from the body of the event payload. -* The `TriggerTemplate` resource acts as a standard for the way resources must be created. It specifies the way parameterized data from the `TriggerBinding` resource should be used. -A trigger template receives input from the trigger binding, and then performs a series of actions that results in creation of new pipeline resources, and initiation of a new pipeline run. 
+`apiVersion`:: The API version of the `TriggerBinding` resource. In this example, `v1beta1`. + +`kind`:: Specifies the type of Kubernetes object. In this example, `TriggerBinding`. + +`metadata.name`:: A unique name to identify the `TriggerBinding` resource. + +`spec.params`:: The list of parameters extracted from the received event payload and passed to the `TriggerTemplate` resource. In this example, the `TriggerBinding` resource extracts the Git repository URL, name, and revision from the body of the event payload. + +* The `TriggerTemplate` resource acts as a standard for creating resources. It specifies how the parameterized data from the `TriggerBinding` resource defines the new resources. A trigger template receives input from the trigger binding, and then performs a series of actions that results in the creation of new pipeline resources and the initiation of a new pipeline run. + +The following example shows a code snippet of a `TriggerTemplate` resource, which creates a pipeline run by using the Git repository information received from the `TriggerBinding` resource you just created: + [source,yaml,subs="attributes+"] ---- -apiVersion: triggers.tekton.dev/v1beta1 <1> -kind: TriggerTemplate <2> +apiVersion: triggers.tekton.dev/v1beta1 +kind: TriggerTemplate metadata: - name: vote-app <3> + name: vote-app spec: - params: <4> + params: - name: git-repo-url description: The git repository url - name: git-revision @@ -57,7 +60,7 @@ spec: - name: git-repo-name description: The name of the deployment to be created / patched - resourcetemplates: <5> + resourcetemplates: - apiVersion: tekton.dev/v1 kind: PipelineRun metadata: @@ -86,33 +89,37 @@ spec: requests: storage: 500Mi ---- -+ -<1> The API version of the `TriggerTemplate` resource. In this example, `v1beta1`. 
-<2> Specifies the type of Kubernetes object. In this example, `TriggerTemplate`. -<3> Unique name to identify the `TriggerTemplate` resource. -<4> Parameters supplied by the `TriggerBinding` resource. -<5> List of templates that specify the way resources must be created using the parameters received through the `TriggerBinding` or `EventListener` resources. + +`apiVersion`:: The API version of the `TriggerTemplate` resource. In this example, `v1beta1`. + +`kind`:: Specifies the type of Kubernetes object. In this example, `TriggerTemplate`. + +`metadata.name`:: Unique name to identify the `TriggerTemplate` resource. + +`spec.params`:: Parameters supplied by the `TriggerBinding` resource. + +`resourcetemplates`:: List of templates that specify how you create resources by using the parameters received through the `TriggerBinding` or `EventListener` resources. * The `Trigger` resource combines the `TriggerBinding` and `TriggerTemplate` resources, and optionally, the `interceptors` event processor. + -Interceptors process all the events for a specific platform that runs before the `TriggerBinding` resource. You can use interceptors to filter the payload, verify events, define and test trigger conditions, and implement other useful processing. Interceptors use secret for event verification. After the event data passes through an interceptor, it then goes to the trigger before you pass the payload data to the trigger binding. You can also use an interceptor to modify the behavior of the associated trigger referenced in the `EventListener` specification. -//image::op-triggers.png[] +Interceptors process all the events for a specific platform that runs before the `TriggerBinding` resource. You can use interceptors to filter the payload, verify events, define and test trigger conditions, and implement other useful processing. Interceptors use secret for event verification. 
After the event data passes through an interceptor, it goes to the trigger before the payload data passes to the trigger binding. You can also use an interceptor to change the behavior of the associated trigger referenced in the `EventListener` specification. +//image::op-triggers.png[alt="Diagram showing the flow of an event through the EventListener, Interceptors, Trigger, TriggerBinding, and finally creating a PipelineRun by using the TriggerTemplate.", width="100%"] //It seems like this was removed. It's not in the image directory. + The following example shows a code snippet of a `Trigger` resource, named `vote-trigger` that connects the `TriggerBinding` and `TriggerTemplate` resources, and the `interceptors` event processor. + [source,yaml] ---- -apiVersion: triggers.tekton.dev/v1beta1 <1> -kind: Trigger <2> +apiVersion: triggers.tekton.dev/v1beta1 +kind: Trigger metadata: - name: vote-trigger <3> + name: vote-trigger spec: taskRunTemplate: - serviceAccountName: pipeline <4> + serviceAccountName: pipeline interceptors: - ref: - name: "github" <5> - params: <6> + name: "github" + params: - name: "secretRef" value: secretName: github-secret @@ -120,49 +127,66 @@ spec: - name: "eventTypes" value: ["push"] bindings: - - ref: vote-app <7> - template: <8> + - ref: vote-app + template: ref: vote-app --- apiVersion: v1 -kind: Secret <9> +kind: Secret metadata: name: github-secret type: Opaque stringData: secretToken: "1234567" ---- -+ -<1> The API version of the `Trigger` resource. In this example, `v1beta1`. -<2> Specifies the type of Kubernetes object. In this example, `Trigger`. -<3> Unique name to identify the `Trigger` resource. -<4> Service account name to be used. -<5> Interceptor name to be referenced. In this example, `github`. -<6> Desired parameters to be specified. -<7> Name of the `TriggerBinding` resource to be connected to the `TriggerTemplate` resource.
-<8> Name of the `TriggerTemplate` resource to be connected to the `TriggerBinding` resource. -<9> Secret to be used to verify events. - -* The `EventListener` resource provides an endpoint, or an event sink, that listens for incoming HTTP-based events with a JSON payload. It extracts event parameters from each `TriggerBinding` resource, and then processes this data to create Kubernetes resources as specified by the corresponding `TriggerTemplate` resource. The `EventListener` resource also performs lightweight event processing or basic filtering on the payload using event `interceptors`, which identify the type of payload and optionally modify it. Currently, pipeline triggers support five types of interceptors: _Webhook Interceptors_, _GitHub Interceptors_, _GitLab Interceptors_, _Bitbucket Interceptors_, and _Common Expression Language (CEL) Interceptors_. + +`apiVersion`:: The API version of the `Trigger` resource. In this example, `v1beta1`. + +`kind`:: Specifies the type of Kubernetes object. In this example, `Trigger`. + +`metadata.name`:: A unique name to identify the `Trigger` resource. + +`spec.taskRunTemplate.serviceAccountName`:: The service account name to use. + +`interceptors.ref.name`:: The interceptor name to reference. In this example, `github`. + +`interceptors.params`:: The required parameters to specify. + +`bindings.ref`:: The name of the `TriggerBinding` resource to connect to the `TriggerTemplate` resource. + +`template.ref`:: The name of the `TriggerTemplate` resource to connect to the `TriggerBinding` resource. + +`kind: Secret`:: The secret resource to use to verify events. + +* The `EventListener` resource provides an endpoint, or an event sink, that listens for incoming HTTP-based events with a JSON payload. The `EventListener` resource extracts event parameters from each `TriggerBinding` resource, and then processes this data to create Kubernetes resources as specified by the corresponding `TriggerTemplate` resource.
The `EventListener` resource also performs lightweight event processing or basic filtering on the payload by using event `interceptors`, which identify the type of payload and optionally change it. Currently, pipeline triggers support five types of interceptors: + +** Webhook Interceptors +** GitHub Interceptors +** GitLab Interceptors +** Bitbucket Interceptors +** Common Expression Language (CEL) Interceptors + The following example shows an `EventListener` resource, which references the `Trigger` resource named `vote-trigger`. + [source,yaml] ---- -apiVersion: triggers.tekton.dev/v1beta1 <1> -kind: EventListener <2> +apiVersion: triggers.tekton.dev/v1beta1 +kind: EventListener metadata: - name: vote-app <3> + name: vote-app spec: taskRunTemplate: - serviceAccountName: pipeline <4> + serviceAccountName: pipeline triggers: - - triggerRef: vote-trigger <5> + - triggerRef: vote-trigger ---- -+ -<1> The API version of the `EventListener` resource. In this example, `v1beta1`. -<2> Specifies the type of Kubernetes object. In this example, `EventListener`. -<3> Unique name to identify the `EventListener` resource. -<4> Service account name to be used. -<5> Name of the `Trigger` resource referenced by the `EventListener` resource. --- + +`apiVersion`:: The API version of the `EventListener` resource. In this example, `v1beta1`. + +`kind`:: Specifies the type of Kubernetes object. In this example, `EventListener`. + +`metadata.name`:: A unique name to identify the `EventListener` resource. + +`spec.taskRunTemplate.serviceAccountName`:: The service account name to use. + +`spec.triggers.triggerRef`:: The name of the `Trigger` resource that the `EventListener` resource references.
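+ +You can also chain interceptors within a single `Trigger` resource. The following snippet is a sketch, not part of the `vote-app` example: it assumes that the built-in `cel` cluster interceptor is available in your installation, and the trigger name and branch filter are illustrative placeholders. It extends the GitHub interceptor from the `vote-trigger` example so that the webhook is verified first and a CEL expression then restricts the trigger to pushes on one branch:

```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: Trigger
metadata:
  name: vote-trigger-main-only   # hypothetical name for this variant
spec:
  interceptors:
    - ref:
        name: "github"           # verifies the webhook signature and event type
      params:
        - name: "secretRef"
          value:
            secretName: github-secret
            secretKey: secretToken
        - name: "eventTypes"
          value: ["push"]
    - ref:
        name: "cel"              # assumes the cel cluster interceptor is installed
      params:
        - name: "filter"
          value: "body.ref == 'refs/heads/main'"   # only pushes to main fire the trigger
  bindings:
    - ref: vote-app
  template:
    ref: vote-app
```

Interceptors run in the order listed, so the event is verified before the CEL expression filters it.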
diff --git a/modules/op-about-whenexpression.adoc b/modules/op-about-whenexpression.adoc index 112a452d3b6a..629f9476560a 100644 --- a/modules/op-about-whenexpression.adoc +++ b/modules/op-about-whenexpression.adoc @@ -1,10 +1,14 @@ // This module is included in the following assemblies: // * about/understanding-openshift-pipelines.adoc +:_mod-docs-content-type: CONCEPT [id="about-whenexpression_{context}"] = When expression -When expressions guard task execution by setting criteria for the execution of tasks within a pipeline. They contain a list of components that allows a task to run only when certain criteria are met. When expressions are also supported in the final set of tasks that are specified using the `finally` field in the pipeline YAML file. +[role="_abstract"] +You can use `when` expressions to guard task execution by defining specific criteria that must be met before a task runs. These expressions allow you to control the flow of your pipeline, including the execution of tasks in the `finally` section, based on static inputs, variables, or results from previous tasks. + +When expressions guard task execution by setting criteria for running tasks within a pipeline. They contain a list of components that allow a task to run only when the inputs meet the specified conditions. You can also include when expressions in the final set of tasks that you specify by using the `finally` field in the pipeline YAML file. The key components of a when expression are as follows: @@ -12,21 +16,21 @@ The key components of a when expression are as follows: * `operator`: Specifies the relationship of an input to a set of `values`. Enter `in` or `notin` as your operator values. * `values`: Specifies an array of string values. Enter a non-empty array of static values or variables such as parameters, results, and a bound state of a workspace. -The declared when expressions are evaluated before the task is run.
If the value of a when expression is `True`, the task is run. If the value of a when expression is `False`, the task is skipped. +The declared when expressions evaluate before the task runs. If the when expression evaluates to `True`, the task runs. If the when expression evaluates to `False`, the pipeline skips the task. You can use the when expressions in various use cases. For example, whether: -* The result of a previous task is as expected. -* A file in a Git repository has changed in the previous commits. +* The result of a preceding task is as expected. +* A file in a Git repository has changed in the earlier commits. * An image exists in the registry. * An optional workspace is available. -The following example shows the when expressions for a pipeline run. The pipeline run will execute the `create-file` task only if the following criteria are met: the `path` parameter is `README.md`, and the `echo-file-exists` task executed only if the `exists` result from the `check-file` task is `yes`. +The following example shows the when expressions for a pipeline run. The pipeline run runs the `create-file` task only if the `path` parameter is `README.md`, and runs the `echo-file-exists` task only if the `exists` result from the `check-file` task is `yes`. [source,yaml] ---- apiVersion: tekton.dev/v1 -kind: PipelineRun <1> +kind: PipelineRun metadata: generateName: guarded-pr- spec: @@ -42,7 +46,7 @@ spec: description: | This workspace is shared among all the pipeline tasks to read/write common resources tasks: - - name: create-file <2> + - name: create-file when: - input: "$(params.path)" operator: in @@ -86,7 +90,7 @@ spec: printf no | tee /tekton/results/exists fi - name: echo-file-exists - when: <3> + when: - input: "$(tasks.check-file.results.exists)" operator: in values: ["yes"] @@ -97,7 +101,7 @@ spec: script: 'echo file exists' ...
- name: task-should-be-skipped-1 - when: <4> + when: - input: "$(params.path)" operator: notin values: ["README.md"] @@ -109,7 +113,7 @@ spec: ... finally: - name: finally-task-should-be-executed - when: <5> + when: - input: "$(tasks.echo-file-exists.status)" operator: in values: ["Succeeded"] @@ -140,16 +144,23 @@ spec: requests: storage: 16Mi ---- -<1> Specifies the type of Kubernetes object. In this example, `PipelineRun`. -<2> Task `create-file` used in the pipeline. -<3> `when` expression that specifies to execute the `echo-file-exists` task only if the `exists` result from the `check-file` task is `yes`. -<4> `when` expression that specifies to skip the `task-should-be-skipped-1` task only if the `path` parameter is `README.md`. -<5> `when` expression that specifies to execute the `finally-task-should-be-executed` task only if the execution status of the `echo-file-exists` task and the task status is `Succeeded`, the `exists` result from the `check-file` task is `yes`, and the `path` parameter is `README.md`. + +`kind`:: Specifies the type of Kubernetes object. In this example, `PipelineRun`. + +`tasks[0].name`:: The `create-file` task used in the pipeline. + +`tasks[2].when`:: The `when` expression that specifies to run the `echo-file-exists` task only if the `exists` result from the `check-file` task is `yes`. + +`tasks[3].when`:: The `when` expression that specifies to skip the `task-should-be-skipped-1` task only if the `path` parameter is `README.md`. + +`finally[0].when`:: The `when` expression that specifies to run the `finally-task-should-be-executed` task only if the execution status of the `echo-file-exists` task is `Succeeded`, the `exists` result from the `check-file` task is `yes`, and the `path` parameter is `README.md`.
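+ +In addition to parameters and task results, a when expression can test the bound state of an optional workspace. The following fragment is a sketch under the assumption that the pipeline declares an optional workspace named `cache`; the task and workspace names are hypothetical and not part of the example above:

```yaml
tasks:
  - name: restore-from-cache        # hypothetical task name
    when:
      - input: "$(workspaces.cache.bound)"   # "true" only if the pipeline run bound the optional workspace
        operator: in
        values: ["true"]
    taskRef:
      name: restore-cache           # hypothetical referenced task
    workspaces:
      - name: cache
        workspace: cache
```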
The *Pipeline Run details* page of the {OCP} web console shows the status of the tasks and when expressions as follows: -* All the criteria are met: Tasks and the when expression symbol, which is represented by a diamond shape are green. -* Any one of the criteria are not met: Task is skipped. Skipped tasks and the when expression symbol are grey. -* None of the criteria are met: Task is skipped. Skipped tasks and the when expression symbol are grey. -* Task run fails: Failed tasks and the when expression symbol are red. +* All the criteria meet the specified conditions: Tasks and the when expression symbol, which appears as a diamond shape, appear in a **success** state. + +* Any one of the criteria does not meet the condition: The pipeline skips the task. Skipped tasks and the when expression symbol appear in a **skipped** state. + +* None of the criteria meet the specified conditions: The pipeline skips the task. Skipped tasks and the when expression symbol appear in a **skipped** state. +* The task run fails: Failed tasks and the when expression symbol appear in a **failed** state. \ No newline at end of file diff --git a/modules/op-about-workspace.adoc b/modules/op-about-workspace.adoc index 557f15ca3f20..f8843948319c 100644 --- a/modules/op-about-workspace.adoc +++ b/modules/op-about-workspace.adoc @@ -1,15 +1,19 @@ // This module is included in the following assemblies: // * about/understanding-openshift-pipelines.adoc +:_mod-docs-content-type: CONCEPT [id="about-workspaces_{context}"] = Workspaces +[role="_abstract"] +You can use workspaces to declare shared storage volumes that tasks in a pipeline need for input or output at runtime. By separating volume declaration from runtime storage, workspaces allow you to specify the filesystem location independently, making your tasks reusable and flexible across different environments.
+ [NOTE] ==== -It is recommended that you use workspaces instead of the `PipelineResource` CRs in {pipelines-title}, as `PipelineResource` CRs are difficult to debug, limited in scope, and make tasks less reusable. +Use workspaces instead of the `PipelineResource` CRs in {pipelines-title}, because `PipelineResource` CRs are difficult to debug, limited in scope, and make tasks less reusable. ==== -Workspaces declare shared storage volumes that a task in a pipeline needs at runtime to receive input or provide output. Instead of specifying the actual location of the volumes, workspaces enable you to declare the filesystem or parts of the filesystem that would be required at runtime. A task or pipeline declares the workspace and you must provide the specific location details of the volume. It is then mounted into that workspace in a task run or a pipeline run. This separation of volume declaration from runtime storage volumes makes the tasks reusable, flexible, and independent of the user environment. +Instead of specifying the actual location of the volumes, workspaces let you declare the filesystem or parts of the filesystem that you need at runtime. A task or pipeline declares the workspace, and you must specify the location details of the volume. The volume is then mounted into that workspace in a task run or a pipeline run. This separation of volume declaration from runtime storage volumes makes the tasks reusable, flexible, and independent of the user environment.
With workspaces, you can: @@ -18,16 +22,16 @@ With workspaces, you can: * Use it as a mount point for credentials held in secrets * Use it as a mount point for configurations held in config maps * Use it as a mount point for common tools shared by an organization -* Create a cache of build artifacts that speed up jobs +* Create a cache of build artifacts that accelerates jobs You can specify workspaces in the `TaskRun` or `PipelineRun` using: * A read-only config map or secret * An existing persistent volume claim shared with other tasks -* A persistent volume claim from a provided volume claim template -* An `emptyDir` that is discarded when the task run completes +* A persistent volume claim from a volume claim template +* An `emptyDir` that the system discards when the task run completes -The following example shows a code snippet of the `build-and-deploy` pipeline, which declares a `shared-workspace` workspace for the `build-image` and `apply-manifests` tasks as defined in the pipeline. +The following example shows a code snippet of the `build-and-deploy` pipeline, which declares a `shared-workspace` workspace for the `build-image` and `apply-manifests` tasks you define in the pipeline. [source,yaml] ---- @@ -36,11 +40,11 @@ kind: Pipeline metadata: name: build-and-deploy spec: - workspaces: <1> + workspaces: - name: shared-workspace params: ... - tasks: <2> + tasks: - name: build-image taskRef: resolver: cluster @@ -51,9 +55,9 @@ spec: value: buildah - name: namespace value: openshift-pipelines - workspaces: <3> - - name: source <4> - workspace: shared-workspace <5> + workspaces: + - name: source + workspace: shared-workspace params: - name: TLSVERIFY value: "false" @@ -64,21 +68,27 @@ spec: - name: apply-manifests taskRef: name: apply-manifests - workspaces: <6> + workspaces: - name: source workspace: shared-workspace runAfter: - build-image ... ---- -<1> List of workspaces shared between the tasks defined in the pipeline.
A pipeline can define as many workspaces as required. In this example, only one workspace named `shared-workspace` is declared. -<2> Definition of tasks used in the pipeline. This snippet defines two tasks, `build-image` and `apply-manifests`, which share a common workspace. -<3> List of workspaces used in the `build-image` task. A task definition can include as many workspaces as it requires. However, it is recommended that a task uses at most one writable workspace. -<4> Name that uniquely identifies the workspace used in the task. This task uses one workspace named `source`. -<5> Name of the pipeline workspace used by the task. Note that the workspace `source` in turn uses the pipeline workspace named `shared-workspace`. -<6> List of workspaces used in the `apply-manifests` task. Note that this task shares the `source` workspace with the `build-image` task. -Workspaces help tasks share data, and allow you to specify one or more volumes that each task in the pipeline requires during execution. You can create a persistent volume claim or provide a volume claim template that creates a persistent volume claim for you. +`spec.workspaces`:: List of workspaces shared between the tasks defined in the pipeline. A pipeline can define as many workspaces as you need. In this example, only one workspace named `shared-workspace` is declared. + +`tasks`:: Definition of tasks used in the pipeline. This snippet defines two tasks, `build-image` and `apply-manifests`, which share a common workspace. + +`tasks[0].workspaces`:: List of workspaces used in the `build-image` task. A task definition can include as many workspaces as it requires. However, limit a task to at most one writable workspace. + +`tasks[0].workspaces[0].name`:: Name that uniquely identifies the workspace you use in the task. This task uses one workspace named `source`. + +`tasks[0].workspaces[0].workspace`:: Name of the pipeline workspace used by the task.
Note that the workspace `source` in turn uses the pipeline workspace named `shared-workspace`. + +`tasks[1].workspaces`:: List of workspaces used in the `apply-manifests` task. Note that this task shares the `source` workspace with the `build-image` task. + +Workspaces help tasks share data, and let you specify one or more volumes that each task in the pipeline requires during execution. You can create a persistent volume claim or specify a volume claim template that creates a persistent volume claim for you. The following code snippet of the `build-deploy-api-pipelinerun` pipeline run uses a volume claim template to create a persistent volume claim for defining the storage volume for the `shared-workspace` workspace used in the `build-and-deploy` pipeline. @@ -94,9 +104,9 @@ spec: params: ... - workspaces: <1> - - name: shared-workspace <2> - volumeClaimTemplate: <3> + workspaces: + - name: shared-workspace + volumeClaimTemplate: spec: accessModes: - ReadWriteOnce @@ -104,6 +114,9 @@ spec: requests: storage: 500Mi ---- -<1> Specifies the list of pipeline workspaces for which volume binding will be provided in the pipeline run. -<2> The name of the workspace in the pipeline for which the volume is being provided. -<3> Specifies a volume claim template that creates a persistent volume claim to define the storage volume for the workspace. + +`workspaces`:: Specifies the list of pipeline workspaces for which the pipeline run provides volume binding. + +`workspaces[0].name`:: The name of the workspace in the pipeline for which the volume is provided. + +`workspaces[0].volumeClaimTemplate`:: Specifies a volume claim template that creates a persistent volume claim to define the storage volume for the workspace.
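+ +A volume claim template is not the only way to bind a workspace in a pipeline run. As a sketch of the other volume sources listed earlier (the config map name is a hypothetical placeholder), the same `shared-workspace` entry could instead use an `emptyDir` volume for ephemeral data, or a read-only config map:

```yaml
workspaces:
  - name: shared-workspace
    emptyDir: {}              # ephemeral storage, discarded when the run completes
---
workspaces:
  - name: shared-workspace
    configMap:
      name: build-settings    # hypothetical config map, mounted read-only
```

Use `emptyDir` only for data that does not need to outlive the run; tasks that share data across a pipeline run still need a persistent volume claim or a volume claim template.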