diff --git a/_attributes/common-attributes.adoc b/_attributes/common-attributes.adoc index 3d16c3629cff..92f5751114ff 100644 --- a/_attributes/common-attributes.adoc +++ b/_attributes/common-attributes.adoc @@ -87,7 +87,7 @@ endif::[] :rh-app-icon: image:red-hat-applications-menu-icon.jpg[title="Red Hat applications"] //pipelines :pipelines-title: Red Hat OpenShift Pipelines -:pipelines-shortname: Pipelines +:pipelines-shortname: OpenShift Pipelines :pipelines-ver: pipelines-1.9 :tekton-chains: Tekton Chains :tekton-hub: Tekton Hub diff --git a/cicd/index.adoc b/cicd/index.adoc index b52275e35b4f..e9ed1f0b7af2 100644 --- a/cicd/index.adoc +++ b/cicd/index.adoc @@ -10,7 +10,7 @@ toc::[] {product-title} is an enterprise-ready Kubernetes platform for developers, which enables organizations to automate the application delivery process through DevOps practices, such as continuous integration (CI) and continuous delivery (CD). To meet your organizational needs, the {product-title} provides the following CI/CD solutions: * OpenShift Builds -* OpenShift Pipelines +* {pipelines-shortname} * OpenShift GitOps [id="openshift-builds"] @@ -26,10 +26,10 @@ OpenShift Builds provides the following extensible support for build strategies: For more information, see xref:../cicd/builds/understanding-image-builds.adoc#understanding-image-builds[Understanding image builds] [id="openshift-pipelines"] -== OpenShift Pipelines -OpenShift Pipelines provides a Kubernetes-native CI/CD framework to design and run each step of the CI/CD pipeline in its own container. It can scale independently to meet the on-demand pipelines with predictable outcomes. +== {pipelines-shortname} +{pipelines-shortname} provides a Kubernetes-native CI/CD framework to design and run each step of the CI/CD pipeline in its own container. It can scale independently to serve on-demand pipelines with predictable outcomes. -For more information, see xref:../cicd/pipelines/understanding-openshift-pipelines.adoc#understanding-openshift-pipelines[Understanding OpenShift Pipelines] +For more information, see xref:../cicd/pipelines/understanding-openshift-pipelines.adoc#understanding-openshift-pipelines[Understanding {pipelines-shortname}] [id="openshift-gitops"] == OpenShift GitOps diff --git a/cicd/jenkins/migrating-from-jenkins-to-openshift-pipelines.adoc b/cicd/jenkins/migrating-from-jenkins-to-openshift-pipelines.adoc index 7e721c90425a..7f8242f2ddbc 100644 --- a/cicd/jenkins/migrating-from-jenkins-to-openshift-pipelines.adoc +++ b/cicd/jenkins/migrating-from-jenkins-to-openshift-pipelines.adoc @@ -1,13 +1,13 @@ :_content-type: ASSEMBLY //Jenkins-Tekton-Migration [id="migrating-from-jenkins-to-openshift-pipelines_{context}"] -= Migrating from Jenkins to OpenShift Pipelines or Tekton += Migrating from Jenkins to {pipelines-shortname} or Tekton include::_attributes/common-attributes.adoc[] :context: migrating-from-jenkins-to-openshift-pipelines toc::[] -You can migrate your CI/CD workflows from Jenkins to xref:../../cicd/pipelines/understanding-openshift-pipelines.adoc#understanding-openshift-pipelines[Red Hat OpenShift Pipelines], a cloud-native CI/CD experience based on the Tekton project. +You can migrate your CI/CD workflows from Jenkins to xref:../../cicd/pipelines/understanding-openshift-pipelines.adoc#understanding-openshift-pipelines[{pipelines-title}], a cloud-native CI/CD experience based on the Tekton project.
include::modules/jt-comparison-of-jenkins-and-openshift-pipelines-concepts.adoc[leveloffset=+1] @@ -23,5 +23,5 @@ include::modules/jt-examples-of-common-use-cases.adoc[leveloffset=+1] [role="_additional-resources"] == Additional resources -* xref:../../cicd/pipelines/understanding-openshift-pipelines.adoc#understanding-openshift-pipelines[Understanding OpenShift Pipelines] +* xref:../../cicd/pipelines/understanding-openshift-pipelines.adoc#understanding-openshift-pipelines[Understanding {pipelines-shortname}] * xref:../../authentication/using-rbac.adoc#using-rbac[Role-based Access Control] diff --git a/cicd/pipelines/creating-applications-with-cicd-pipelines.adoc b/cicd/pipelines/creating-applications-with-cicd-pipelines.adoc index 4a4c9c14c3f0..732c493b8662 100644 --- a/cicd/pipelines/creating-applications-with-cicd-pipelines.adoc +++ b/cicd/pipelines/creating-applications-with-cicd-pipelines.adoc @@ -1,6 +1,6 @@ :_content-type: ASSEMBLY [id="creating-applications-with-cicd-pipelines"] -= Creating CI/CD solutions for applications using OpenShift Pipelines += Creating CI/CD solutions for applications using {pipelines-shortname} include::_attributes/common-attributes.adoc[] :context: creating-applications-with-cicd-pipelines @@ -27,8 +27,8 @@ This section uses the `pipelines-tutorial` example to demonstrate the preceding == Prerequisites * You have access to an {product-title} cluster. -* You have installed xref:../../cicd/pipelines/installing-pipelines.adoc#installing-pipelines[OpenShift Pipelines] using the {pipelines-title} Operator listed in the OpenShift OperatorHub. After it is installed, it is applicable to the entire cluster. -* You have installed xref:../../cli_reference/tkn_cli/installing-tkn.adoc#installing-tkn[OpenShift Pipelines CLI]. +* You have installed xref:../../cicd/pipelines/installing-pipelines.adoc#installing-pipelines[{pipelines-shortname}] using the {pipelines-title} Operator listed in the OpenShift OperatorHub. After it is installed, it applies to the entire cluster. +* You have installed the xref:../../cli_reference/tkn_cli/installing-tkn.adoc#installing-tkn[{pipelines-shortname} CLI]. * You have forked the front-end link:https://github.com/openshift/pipelines-vote-ui/tree/{pipelines-ver}[`pipelines-vote-ui`] and back-end link:https://github.com/openshift/pipelines-vote-api/tree/{pipelines-ver}[`pipelines-vote-api`] Git repositories using your GitHub ID, and have administrator access to these repositories. * Optional: You have cloned the link:https://github.com/openshift/pipelines-tutorial/tree/{pipelines-ver}[`pipelines-tutorial`] Git repository. @@ -80,7 +80,7 @@ include::modules/op-validating-pull-requests-using-GitHub-interceptors.adoc[leve [id="pipeline-addtl-resources"] == Additional resources -* To include pipelines as code along with the application source code in the same repository, see xref:../../cicd/pipelines/using-pipelines-as-code.adoc#using-pipelines-as-code[Using Pipelines as code]. +* To include {pac} along with the application source code in the same repository, see xref:../../cicd/pipelines/using-pipelines-as-code.adoc#using-pipelines-as-code[Using {pac}]. * For more details on pipelines in the *Developer* perspective, see the xref:../../cicd/pipelines/working-with-pipelines-using-the-developer-perspective.adoc#working-with-pipelines-using-the-developer-perspective[working with pipelines in the *Developer* perspective] section.
* To learn more about Security Context Constraints (SCCs), see the xref:../../authentication/managing-security-context-constraints.adoc#managing-pod-security-policies[Managing Security Context Constraints] section. * For more examples of reusable tasks, see the link:https://github.com/openshift/pipelines-catalog[OpenShift Catalog] repository. Additionally, you can also see the Tekton Catalog in the Tekton project. diff --git a/cicd/pipelines/installing-pipelines.adoc b/cicd/pipelines/installing-pipelines.adoc index 7035c3ff48e0..9ac241fd15d4 100644 --- a/cicd/pipelines/installing-pipelines.adoc +++ b/cicd/pipelines/installing-pipelines.adoc @@ -1,6 +1,6 @@ :_content-type: ASSEMBLY [id="installing-pipelines"] -= Installing OpenShift Pipelines += Installing {pipelines-shortname} include::_attributes/common-attributes.adoc[] :context: installing-pipelines @@ -15,7 +15,7 @@ This guide walks cluster administrators through the process of installing the {p * You have access to an {product-title} cluster using an account with `cluster-admin` permissions. * You have installed `oc` CLI. -* You have installed xref:../../cli_reference/tkn_cli/installing-tkn.adoc#installing-tkn[OpenShift Pipelines (`tkn`) CLI] on your local system. +* You have installed xref:../../cli_reference/tkn_cli/installing-tkn.adoc#installing-tkn[{pipelines-shortname} (`tkn`) CLI] on your local system. * Your cluster has the xref:../../post_installation_configuration/cluster-capabilities.adoc#cluster-capabilities[Marketplace capability] enabled or the Red Hat Operator catalog source configured manually. ifdef::openshift-origin[] diff --git a/cicd/pipelines/op-release-notes.adoc b/cicd/pipelines/op-release-notes.adoc index 4ef7ca59666b..2a9eba5c1758 100644 --- a/cicd/pipelines/op-release-notes.adoc +++ b/cicd/pipelines/op-release-notes.adoc @@ -16,7 +16,7 @@ toc::[] * Powerful CLI for interacting with pipelines. * Integrated user experience with the *Developer* perspective of the {product-title} web console. -For an overview of {pipelines-title}, see xref:../../cicd/pipelines/understanding-openshift-pipelines.adoc#understanding-openshift-pipelines[Understanding OpenShift Pipelines]. +For an overview of {pipelines-title}, see xref:../../cicd/pipelines/understanding-openshift-pipelines.adoc#understanding-openshift-pipelines[Understanding {pipelines-shortname}]. 
include::modules/op-tkn-pipelines-compatibility-support-matrix.adoc[leveloffset=+1] diff --git a/cicd/pipelines/reducing-pipelines-resource-consumption.adoc b/cicd/pipelines/reducing-pipelines-resource-consumption.adoc index 239f008d36cf..764f7790ac1a 100644 --- a/cicd/pipelines/reducing-pipelines-resource-consumption.adoc +++ b/cicd/pipelines/reducing-pipelines-resource-consumption.adoc @@ -1,6 +1,6 @@ :_content-type: ASSEMBLY [id="reducing-pipelines-resource-consumption"] -= Reducing resource consumption of OpenShift Pipelines += Reducing resource consumption of {pipelines-shortname} include::_attributes/common-attributes.adoc[] :context: reducing-pipelines-resource-consumption @@ -23,7 +23,7 @@ include::modules/op-mitigating-extra-pipeline-resource-consumption.adoc[leveloff [id="additional-resources_reducing-pipelines-resource-consumption"] == Additional resources -* xref:../../cicd/pipelines/setting-compute-resource-quota-for-openshift-pipelines.adoc#setting-compute-resource-quota-for-openshift-pipelines[Setting compute resource quota for OpenShift Pipelines] +* xref:../../cicd/pipelines/setting-compute-resource-quota-for-openshift-pipelines.adoc#setting-compute-resource-quota-for-openshift-pipelines[Setting compute resource quota for {pipelines-shortname}] * xref:../../applications/quotas/quotas-setting-per-project.adoc#quotas-setting-per-project[Resource quotas per project] * xref:../../nodes/clusters/nodes-cluster-limit-ranges.adoc#nodes-cluster-limit-ranges[Restricting resource consumption using limit ranges] * link:https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#resources[Resource requests and limits in Kubernetes] diff --git a/cicd/pipelines/setting-compute-resource-quota-for-openshift-pipelines.adoc b/cicd/pipelines/setting-compute-resource-quota-for-openshift-pipelines.adoc index 7228f2fc53c2..1210c170cc4d 100644 --- a/cicd/pipelines/setting-compute-resource-quota-for-openshift-pipelines.adoc +++ b/cicd/pipelines/setting-compute-resource-quota-for-openshift-pipelines.adoc @@ -1,6 +1,6 @@ :_content-type: ASSEMBLY [id="setting-compute-resource-quota-for-openshift-pipelines"] -= Setting compute resource quota for OpenShift Pipelines += Setting compute resource quota for {pipelines-shortname} include::_attributes/common-attributes.adoc[] :context: setting-compute-resource-quota-for-openshift-pipelines diff --git a/cicd/pipelines/understanding-openshift-pipelines.adoc b/cicd/pipelines/understanding-openshift-pipelines.adoc index 147585c5a43d..b6853196b2c2 100644 --- a/cicd/pipelines/understanding-openshift-pipelines.adoc +++ b/cicd/pipelines/understanding-openshift-pipelines.adoc @@ -1,6 +1,6 @@ :_content-type: ASSEMBLY [id="understanding-openshift-pipelines"] -= Understanding OpenShift Pipelines += Understanding {pipelines-shortname} include::_attributes/common-attributes.adoc[] :context: understanding-openshift-pipelines @@ -20,7 +20,7 @@ toc::[] * You can use the {product-title} Developer console to create Tekton resources, view logs of pipeline runs, and manage pipelines in your {product-title} namespaces. [id="op-detailed-concepts"] -== OpenShift Pipeline Concepts +== {pipelines-shortname} Concepts This guide provides a detailed view of the various pipeline concepts. //About tasks @@ -44,7 +44,7 @@ include::modules/op-about-triggers.adoc[leveloffset=+2] [role="_additional-resources"] == Additional resources -* For information on installing pipelines, see xref:../../cicd/pipelines/installing-pipelines.adoc#installing-pipelines[Installing OpenShift Pipelines]. 
-* For more details on creating custom CI/CD solutions, see xref:../../cicd/pipelines/creating-applications-with-cicd-pipelines.adoc#creating-applications-with-cicd-pipelines[Creating applications with CI/CD Pipelines]. +* For information on installing {pipelines-shortname}, see xref:../../cicd/pipelines/installing-pipelines.adoc#installing-pipelines[Installing {pipelines-shortname}]. +* For more details on creating custom CI/CD solutions, see xref:../../cicd/pipelines/creating-applications-with-cicd-pipelines.adoc#creating-applications-with-cicd-pipelines[Creating CI/CD solutions for applications using {pipelines-shortname}]. * For more details on re-encrypt TLS termination, see link:https://docs.openshift.com/container-platform/3.11/architecture/networking/routes.html#re-encryption-termination[Re-encryption Termination]. * For more details on secured routes, see the xref:../../networking/routes/secured-routes.adoc#secured-routes[Secured routes] section. diff --git a/cicd/pipelines/uninstalling-pipelines.adoc b/cicd/pipelines/uninstalling-pipelines.adoc index afa36d9dd2a4..c3489bd15c38 100644 --- a/cicd/pipelines/uninstalling-pipelines.adoc +++ b/cicd/pipelines/uninstalling-pipelines.adoc @@ -1,6 +1,6 @@ :_content-type: ASSEMBLY [id="uninstalling-pipelines"] -= Uninstalling OpenShift Pipelines += Uninstalling {pipelines-shortname} include::_attributes/common-attributes.adoc[] :context: uninstalling-pipelines diff --git a/cicd/pipelines/using-pipelines-as-code.adoc b/cicd/pipelines/using-pipelines-as-code.adoc index f627e02f8f32..92cc18c27063 100644 --- a/cicd/pipelines/using-pipelines-as-code.adoc +++ b/cicd/pipelines/using-pipelines-as-code.adoc @@ -1,6 +1,6 @@ :_content-type: ASSEMBLY [id="using-pipelines-as-code"] -= Using Pipelines as Code += Using {pac} include::_attributes/common-attributes.adoc[] :context: using-pipelines-as-code @@ -131,7 +131,7 @@ include::modules/op-pipelines-as-code-command-reference.adoc[leveloffset=+1] * link:https://github.com/openshift-pipelines/pipelines-as-code/tree/main/.tekton[An example of the `.tekton/` directory in the Pipelines as Code repository] -* xref:../../cicd/pipelines/installing-pipelines.adoc#installing-pipelines[Installing OpenShift Pipelines] +* xref:../../cicd/pipelines/installing-pipelines.adoc#installing-pipelines[Installing {pipelines-shortname}] * xref:../../cli_reference/tkn_cli/installing-tkn.adoc#installing-tkn[Installing tkn] diff --git a/cicd/pipelines/using-pods-in-a-privileged-security-context.adoc b/cicd/pipelines/using-pods-in-a-privileged-security-context.adoc index 13544851dfda..323b4b627d52 100644 --- a/cicd/pipelines/using-pods-in-a-privileged-security-context.adoc +++ b/cicd/pipelines/using-pods-in-a-privileged-security-context.adoc @@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[] toc::[] -The default configuration of OpenShift Pipelines 1.3.x and later versions does not allow you to run pods with privileged security context, if the pods result from pipeline run or task run. +The default configuration of {pipelines-shortname} 1.3.x and later versions does not allow you to run pods with a privileged security context if the pods result from a pipeline run or a task run. For such pods, the default service account is `pipeline`, and the security context constraint (SCC) associated with the `pipeline` service account is `pipelines-scc`.
The `pipelines-scc` SCC is similar to the `anyuid` SCC, but with minor differences as defined in the YAML file for the SCC of pipelines: .Example `pipelines-scc.yaml` snippet @@ -23,7 +23,7 @@ fsGroup: ... ---- -In addition, the `Buildah` cluster task, shipped as part of the OpenShift Pipelines, uses `vfs` as the default storage driver. +In addition, the `Buildah` cluster task, shipped as part of {pipelines-shortname}, uses `vfs` as the default storage driver. include::modules/op-running-pipeline-and-task-run-pods-with-privileged-security-context.adoc[leveloffset=+1] diff --git a/cicd/pipelines/using-tekton-chains-for-openshift-pipelines-supply-chain-security.adoc b/cicd/pipelines/using-tekton-chains-for-openshift-pipelines-supply-chain-security.adoc index 50cddac5efd3..7719d736c8d1 100644 --- a/cicd/pipelines/using-tekton-chains-for-openshift-pipelines-supply-chain-security.adoc +++ b/cicd/pipelines/using-tekton-chains-for-openshift-pipelines-supply-chain-security.adoc @@ -1,6 +1,6 @@ :_content-type: ASSEMBLY [id="using-tekton-chains-for-openshift-pipelines-supply-chain-security"] -= Using Tekton Chains for OpenShift Pipelines supply chain security += Using Tekton Chains for {pipelines-shortname} supply chain security include::_attributes/common-attributes.adoc[] :context: using-tekton-chains-for-openshift-pipelines-supply-chain-security @@ -45,5 +45,5 @@ include::modules/op-using-tekton-chains-to-sign-and-verify-image-and-provenance. [id="additional-resources-tekton-chains"] == Additional resources -* xref:../../cicd/pipelines/installing-pipelines.adoc#installing-pipelines[Installing OpenShift Pipelines] +* xref:../../cicd/pipelines/installing-pipelines.adoc#installing-pipelines[Installing {pipelines-shortname}] diff --git a/cicd/pipelines/using-tekton-hub-with-openshift-pipelines.adoc b/cicd/pipelines/using-tekton-hub-with-openshift-pipelines.adoc index 8d029ce044b2..1c958a3a9350 100644 --- a/cicd/pipelines/using-tekton-hub-with-openshift-pipelines.adoc +++ b/cicd/pipelines/using-tekton-hub-with-openshift-pipelines.adoc @@ -1,6 +1,6 @@ :_content-type: ASSEMBLY [id="using-tekton-hub-with-openshift-pipelines"] -= Using Tekton Hub with OpenShift Pipelines += Using Tekton Hub with {pipelines-shortname} include::_attributes/common-attributes.adoc[] :context: using-tekton-hub-with-openshift-pipelines @@ -32,6 +32,6 @@ include::modules/op-opting-out-of-tekton-hub-in-the-developer-perspective.adoc[l * GitHub repository of link:https://github.com/tektoncd/hub[Tekton Hub] -* xref:../../cicd/pipelines/installing-pipelines.adoc#installing-pipelines[Installing OpenShift Pipelines] +* xref:../../cicd/pipelines/installing-pipelines.adoc#installing-pipelines[Installing {pipelines-shortname}] * xref:../../cicd/pipelines/op-release-notes.adoc#op-release-notes[{pipelines-title} release notes] \ No newline at end of file diff --git a/cicd/pipelines/working-with-pipelines-using-the-developer-perspective.adoc b/cicd/pipelines/working-with-pipelines-using-the-developer-perspective.adoc index ade6934f8d4f..f81e5c6f689b 100644 --- a/cicd/pipelines/working-with-pipelines-using-the-developer-perspective.adoc +++ b/cicd/pipelines/working-with-pipelines-using-the-developer-perspective.adoc @@ -20,20 +20,14 @@ After you create the pipelines for your application, you can view and visually i == Prerequisites * You have access to an {product-title} cluster and have switched to xref:../../web_console/web-console-overview.adoc#about-developer-perspective_web-console-overview[the *Developer* perspective].
-* You have the xref:../../cicd/pipelines/installing-pipelines.adoc#installing-pipelines[OpenShift Pipelines Operator installed] in your cluster. +* You have the xref:../../cicd/pipelines/installing-pipelines.adoc#installing-pipelines[{pipelines-shortname} Operator installed] in your cluster. * You are a cluster administrator or a user with create and edit permissions. * You have created a project. include::modules/op-constructing-pipelines-using-pipeline-builder.adoc[leveloffset=+1] -== Creating OpenShift Pipelines along with applications - -To create pipelines along with applications, use the *From Git* option in the *Add+* view of the *Developer* perspective. You can view all of your available pipelines and select the pipelines you want to use to create applications while importing your code or deploying an image. - -The Tekton Hub Integration is enabled by default and you can see tasks from the Tekton Hub that are supported by your cluster. Administrators can opt out of the Tekton Hub Integration and the Tekton Hub tasks will no longer be displayed. You can also check whether a webhook URL exists for a generated pipeline. Default webhooks are added for the pipelines that are created using the *+Add* flow and the URL is visible in the side panel of the selected resources in the Topology view. - -For more information, see xref:../../applications/creating_applications/odc-creating-applications-using-developer-perspective.adoc#odc-importing-codebase-from-git-to-create-application_odc-creating-applications-using-developer-perspective[Creating applications using the Developer perspective]. +include::modules/op-creating-pipelines-along-with-applications.adoc[leveloffset=+1] include::modules/odc-adding-a-GitHub-repository-containing-pipelines.adoc[leveloffset=+1] @@ -55,4 +49,4 @@ include::modules/op-deleting-pipelines.adoc[leveloffset=+1] [id="additional-resources-working-with-pipelines-using-the-developer-perspective"] == Additional resources -* xref:../../cicd/pipelines/using-tekton-hub-with-openshift-pipelines.adoc#using-tekton-hub-with-openshift-pipelines[Using Tekton Hub with OpenShift Pipelines] +* xref:../../cicd/pipelines/using-tekton-hub-with-openshift-pipelines.adoc#using-tekton-hub-with-openshift-pipelines[Using Tekton Hub with {pipelines-shortname}] diff --git a/modules/jt-comparison-of-jenkins-and-openshift-pipelines-concepts.adoc b/modules/jt-comparison-of-jenkins-and-openshift-pipelines-concepts.adoc index 709e4a384a63..2a4606905b78 100644 --- a/modules/jt-comparison-of-jenkins-and-openshift-pipelines-concepts.adoc +++ b/modules/jt-comparison-of-jenkins-and-openshift-pipelines-concepts.adoc @@ -4,9 +4,9 @@ :_content-type: CONCEPT [id="jt-comparison-of-jenkins-and-openshift-pipelines-concepts_{context}"] -= Comparison of Jenkins and OpenShift Pipelines concepts += Comparison of Jenkins and {pipelines-shortname} concepts -You can review and compare the following equivalent terms used in Jenkins and OpenShift Pipelines. +You can review and compare the following equivalent terms used in Jenkins and {pipelines-shortname}. == Jenkins terminology Jenkins offers declarative and scripted pipelines that are extensible using shared libraries and plugins. Some basic terms in Jenkins are as follows: @@ -16,8 +16,8 @@ Jenkins offers declarative and scripted pipelines that are extensible using shar * *Stage*: A conceptually distinct subset of tasks performed in a pipeline. Plugins or user interfaces often use this block to display the status or progress of tasks. 
* **Step**: A single task that specifies the exact action to be taken, either by using a command or a script. -== OpenShift Pipelines terminology -OpenShift Pipelines uses link:https://yaml.org/[YAML] syntax for declarative pipelines and consists of tasks. Some basic terms in OpenShift Pipelines are as follows: +== {pipelines-shortname} terminology +{pipelines-shortname} uses link:https://yaml.org/[YAML] syntax for declarative pipelines that consist of tasks. Some basic terms in {pipelines-shortname} are as follows: * **Pipeline**: A set of tasks in a series, in parallel, or both. * **Task**: A sequence of steps as commands, binaries, or scripts. @@ -28,7 +28,7 @@ OpenShift Pipelines uses link:https://yaml.org/[YAML] syntax for declarative pip ==== You can initiate a PipelineRun or a TaskRun with a set of inputs such as parameters and workspaces, and the execution results in a set of outputs and artifacts. ==== -* **Workspace**: In OpenShift Pipelines, workspaces are conceptual blocks that serve the following purposes: +* **Workspace**: In {pipelines-shortname}, workspaces are conceptual blocks that serve the following purposes: ** Storage of inputs, outputs, and build artifacts. @@ -39,16 +39,16 @@ You can initiate a PipelineRun or a TaskRun with a set of inputs such as paramet + [NOTE] ==== -In Jenkins, there is no direct equivalent of OpenShift Pipelines workspaces. You can think of the control node as a workspace, as it stores the cloned code repository, build history, and artifacts. When a job is assigned to a different node, the cloned code and the generated artifacts are stored in that node, but the control node maintains the build history. +In Jenkins, there is no direct equivalent of {pipelines-shortname} workspaces. You can think of the control node as a workspace, as it stores the cloned code repository, build history, and artifacts. When a job is assigned to a different node, the cloned code and the generated artifacts are stored in that node, but the control node maintains the build history. ==== == Mapping of concepts -The building blocks of Jenkins and OpenShift Pipelines are not equivalent, and a specific comparison does not provide a technically accurate mapping. The following terms and concepts in Jenkins and OpenShift Pipelines correlate in general: +The building blocks of Jenkins and {pipelines-shortname} are not equivalent, and a specific comparison does not provide a technically accurate mapping.
The following terms and concepts in Jenkins and {pipelines-shortname} correlate in general: -.Jenkins and OpenShift Pipelines - basic comparison +.Jenkins and {pipelines-shortname} - basic comparison [cols="1,1",options="header"] |=== -|Jenkins|OpenShift Pipelines +|Jenkins|{pipelines-shortname} |Pipeline|Pipeline and PipelineRun |Stage|Task |Step|A step in a task diff --git a/modules/jt-comparison-of-jenkins-openshift-pipelines-execution-models.adoc b/modules/jt-comparison-of-jenkins-openshift-pipelines-execution-models.adoc index 504acca04233..d1641e88d9b2 100644 --- a/modules/jt-comparison-of-jenkins-openshift-pipelines-execution-models.adoc +++ b/modules/jt-comparison-of-jenkins-openshift-pipelines-execution-models.adoc @@ -4,15 +4,15 @@ :_content-type: CONCEPT [id="jt-comparison-of-jenkins-openshift-pipelines-execution-models_{context}"] -= Comparison of Jenkins and OpenShift Pipelines execution models += Comparison of Jenkins and {pipelines-shortname} execution models -Jenkins and OpenShift Pipelines offer similar functions but are different in architecture and execution. +Jenkins and {pipelines-shortname} offer similar functions but are different in architecture and execution. -.Comparison of execution models in Jenkins and OpenShift Pipelines +.Comparison of execution models in Jenkins and {pipelines-shortname} [cols="1,1",options="header"] |=== -|Jenkins|OpenShift Pipelines -|Jenkins has a controller node. Jenkins runs pipelines and steps centrally, or orchestrates jobs running in other nodes.|OpenShift Pipelines is serverless and distributed, and there is no central dependency for execution. -|Containers are launched by the Jenkins controller node through the pipeline.|OpenShift Pipelines adopts a 'container-first' approach, where every step runs as a container in a pod (equivalent to nodes in Jenkins). +|Jenkins|{pipelines-shortname} +|Jenkins has a controller node. Jenkins runs pipelines and steps centrally, or orchestrates jobs running in other nodes.|{pipelines-shortname} is serverless and distributed, and there is no central dependency for execution. +|Containers are launched by the Jenkins controller node through the pipeline.|{pipelines-shortname} adopts a 'container-first' approach, where every step runs as a container in a pod (equivalent to nodes in Jenkins). |Extensibility is achieved by using plugins.|Extensibility is achieved by using tasks in Tekton Hub or by creating custom tasks and scripts. |=== diff --git a/modules/jt-examples-of-common-use-cases.adoc b/modules/jt-examples-of-common-use-cases.adoc index 63a3b2eeac9d..5bfd8bb7d177 100644 --- a/modules/jt-examples-of-common-use-cases.adoc +++ b/modules/jt-examples-of-common-use-cases.adoc @@ -6,15 +6,15 @@ [id="jt-examples-of-common-use-cases_{context}"] = Examples of common use cases -Both Jenkins and OpenShift Pipelines offer capabilities for common CI/CD use cases, such as: +Both Jenkins and {pipelines-shortname} offer capabilities for common CI/CD use cases, such as: * Compiling, building, and deploying images using Apache Maven * Extending the core capabilities by using plugins * Reusing shareable libraries and custom scripts -== Running a Maven pipeline in Jenkins and OpenShift Pipelines +== Running a Maven pipeline in Jenkins and {pipelines-shortname} -You can use Maven in both Jenkins and OpenShift Pipelines workflows for compiling, building, and deploying images. 
To map your existing Jenkins workflow to OpenShift Pipelines, consider the following examples: +You can use Maven in both Jenkins and {pipelines-shortname} workflows for compiling, building, and deploying images. To map your existing Jenkins workflow to {pipelines-shortname}, consider the following examples: .Example: Compile and build an image and deploy it to OpenShift using Maven in Jenkins [source,groovy] ---- node('maven') { @@ -50,7 +50,7 @@ node('maven') { ---- -.Example: Compile and build an image and deploy it to OpenShift using Maven in OpenShift Pipelines. +.Example: Compile and build an image and deploy it to OpenShift using Maven in {pipelines-shortname} [source,yaml] ---- apiVersion: tekton.dev/v1beta1 @@ -151,14 +151,14 @@ spec: ---- -== Extending the core capabilities of Jenkins and OpenShift Pipelines by using plugins +== Extending the core capabilities of Jenkins and {pipelines-shortname} by using plugins Jenkins has the advantage of a large ecosystem of numerous plugins developed over the years by its extensive user base. You can search and browse the plugins in the link:https://plugins.jenkins.io/[Jenkins Plugin Index]. -OpenShift Pipelines also has many tasks developed and contributed by the community and enterprise users. A publicly available catalog of reusable OpenShift Pipelines tasks are available in the link:https://hub.tekton.dev/[Tekton Hub]. +{pipelines-shortname} also has many tasks developed and contributed by the community and enterprise users. A publicly available catalog of reusable {pipelines-shortname} tasks is available in the link:https://hub.tekton.dev/[Tekton Hub]. -In addition, OpenShift Pipelines incorporates many of the plugins of the Jenkins ecosystem within its core capabilities. For example, authorization is a critical function in both Jenkins and OpenShift Pipelines. While Jenkins ensures authorization using the link:https://plugins.jenkins.io/role-strategy/[Role-based Authorization Strategy] plugin, OpenShift Pipelines uses OpenShift's built-in Role-based Access Control system. +In addition, {pipelines-shortname} incorporates many of the plugins of the Jenkins ecosystem within its core capabilities. For example, authorization is a critical function in both Jenkins and {pipelines-shortname}. While Jenkins ensures authorization using the link:https://plugins.jenkins.io/role-strategy/[Role-based Authorization Strategy] plugin, {pipelines-shortname} uses OpenShift's built-in Role-based Access Control system. -== Sharing reusable code in Jenkins and OpenShift Pipelines +== Sharing reusable code in Jenkins and {pipelines-shortname} Jenkins link:https://www.jenkins.io/doc/book/pipeline/shared-libraries/[shared libraries] provide reusable code for parts of Jenkins pipelines. The libraries are shared between link:https://www.jenkins.io/doc/book/pipeline/jenkinsfile/[Jenkinsfiles] to create highly modular pipelines without code repetition. -Although there is no direct equivalent of Jenkins shared libraries in OpenShift Pipelines, you can achieve similar workflows by using tasks from the link:https://hub.tekton.dev/[Tekton Hub] in combination with custom tasks and scripts. +Although there is no direct equivalent of Jenkins shared libraries in {pipelines-shortname}, you can achieve similar workflows by using tasks from the link:https://hub.tekton.dev/[Tekton Hub] in combination with custom tasks and scripts.
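+
+For example, the following pipeline is a minimal sketch of how a Tekton Hub task can take the place of a shared library step. It assumes that the reusable link:https://hub.tekton.dev/tekton/task/git-clone[`git-clone`] task from the Tekton Hub and a hypothetical custom `maven-test` task are already installed in your namespace, and that you replace the repository URL with your own:
+
+[source,yaml]
+----
+apiVersion: tekton.dev/v1beta1
+kind: Pipeline
+metadata:
+  name: clone-and-test
+spec:
+  workspaces:
+    - name: source <1>
+  tasks:
+    - name: fetch-repository
+      taskRef:
+        name: git-clone <2>
+      workspaces:
+        - name: output
+          workspace: source
+      params:
+        - name: url
+          value: https://github.com/example/app.git
+    - name: run-tests
+      taskRef:
+        name: maven-test <3>
+      runAfter:
+        - fetch-repository
+      workspaces:
+        - name: source
+          workspace: source
+----
+<1> The shared workspace that passes the cloned sources between the tasks.
+<2> The reusable `git-clone` task from the Tekton Hub, which clones the repository into its `output` workspace.
+<3> A custom task, for example, one that runs the `maven test` command.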
diff --git a/modules/jt-extending-openshift-pipelines-capabilities-using-custom-tasks-and-scripts.adoc b/modules/jt-extending-openshift-pipelines-capabilities-using-custom-tasks-and-scripts.adoc index 288680d52895..2b9d69b55103 100644 --- a/modules/jt-extending-openshift-pipelines-capabilities-using-custom-tasks-and-scripts.adoc +++ b/modules/jt-extending-openshift-pipelines-capabilities-using-custom-tasks-and-scripts.adoc @@ -4,9 +4,9 @@ :_content-type: PROCEDURE [id="jt-extending-openshift-pipelines-capabilities-using-custom-tasks-and-scripts_{context}"] -= Extending OpenShift Pipelines capabilities using custom tasks and scripts += Extending {pipelines-shortname} capabilities using custom tasks and scripts -In OpenShift Pipelines, if you do not find the right task in Tekton Hub, or need greater control over tasks, you can create custom tasks and scripts to extend the capabilities of OpenShift Pipelines. +In {pipelines-shortname}, if you do not find the right task in Tekton Hub, or need greater control over tasks, you can create custom tasks and scripts to extend the capabilities of {pipelines-shortname}. .Example: A custom task for running the `maven test` command [source,yaml,subs="attributes+"] diff --git a/modules/jt-migrating-a-sample-pipeline-from-jenkins-to-openshift-pipelines.adoc b/modules/jt-migrating-a-sample-pipeline-from-jenkins-to-openshift-pipelines.adoc index ab60d774089a..a998bda2f4dc 100644 --- a/modules/jt-migrating-a-sample-pipeline-from-jenkins-to-openshift-pipelines.adoc +++ b/modules/jt-migrating-a-sample-pipeline-from-jenkins-to-openshift-pipelines.adoc @@ -4,9 +4,9 @@ :_content-type: PROCEDURE [id="jt-migrating-a-sample-pipeline-from-jenkins-to-openshift-pipelines_{context}"] -= Migrating a sample pipeline from Jenkins to OpenShift Pipelines += Migrating a sample pipeline from Jenkins to {pipelines-shortname} -You can use the following equivalent examples to help migrate your build, test, and deploy pipelines from Jenkins to OpenShift Pipelines. +You can use the following equivalent examples to help migrate your build, test, and deploy pipelines from Jenkins to {pipelines-shortname}. 
== Jenkins pipeline Consider a Jenkins pipeline written in Groovy for building, testing, and deploying: @@ -35,9 +35,9 @@ pipeline { } ---- -== OpenShift Pipelines pipeline +== {pipelines-shortname} pipeline -To create a pipeline in OpenShift Pipelines that is equivalent to the preceding Jenkins pipeline, you create the following three tasks: +To create a pipeline in {pipelines-shortname} that is equivalent to the preceding Jenkins pipeline, you create the following three tasks: .Example `build` task YAML definition file [source,yaml,subs="attributes+"] @@ -92,9 +92,9 @@ spec: workingDir: $(workspaces.source.path) ---- -You can combine the three tasks sequentially to form a pipeline in OpenShift Pipelines: +You can combine the three tasks sequentially to form a pipeline in {pipelines-shortname}: -.Example: OpenShift Pipelines pipeline for building, testing, and deployment +.Example: {pipelines-shortname} pipeline for building, testing, and deployment [source,yaml,subs="attributes+"] ---- apiVersion: tekton.dev/v1beta1 diff --git a/modules/jt-migrating-from-jenkins-plugins-to-openshift-pipelines-hub-tasks.adoc b/modules/jt-migrating-from-jenkins-plugins-to-openshift-pipelines-hub-tasks.adoc index d7e63e938ece..ac78bcabe8cc 100644 --- a/modules/jt-migrating-from-jenkins-plugins-to-openshift-pipelines-hub-tasks.adoc +++ b/modules/jt-migrating-from-jenkins-plugins-to-openshift-pipelines-hub-tasks.adoc @@ -7,7 +7,7 @@ [id="jt-migrating-from-jenkins-plugins-to-openshift-pipelines-hub-tasks_{context}"] = Migrating from Jenkins plugins to Tekton Hub tasks -You can extend the capability of Jenkins by using link:https://plugins.jenkinsci.org[plugins]. To achieve similar extensibility in OpenShift Pipelines, use any of the tasks available from link:https://hub.tekton.dev[Tekton Hub]. +You can extend the capability of Jenkins by using link:https://plugins.jenkinsci.org[plugins]. To achieve similar extensibility in {pipelines-shortname}, use any of the tasks available from link:https://hub.tekton.dev[Tekton Hub]. For example, consider the link:https://hub.tekton.dev/tekton/task/git-clone[git-clone] task in Tekton Hub, which corresponds to the link:https://plugins.jenkins.io/git/[git plugin] for Jenkins. diff --git a/modules/odc-adding-a-GitHub-repository-containing-pipelines.adoc b/modules/odc-adding-a-GitHub-repository-containing-pipelines.adoc index ae745c851d9b..06b183484aab 100644 --- a/modules/odc-adding-a-GitHub-repository-containing-pipelines.adoc +++ b/modules/odc-adding-a-GitHub-repository-containing-pipelines.adoc @@ -16,7 +16,7 @@ You can add both public and private GitHub repositories. .Procedure . In the developer perspective, choose the namespace or project in which you want to add your GitHub repository. . Navigate to *Pipelines* using the left navigation pane. -. Click *Create* -> *Repository* on the right side of the Pipelines page. +. Click *Create* -> *Repository* on the right side of the *Pipelines* page. . Enter your *Git Repo URL* and the console automatically fetches the repository name. . Click *Show configuration options*. By default, you see only one option *Setup a webhook*. If you have a GitHub application configured, you see two options: * *Use GitHub App*: Select this option to install your GitHub application in your repository. 
diff --git a/modules/op-about-finally_tasks.adoc b/modules/op-about-finally_tasks.adoc index dfb228c158dc..badfe35581ce 100644 --- a/modules/op-about-finally_tasks.adoc +++ b/modules/op-about-finally_tasks.adoc @@ -54,11 +54,11 @@ spec: exit 1 fi ---- -<1> Unique name of the Pipeline. +<1> Unique name of the pipeline. <2> The shared workspace where the git repository is cloned. <3> The task to clone the application repository to the shared workspace. <4> The task to clean-up the shared workspace. -<5> A reference to the task that is to be executed in the TaskRun. -<6> A shared storage volume that a Task in a Pipeline needs at runtime to receive input or provide output. +<5> A reference to the task that is to be executed in the task run. +<6> A shared storage volume that a task in a pipeline needs at runtime to receive input or provide output. <7> A list of parameters required for a task. If a parameter does not have an implicit default value, you must explicitly set its value. <8> Embedded task definition. diff --git a/modules/op-about-pipelinerun.adoc b/modules/op-about-pipelinerun.adoc index 06769abe193d..0327b3092648 100644 --- a/modules/op-about-pipelinerun.adoc +++ b/modules/op-about-pipelinerun.adoc @@ -7,7 +7,7 @@ A `PipelineRun` is a type of resource that binds a pipeline, workspaces, credentials, and a set of parameter values specific to a scenario to run the CI/CD workflow. -A _pipeline run_ is the running instance of a pipeline. It instantiates a pipeline for execution with specific inputs, outputs, and execution parameters on a cluster. It also creates a task run for each task in the pipeline run. +A `PipelineRun` is the running instance of a pipeline. It instantiates a pipeline for execution with specific inputs, outputs, and execution parameters on a cluster. It also creates a task run for each task in the pipeline run. -The pipeline runs the tasks sequentially until they are complete or a task fails. The `status` field tracks and the progress of each task run and stores it for monitoring and auditing purposes. +The pipeline runs the tasks sequentially until they are complete or a task fails. The `status` field tracks the progress of each task run and stores it for monitoring and auditing purposes. diff --git a/modules/op-about-pipelines.adoc b/modules/op-about-pipelines.adoc index 48fe4da75504..299edc7eef8d 100644 --- a/modules/op-about-pipelines.adoc +++ b/modules/op-about-pipelines.adoc @@ -5,7 +5,7 @@ [id="about-pipelines_{context}"] = Pipelines -A _Pipeline_ is a collection of `Task` resources arranged in a specific order of execution. They are executed to construct complex workflows that automate the build, deployment and delivery of applications. You can define a CI/CD workflow for your application using pipelines containing one or more tasks. +A `Pipeline` is a collection of `Task` resources arranged in a specific order of execution. They are executed to construct complex workflows that automate the build, deployment, and delivery of applications. You can define a CI/CD workflow for your application using pipelines containing one or more tasks. A `Pipeline` resource definition consists of a number of fields or attributes, which together enable the pipeline to accomplish a specific goal. Each `Pipeline` resource definition must contain at least one `Task` resource, which ingests specific inputs and produces specific outputs. The pipeline definition can also optionally include _Conditions_, _Workspaces_, _Parameters_, or _Resources_ depending on the application requirements. @@ -89,14 +89,14 @@ spec: <4> ---- <1> Pipeline API version `v1beta1`. <2> Specifies the type of Kubernetes object. In this example, `Pipeline`. -<3> Unique name of this Pipeline.
-<4> Specifies the definition and structure of the Pipeline. -<5> Workspaces used across all the Tasks in the Pipeline. -<6> Parameters used across all the Tasks in the Pipeline. -<7> Specifies the list of Tasks used in the Pipeline. -<8> Task `build-image`, which uses the `buildah` ClusterTask to build application images from a given Git repository. -<9> Task `apply-manifests`, which uses a user-defined Task with the same name. -<10> Specifies the sequence in which Tasks are run in a Pipeline. In this example, the `apply-manifests` Task is run only after the `build-image` Task is completed. +<3> Unique name of this pipeline. +<4> Specifies the definition and structure of the pipeline. +<5> Workspaces used across all the tasks in the pipeline. +<6> Parameters used across all the tasks in the pipeline. +<7> Specifies the list of tasks used in the pipeline. +<8> Task `build-image`, which uses the `buildah` `ClusterTask` to build application images from a given Git repository. +<9> Task `apply-manifests`, which uses a user-defined task with the same name. +<10> Specifies the sequence in which tasks are run in a pipeline. In this example, the `apply-manifests` task is run only after the `build-image` task is completed. [NOTE] ==== diff --git a/modules/op-about-taskrun.adoc b/modules/op-about-taskrun.adoc index 1c8794cdd8f2..9e1fc9d98394 100644 --- a/modules/op-about-taskrun.adoc +++ b/modules/op-about-taskrun.adoc @@ -5,11 +5,11 @@ [id="about-taskrun_{context}"] = TaskRun -A _TaskRun_ instantiates a Task for execution with specific inputs, outputs, and execution parameters on a cluster. It can be invoked on its own or as part of a PipelineRun for each Task in a pipeline. +A `TaskRun` instantiates a task for execution with specific inputs, outputs, and execution parameters on a cluster. It can be invoked on its own or as part of a pipeline run for each task in a pipeline. -A Task consists of one or more Steps that execute container images, and each container image performs a specific piece of build work. A TaskRun executes the Steps in a Task in the specified order, until all Steps execute successfully or a failure occurs. A TaskRun is automatically created by a PipelineRun for each Task in a Pipeline. +A task consists of one or more steps that execute container images, and each container image performs a specific piece of build work. A task run executes the steps in a task in the specified order, until all steps execute successfully or a failure occurs. A `TaskRun` is automatically created by a `PipelineRun` for each task in a pipeline. -The following example shows a TaskRun that runs the `apply-manifests` Task with the relevant input parameters: +The following example shows a task run that runs the `apply-manifests` task with the relevant input parameters: [source,yaml] ---- apiVersion: tekton.dev/v1beta1 <1> @@ -26,9 +26,9 @@ spec: <4> persistentVolumeClaim: claimName: source-pvc ---- -<1> TaskRun API version `v1beta1`. +<1> The task run API version `v1beta1`. <2> Specifies the type of Kubernetes object. In this example, `TaskRun`. -<3> Unique name to identify this TaskRun. -<4> Definition of the TaskRun. For this TaskRun, the Task and the required workspace are specified. -<5> Name of the Task reference used for this TaskRun. This TaskRun executes the `apply-manifests` Task. -<6> Workspace used by the TaskRun. +<3> Unique name to identify this task run. +<4> Definition of the task run. For this task run, the task and the required workspace are specified. 
+<5> Name of the task reference used for this task run. This task run executes the `apply-manifests` task. +<6> Workspace used by the task run. diff --git a/modules/op-about-tasks.adoc b/modules/op-about-tasks.adoc index 840c09c67e4a..8632dc6c38ec 100644 --- a/modules/op-about-tasks.adoc +++ b/modules/op-about-tasks.adoc @@ -5,7 +5,7 @@ [id="about-tasks_{context}"] = Tasks -_Tasks_ are the building blocks of a pipeline and consists of sequentially executed steps. It is essentially a function of inputs and outputs. A task can run individually or as a part of the pipeline. Tasks are reusable and can be used in multiple Pipelines. +`Task` resources are the building blocks of a pipeline and consist of sequentially executed steps. A task is essentially a function of inputs and outputs. A task can run individually or as a part of the pipeline. Tasks are reusable and can be used in multiple pipelines. _Steps_ are a series of commands that are sequentially executed by the task and achieve a specific goal, such as building an image. Every task runs as a pod, and each step runs as a container within that pod. Because steps run within the same pod, they can access the same volumes for caching files, config maps, and secrets. diff --git a/modules/op-about-whenexpression.adoc b/modules/op-about-whenexpression.adoc index 77abbbf36cb4..c4191803c06f 100644 --- a/modules/op-about-whenexpression.adoc +++ b/modules/op-about-whenexpression.adoc @@ -141,7 +141,7 @@ spec: storage: 16Mi ---- <1> Specifies the type of Kubernetes object. In this example, `PipelineRun`. -<2> Task `create-file` used in the Pipeline. +<2> Task `create-file` used in the pipeline. <3> `when` expression that specifies to execute the `echo-file-exists` task only if the `exists` result from the `check-file` task is `yes`. <4> `when` expression that specifies to skip the `task-should-be-skipped-1` task only if the `path` parameter is `README.md`. <5> `when` expression that specifies to execute the `finally-task-should-be-executed` task only if the execution status of the `echo-file-exists` task and the task status is `Succeeded`, the `exists` result from the `check-file` task is `yes`, and the `path` parameter is `README.md`. diff --git a/modules/op-about-workspace.adoc b/modules/op-about-workspace.adoc index fcc5cceef34b..679aa537ac96 100644 --- a/modules/op-about-workspace.adoc +++ b/modules/op-about-workspace.adoc @@ -7,28 +7,28 @@ [NOTE] ==== -It is recommended that you use Workspaces instead of PipelineResources in OpenShift Pipelines, as PipelineResources are difficult to debug, limited in scope, and make Tasks less reusable. +It is recommended that you use workspaces instead of the `PipelineResource` CRs in {pipelines-title}, as `PipelineResource` CRs are difficult to debug, limited in scope, and make tasks less reusable. ==== -Workspaces declare shared storage volumes that a Task in a Pipeline needs at runtime to receive input or provide output. Instead of specifying the actual location of the volumes, Workspaces enable you to declare the filesystem or parts of the filesystem that would be required at runtime. A Task or Pipeline declares the Workspace and you must provide the specific location details of the volume. It is then mounted into that Workspace in a TaskRun or a PipelineRun. This separation of volume declaration from runtime storage volumes makes the Tasks reusable, flexible, and independent of the user environment.
+Workspaces declare shared storage volumes that a task in a pipeline needs at runtime to receive input or provide output. Instead of specifying the actual location of the volumes, workspaces enable you to declare the filesystem or parts of the filesystem that would be required at runtime. A task or pipeline declares the workspace and you must provide the specific location details of the volume. It is then mounted into that workspace in a task run or a pipeline run. This separation of volume declaration from runtime storage volumes makes the tasks reusable, flexible, and independent of the user environment. -With Workspaces, you can: +With workspaces, you can: -* Store Task inputs and outputs -* Share data among Tasks -* Use it as a mount point for credentials held in Secrets -* Use it as a mount point for configurations held in ConfigMaps +* Store task inputs and outputs +* Share data among tasks +* Use it as a mount point for credentials held in secrets +* Use it as a mount point for configurations held in config maps * Use it as a mount point for common tools shared by an organization * Create a cache of build artifacts that speed up jobs -You can specify Workspaces in the TaskRun or PipelineRun using: +You can specify workspaces in the `TaskRun` or `PipelineRun` using: -* A read-only ConfigMaps or Secret -* An existing PersistentVolumeClaim shared with other Tasks -* A PersistentVolumeClaim from a provided VolumeClaimTemplate -* An emptyDir that is discarded when the TaskRun completes +* A read-only config map or secret +* An existing persistent volume claim shared with other tasks +* A persistent volume claim from a provided volume claim template +* An `emptyDir` that is discarded when the task run completes -The following example shows a code snippet of the `build-and-deploy` Pipeline, which declares a `shared-workspace` Workspace for the `build-image` and `apply-manifests` Tasks as defined in the Pipeline. +The following example shows a code snippet of the `build-and-deploy` pipeline, which declares a `shared-workspace` workspace for the `build-image` and `apply-manifests` tasks as defined in the pipeline. [source,yaml] ---- @@ -66,16 +66,16 @@ spec: - build-image ... ---- -<1> List of Workspaces shared between the Tasks defined in the Pipeline. A Pipeline can define as many Workspaces as required. In this example, only one Workspace named `shared-workspace` is declared. -<2> Definition of Tasks used in the Pipeline. This snippet defines two Tasks, `build-image` and `apply-manifests`, which share a common Workspace. -<3> List of Workspaces used in the `build-image` Task. A Task definition can include as many Workspaces as it requires. However, it is recommended that a Task uses at most one writable Workspace. -<4> Name that uniquely identifies the Workspace used in the Task. This Task uses one Workspace named `source`. -<5> Name of the Pipeline Workspace used by the Task. Note that the Workspace `source` in turn uses the Pipeline Workspace named `shared-workspace`. -<6> List of Workspaces used in the `apply-manifests` Task. Note that this Task shares the `source` Workspace with the `build-image` Task. +<1> List of workspaces shared between the tasks defined in the pipeline. A pipeline can define as many workspaces as required. In this example, only one workspace named `shared-workspace` is declared. +<2> Definition of tasks used in the pipeline. This snippet defines two tasks, `build-image` and `apply-manifests`, which share a common workspace. 
+<3> List of workspaces used in the `build-image` task. A task definition can include as many workspaces as it requires. However, it is recommended that a task uses at most one writable workspace. +<4> Name that uniquely identifies the workspace used in the task. This task uses one workspace named `source`. +<5> Name of the pipeline workspace used by the task. Note that the workspace `source` in turn uses the pipeline workspace named `shared-workspace`. +<6> List of workspaces used in the `apply-manifests` task. Note that this task shares the `source` workspace with the `build-image` task. Workspaces help tasks share data, and allow you to specify one or more volumes that each task in the pipeline requires during execution. You can create a persistent volume claim or provide a volume claim template that creates a persistent volume claim for you. -The following code snippet of the `build-deploy-api-pipelinerun` PipelineRun uses a volume claim template to create a persistent volume claim for defining the storage volume for the `shared-workspace` Workspace used in the `build-and-deploy` Pipeline. +The following code snippet of the `build-deploy-api-pipelinerun` pipeline run uses a volume claim template to create a persistent volume claim for defining the storage volume for the `shared-workspace` workspace used in the `build-and-deploy` pipeline. [source,yaml] ---- @@ -99,6 +99,6 @@ spec: requests: storage: 500Mi ---- -<1> Specifies the list of Pipeline Workspaces for which volume binding will be provided in the PipelineRun. -<2> The name of the Workspace in the Pipeline for which the volume is being provided. +<1> Specifies the list of pipeline workspaces for which volume binding will be provided in the pipeline run. +<2> The name of the workspace in the pipeline for which the volume is being provided. <3> Specifies a volume claim template that creates a persistent volume claim to define the storage volume for the workspace. diff --git a/modules/op-alternative-approaches-compute-resource-quota-pipelines.adoc b/modules/op-alternative-approaches-compute-resource-quota-pipelines.adoc index 0615f421d509..260b77998b04 100644 --- a/modules/op-alternative-approaches-compute-resource-quota-pipelines.adoc +++ b/modules/op-alternative-approaches-compute-resource-quota-pipelines.adoc @@ -4,7 +4,7 @@ [id="alternative-approaches-compute-resource-quota-pipelines_{context}"] -= Alternative approaches for limiting compute resource consumption in OpenShift Pipelines += Alternative approaches for limiting compute resource consumption in {pipelines-shortname} To attain some degree of control over the usage of compute resources by a pipeline, consider the following alternative approaches: diff --git a/modules/op-constructing-pipelines-using-pipeline-builder.adoc b/modules/op-constructing-pipelines-using-pipeline-builder.adoc index 9d6a83f55797..fd2e87d131e7 100644 --- a/modules/op-constructing-pipelines-using-pipeline-builder.adoc +++ b/modules/op-constructing-pipelines-using-pipeline-builder.adoc @@ -4,13 +4,13 @@ :_content-type: PROCEDURE [id="op-constructing-pipelines-using-pipeline-builder_{context}"] -= Constructing Pipelines using the Pipeline builder += Constructing pipelines using the Pipeline builder [role="_abstract"] In the *Developer* perspective of the console, you can use the *+Add* -> *Pipeline* -> *Pipeline builder* option to: * Configure pipelines using either the *Pipeline builder* or the *YAML view*. -* Construct a pipeline flow using existing tasks and cluster tasks. 
When you install the OpenShift Pipelines Operator, it adds reusable pipeline cluster tasks to your cluster. +* Construct a pipeline flow using existing tasks and cluster tasks. When you install the {pipelines-shortname} Operator, it adds reusable pipeline cluster tasks to your cluster. [IMPORTANT] ==== @@ -25,7 +25,7 @@ In {pipelines-title} 1.10, cluster task functionality is deprecated and is plann [IMPORTANT] ==== -In the developer perspective, you can create a customized pipeline using your own set of curated tasks. To search, install, and upgrade your tasks directly from the developer console, your cluster administrator needs to install and deploy a local Tekton Hub instance and link that hub to the OpenShift Container Platform cluster. For more details, see _Using Tekton Hub with OpenShift Pipelines_ in the _Additional resources_ section. +In the developer perspective, you can create a customized pipeline using your own set of curated tasks. To search, install, and upgrade your tasks directly from the developer console, your cluster administrator needs to install and deploy a local Tekton Hub instance and link that hub to the {product-title} cluster. For more details, see _Using Tekton Hub with {pipelines-shortname}_ in the _Additional resources_ section. If you do not deploy any local Tekton Hub instance, by default, you can only access the cluster tasks, namespace tasks and public Tekton Hub tasks. ==== @@ -36,7 +36,7 @@ If you do not deploy any local Tekton Hub instance, by default, you can only acc + [NOTE] ==== -The *Pipeline builder* view supports a limited number of fields whereas the *YAML view* supports all available fields. Optionally, you can also use the Operator-installed, reusable snippets and samples to create detailed Pipelines. +The *Pipeline builder* view supports a limited number of fields, whereas the *YAML view* supports all available fields. Optionally, you can also use the Operator-installed, reusable snippets and samples to create detailed pipelines. ==== + .YAML view diff --git a/modules/op-creating-pipelines-along-with-applications.adoc b/modules/op-creating-pipelines-along-with-applications.adoc new file mode 100644 index 000000000000..2a9b604c8211 --- /dev/null +++ b/modules/op-creating-pipelines-along-with-applications.adoc @@ -0,0 +1,15 @@ +// This module is included in the following assembly: +// +// *openshift_pipelines/working-with-pipelines-using-the-developer-perspective.adoc + +:_content-type: CONCEPT +[id="op-creating-pipelines-along-with-applications_{context}"] += Creating {pipelines-shortname} along with applications + +[role="_abstract"] +To create pipelines along with applications, use the *From Git* option in the *+Add* view of the *Developer* perspective. You can view all of your available pipelines and select the pipelines you want to use to create applications while importing your code or deploying an image. + +The Tekton Hub Integration is enabled by default, and you can see tasks from the Tekton Hub that are supported by your cluster. If administrators opt out of the Tekton Hub Integration, the Tekton Hub tasks are no longer displayed. You can also check whether a webhook URL exists for a generated pipeline. Default webhooks are added for the pipelines that are created using the *+Add* flow, and the URL is visible in the side panel of the selected resources in the *Topology* view.
+
+[role="_additional-resources"]
+For more information, see xref:../../applications/creating_applications/odc-creating-applications-using-developer-perspective.adoc#odc-importing-codebase-from-git-to-create-application_odc-creating-applications-using-developer-perspective[Creating applications using the Developer perspective].
diff --git a/modules/op-customizing-pipelines-as-code-configuration.adoc b/modules/op-customizing-pipelines-as-code-configuration.adoc
index 0177233ed543..adf767f2cfbe 100644
--- a/modules/op-customizing-pipelines-as-code-configuration.adoc
+++ b/modules/op-customizing-pipelines-as-code-configuration.adoc
@@ -15,7 +15,7 @@ To customize {pac}, cluster administrators can configure the following parameter
 | Parameter | Description | Default
 
-| `application-name` | The name of the application. For example, the name displayed in the GitHub Checks labels. | `"Pipelines as Code CI"` 
+| `application-name` | The name of the application. For example, the name displayed in the GitHub Checks labels. | `"Pipelines as Code CI"`
 
 | `max-keep-days` | The number of days for which the executed pipeline runs are kept in the `pipelines-as-code` namespace.
 
@@ -29,7 +29,7 @@ Note that this `ConfigMap` setting does not affect the cleanups of a user's pipe
 | `hub-catalog-name` | The Tekton Hub catalog name. | `tekton`
 
-| `tekton-dashboard-url` | The URL of the Tekton Hub dashboard. Pipelines as Code uses this URL to generate a `PipelineRun` URL on the Tekton Hub dashboard. | NA
+| `tekton-dashboard-url` | The URL of the Tekton Dashboard. {pac} uses this URL to generate a `PipelineRun` URL on the Tekton Dashboard. | NA
 
 | `bitbucket-cloud-check-source-ip` | Indicates whether to secure the service requests by querying IP ranges for a public Bitbucket. Changing the parameter's default value might result in a security issue. | `enabled`
 
@@ -39,7 +39,7 @@ Note that this `ConfigMap` setting does not affect the cleanups of a user's pipe
 | `default-max-keep-runs` | A default limit for the `max-keep-run` value for a pipeline run. If defined, the value is applied to all pipeline runs that do not have a `max-keep-run` annotation. | NA
 
-| `auto-configure-new-github-repo` | Configures new GitHub repositories automatically. Pipelines as Code sets up a namespace and creates a custom resource for your repository. This parameter is only supported with GitHub applications. | `disabled`
+| `auto-configure-new-github-repo` | Configures new GitHub repositories automatically. {pac} sets up a namespace and creates a custom resource for your repository. This parameter is only supported with GitHub applications. | `disabled`
 
 | `auto-configure-repo-namespace-template` | Configures a template to automatically generate the namespace for your new repository, if `auto-configure-new-github-repo` is enabled. | `{repo_name}-pipelines`
diff --git a/modules/op-deleting-pipelines.adoc b/modules/op-deleting-pipelines.adoc
index 22068ea21766..82b92a2e5e6a 100644
--- a/modules/op-deleting-pipelines.adoc
+++ b/modules/op-deleting-pipelines.adoc
@@ -4,9 +4,9 @@
 :_content-type: PROCEDURE
 [id="op-deleting-pipelines_{context}"]
-= Deleting Pipelines
+= Deleting pipelines
 
-You can delete the Pipelines in your cluster using the *Developer* perspective of the web console.
+You can delete the pipelines in your cluster using the *Developer* perspective of the web console.
 
 .Procedure
 . In the *Pipelines* view of the *Developer* perspective, click the *Options* {Kebab} menu adjoining a pipeline, and select *Delete Pipeline*.
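As a reference for the {pac} parameters described in the preceding table, the settings are plain key-value entries in a config map. The following is a minimal sketch; the config map name and namespace are assumptions based on a default Operator installation, so verify them in your cluster:

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: pipelines-as-code # assumed config map name
  namespace: openshift-pipelines # assumed namespace for an Operator-based install
data:
  application-name: "Pipelines as Code CI"
  max-keep-days: "3" # keep executed pipeline runs for three days
  auto-configure-new-github-repo: "disabled"
----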
diff --git a/modules/op-deleting-the-pipelines-component-and-custom-resources.adoc b/modules/op-deleting-the-pipelines-component-and-custom-resources.adoc
index 01c5d0f778b5..99c5818f5fba 100644
--- a/modules/op-deleting-the-pipelines-component-and-custom-resources.adoc
+++ b/modules/op-deleting-the-pipelines-component-and-custom-resources.adoc
@@ -21,7 +21,7 @@ Delete the Custom Resources (CRs) created by default during installation of the
 +
 [NOTE]
 ====
-Deleting the CRs will delete the {pipelines-title} components, and all the Tasks and Pipelines on the cluster will be lost.
+Deleting the CRs deletes the {pipelines-title} components, and all the tasks and pipelines on the cluster are lost.
 ====
 . Click *Delete* to confirm the deletion of the CRs.
diff --git a/modules/op-editing-pipelines.adoc b/modules/op-editing-pipelines.adoc
index 53f6bb15a041..559a33f48408 100644
--- a/modules/op-editing-pipelines.adoc
+++ b/modules/op-editing-pipelines.adoc
@@ -4,16 +4,16 @@
 :_content-type: PROCEDURE
 [id="op-editing-pipelines_{context}"]
-= Editing Pipelines
+= Editing pipelines
 
-You can edit the Pipelines in your cluster using the *Developer* perspective of the web console:
+You can edit the pipelines in your cluster using the *Developer* perspective of the web console:
 
 .Procedure
-. In the *Pipelines* view of the *Developer* perspective, select the Pipeline you want to edit to see the details of the Pipeline.
+. In the *Pipelines* view of the *Developer* perspective, select the pipeline you want to edit to see the details of the pipeline.
 On the *Pipeline Details* page, click *Actions* and select *Edit Pipeline*.
 . On the *Pipeline builder* page, you can perform the following tasks:
-* Add additional Tasks, parameters, or resources to the Pipeline.
-* Click the Task you want to modify to see the Task details in the side panel and modify the required Task details, such as the display name, parameters, and resources.
-* Alternatively, to delete the Task, click the Task, and in the side panel, click *Actions* and select *Remove Task*.
-. Click *Save* to save the modified Pipeline.
+* Add additional tasks, parameters, or resources to the pipeline.
+* Click the task you want to modify to see the task details in the side panel and modify the required task details, such as the display name, parameters, and resources.
+* Alternatively, to delete the task, click the task, and in the side panel, click *Actions* and select *Remove Task*.
+. Click *Save* to save the modified pipeline.
diff --git a/modules/op-installing-pipelines-as-code-on-an-openshift-cluster.adoc b/modules/op-installing-pipelines-as-code-on-an-openshift-cluster.adoc
index c68b870b5043..478c0d997f95 100644
--- a/modules/op-installing-pipelines-as-code-on-an-openshift-cluster.adoc
+++ b/modules/op-installing-pipelines-as-code-on-an-openshift-cluster.adoc
@@ -7,7 +7,7 @@
 = Installing {pac} on an {product-title} cluster
 
 [role="_abstract"]
-{pac} is installed in the `openshift-pipelines` namespace when you install the {pipelines-title} Operator. For more details, see _Installing OpenShift Pipelines_ in the _Additional resources_ section.
+{pac} is installed in the `openshift-pipelines` namespace when you install the {pipelines-title} Operator. For more details, see _Installing {pipelines-shortname}_ in the _Additional resources_ section.
 
 To disable the default installation of {pac} with the Operator, set the value of the `enable` parameter to `false` in the `TektonConfig` custom resource.
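As a concrete illustration of the `enable` parameter mentioned above, the following `TektonConfig` sketch disables the default {pac} installation. The field path shown is an assumption based on the layout used by recent Operator versions and might differ in yours:

[source,yaml]
----
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config # the cluster-wide TektonConfig instance
spec:
  platforms:
    openshift:
      pipelinesAsCode:
        enable: false # skip the default Pipelines as Code installation
----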
diff --git a/modules/op-installing-pipelines-operator-in-web-console.adoc b/modules/op-installing-pipelines-operator-in-web-console.adoc
index 7a81871c97cc..db16a1f8442c 100644
--- a/modules/op-installing-pipelines-operator-in-web-console.adoc
+++ b/modules/op-installing-pipelines-operator-in-web-console.adoc
@@ -7,10 +7,10 @@
 
 You can install {pipelines-title} using the Operator listed in the {product-title} OperatorHub. When you install the {pipelines-title} Operator, the custom resources (CRs) required for the pipelines configuration are automatically installed along with the Operator.
 
-The default Operator custom resource definition (CRD) `config.operator.tekton.dev` is now replaced by `tektonconfigs.operator.tekton.dev`. In addition, the Operator provides the following additional CRDs to individually manage OpenShift Pipelines components:
+The default Operator custom resource definition (CRD) `config.operator.tekton.dev` is now replaced by `tektonconfigs.operator.tekton.dev`. In addition, the Operator provides the following additional CRDs to individually manage {pipelines-shortname} components:
 `tektonpipelines.operator.tekton.dev`, `tektontriggers.operator.tekton.dev`, and `tektonaddons.operator.tekton.dev`.
 
-If you have OpenShift Pipelines already installed on your cluster, the existing installation is seamlessly upgraded. The Operator will replace the instance of `config.operator.tekton.dev` on your cluster with an instance of `tektonconfigs.operator.tekton.dev` and additional objects of the other CRDs as necessary.
+If you have {pipelines-shortname} already installed on your cluster, the existing installation is seamlessly upgraded. The Operator replaces the instance of `config.operator.tekton.dev` on your cluster with an instance of `tektonconfigs.operator.tekton.dev`, and creates additional objects of the other CRDs as necessary.
 
 [WARNING]
 ====
diff --git a/modules/op-installing-pipelines-operator-using-the-cli.adoc b/modules/op-installing-pipelines-operator-using-the-cli.adoc
index 2b1936ce2b3b..ec57612a13f1 100644
--- a/modules/op-installing-pipelines-operator-using-the-cli.adoc
+++ b/modules/op-installing-pipelines-operator-using-the-cli.adoc
@@ -4,7 +4,7 @@
 :_content-type: PROCEDURE
 [id="op-installing-pipelines-operator-using-the-cli_{context}"]
-= Installing the OpenShift Pipelines Operator using the CLI
+= Installing the {pipelines-shortname} Operator using the CLI
 
 You can install the {pipelines-title} Operator from the OperatorHub using the CLI.
 
diff --git a/modules/op-mirroring-images-to-run-pipelines-in-restricted-environment.adoc b/modules/op-mirroring-images-to-run-pipelines-in-restricted-environment.adoc
index 56002cdda7ec..d4393bdde754 100644
--- a/modules/op-mirroring-images-to-run-pipelines-in-restricted-environment.adoc
+++ b/modules/op-mirroring-images-to-run-pipelines-in-restricted-environment.adoc
@@ -7,7 +7,7 @@
 
 = Mirroring images to run pipelines in a restricted environment
 
-To run OpenShift Pipelines in a disconnected cluster or a cluster provisioned in a restricted environment, ensure that either the Samples Operator is configured for a restricted network, or a cluster administrator has created a cluster with a mirrored registry.
+To run {pipelines-shortname} in a disconnected cluster or a cluster provisioned in a restricted environment, ensure that either the Samples Operator is configured for a restricted network, or a cluster administrator has created a cluster with a mirrored registry.
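Before the tutorial-based procedure that follows, note that administrators often populate such a mirrored registry with the oc-mirror plugin. The following `ImageSetConfiguration` is a minimal sketch under that assumption; the metadata path and catalog index version are illustrative and must match your environment:

[source,yaml]
----
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
storageConfig:
  local:
    path: ./mirror-metadata # hypothetical location for oc-mirror metadata
mirror:
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12 # assumed index version
    packages:
    - name: openshift-pipelines-operator-rh # the Red Hat OpenShift Pipelines Operator package
----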
The following procedure uses the `pipelines-tutorial` example to create a pipeline for an application in a restricted environment using a cluster with a mirrored registry.
 
 To ensure that the `pipelines-tutorial` example works in a restricted environment, you must mirror the respective builder images from the mirror registry for the front-end interface, `pipelines-vote-ui`; back-end interface, `pipelines-vote-api`; and the `cli`.
diff --git a/modules/op-pipelines-as-code-command-reference.adoc b/modules/op-pipelines-as-code-command-reference.adoc
index c1fa7affd46b..8a2c5f6141de 100644
--- a/modules/op-pipelines-as-code-command-reference.adoc
+++ b/modules/op-pipelines-as-code-command-reference.adoc
@@ -77,7 +77,7 @@ If you do not have an {product-title} cluster, it asks you for the public URL th
 
 === generate
 
-.Generating pipeline runs using Pipelines as Code
+.Generating pipeline runs using {pac}
 
 [options="header"]
 |===
diff --git a/modules/op-release-notes-1-0.adoc b/modules/op-release-notes-1-0.adoc
index 28200f6a74e1..a7944188165f 100644
--- a/modules/op-release-notes-1-0.adoc
+++ b/modules/op-release-notes-1-0.adoc
@@ -116,12 +116,12 @@ Alternatively, you can also modify the `buildah` cluster task YAML file directly
 * Previously, the `DeploymentConfig` task triggered a new deployment build even when an image build was already in progress. This caused the deployment of the pipeline to fail. With this fix, the `deploy task` command is now replaced with the `oc rollout status` command, which waits for the in-progress deployment to finish.
 * Support for the `APP_NAME` parameter is now added in pipeline templates.
 * Previously, the pipeline template for Java S2I failed to look up the image in the registry. With this fix, the image is looked up using the existing image pipeline resources instead of the user-provided `IMAGE_NAME` parameter.
-* All the OpenShift Pipelines images are now based on the Red Hat Universal Base Images (UBI).
+* All the {pipelines-shortname} images are now based on the Red Hat Universal Base Images (UBI).
 * Previously, when the pipeline was installed in a namespace other than `tekton-pipelines`, the `tkn version` command displayed the pipeline version as `unknown`. With this fix, the `tkn version` command now displays the correct pipeline version in any namespace.
 * The `-c` flag is no longer supported for the `tkn version` command.
 * Non-admin users can now list the cluster trigger bindings.
 * The event listener `CompareSecret` function is now fixed for the CEL Interceptor.
 * The `list`, `describe`, and `start` subcommands for tasks and cluster tasks now correctly display the output in case a task and a cluster task have the same name.
-* Previously, the OpenShift Pipelines Operator modified the privileged security context constraints (SCCs), which caused an error during cluster upgrade. This error is now fixed.
+* Previously, the {pipelines-shortname} Operator modified the privileged security context constraints (SCCs), which caused an error during cluster upgrade. This error is now fixed.
 * In the `tekton-pipelines` namespace, the timeouts of all task runs and pipeline runs are now set to the value of the `default-timeout-minutes` field using the config map.
 * Previously, the pipelines section in the web console was not displayed for non-admin users. This issue is now resolved.
diff --git a/modules/op-release-notes-1-1.adoc b/modules/op-release-notes-1-1.adoc
index 852bd112124d..0e25fd9946c1 100644
--- a/modules/op-release-notes-1-1.adoc
+++ b/modules/op-release-notes-1-1.adoc
@@ -19,7 +19,7 @@ In addition to the fixes and stability improvements, the following sections high
 [id="pipeline-new-features-1-1_{context}"]
 === Pipelines
 
-* Workspaces can now be used instead of pipeline resources. It is recommended that you use workspaces in OpenShift Pipelines, as pipeline resources are difficult to debug, limited in scope, and make tasks less reusable. For more details on workspaces, see the Understanding OpenShift Pipelines section.
+* Workspaces can now be used instead of pipeline resources. It is recommended that you use workspaces in {pipelines-shortname}, as pipeline resources are difficult to debug, limited in scope, and make tasks less reusable. For more details on workspaces, see the _Understanding {pipelines-shortname}_ section.
 * Workspace support for volume claim templates has been added:
 ** The volume claim template for a pipeline run and task run can now be added as a volume source for workspaces. The tekton-controller then creates a persistent volume claim (PVC) using the template that is seen as a PVC for all task runs in the pipeline. Thus, you do not need to define the PVC configuration every time it binds a workspace that spans multiple tasks.
 ** Support to find the name of the PVC when a volume claim template is used as a volume source is now available using variable substitution.
@@ -32,7 +32,7 @@ In addition to the fixes and stability improvements, the following sections high
 * The kube config writer now adds the `ClientKeyData` and the `ClientCertificateData` configurations in the resource structure to enable replacement of the pipeline resource type cluster with the kubeconfig-creator task.
 * The names of the `feature-flags` and the `config-defaults` config maps are now customizable.
 * Support for the host network in the pod template used by the task run is now available.
-* An Affinity Assistant is now available to support node affinity in task runs that share workspace volume. By default, this is disabled on OpenShift Pipelines.
+* An Affinity Assistant is now available to support node affinity in task runs that share a workspace volume. By default, this is disabled in {pipelines-shortname}.
 * The pod template has been updated to specify `imagePullSecrets` to identify secrets that the container runtime should use to authorize container image pulls when starting a pod.
 * Support for emitting warning events from the task run controller if the controller fails to update the task run is now available.
 * Standard or recommended Kubernetes labels have been added to all resources to identify resources belonging to an application or component.
diff --git a/modules/op-release-notes-1-6.adoc b/modules/op-release-notes-1-6.adoc
index 4c32d9b9a147..a1baad9deac0 100644
--- a/modules/op-release-notes-1-6.adoc
+++ b/modules/op-release-notes-1-6.adoc
@@ -113,7 +113,7 @@ The `Cancelled` status replaces the deprecated `PipelineRunCancelled` status, wh
 
 ** To configure node selection for the Operator's controller and webhook deployment, you edit the `config.nodeSelector` and `config.tolerations` fields in the specification for the `Subscription` CR, after installing the Operator.
 
-** To deploy the rest of the control plane pods of OpenShift Pipelines on an infrastructure node, update the `TektonConfig` CR with the `nodeSelector` and `tolerations` fields.
The modifications are then applied to all the pods created by Operator.
+** To deploy the rest of the control plane pods of {pipelines-shortname} on an infrastructure node, update the `TektonConfig` CR with the `nodeSelector` and `tolerations` fields. The modifications are then applied to all the pods created by the Operator.
 
 [id="deprecated-features-1-6_{context}"]
@@ -289,10 +289,10 @@ Error from server (InternalError): Internal error occurred: failed calling webho
 [id="fixed-issues-1-6-2_{context}"]
 === Fixed issues
 
-* Before this update, multiple instances of Tekton installer sets were created for a pipeline after upgrading to Red Hat OpenShift Pipelines 1.6.1 from an older version. With this update, the Operator ensures that only one instance of each type of `TektonInstallerSet` exists after an upgrade.
+* Before this update, multiple instances of Tekton installer sets were created for a pipeline after upgrading to {pipelines-title} 1.6.1 from an older version. With this update, the Operator ensures that only one instance of each type of `TektonInstallerSet` exists after an upgrade.
// https://issues.redhat.com/browse/SRVKP-1926
-* Before this update, all the reconcilers in the Operator used the component version to decide resource recreation during an upgrade to Red Hat OpenShift Pipelines 1.6.1 from an older version. As a result, those resources were not recreated whose component versions did not change in the upgrade. With this update, the Operator uses the Operator version instead of the component version to decide resource recreation during an upgrade.
+* Before this update, all the reconcilers in the Operator used the component version to decide resource recreation during an upgrade to {pipelines-title} 1.6.1 from an older version. As a result, the resources whose component versions did not change in the upgrade were not recreated. With this update, the Operator uses the Operator version instead of the component version to decide resource recreation during an upgrade.
// https://issues.redhat.com/browse/SRVKP-1928
 * Before this update, the pipelines webhook service was missing in the cluster after an upgrade. This was due to an upgrade deadlock on the config maps. With this update, a mechanism is added to disable webhook validation if the config maps are absent in the cluster. As a result, the pipelines webhook service persists in the cluster after an upgrade.
diff --git a/modules/op-release-notes-1-9.adoc b/modules/op-release-notes-1-9.adoc
index f37cb0085337..77c4518ae2ec 100644
--- a/modules/op-release-notes-1-9.adoc
+++ b/modules/op-release-notes-1-9.adoc
@@ -259,22 +259,22 @@ If you do not add any configuration data, you can use the default data in the AP
 [id="deprecated-features-1-9_{context}"]
 == Deprecated and removed features
 
-* In the Red Hat OpenShift Pipelines 1.9.0 release, `ClusterTasks` are deprecated and planned to be removed in a future release. As an alternative, you can use `Cluster Resolver`.
+* In the {pipelines-title} 1.9.0 release, `ClusterTasks` are deprecated and planned to be removed in a future release. As an alternative, you can use `Cluster Resolver`.
 
-* In the Red Hat OpenShift Pipelines 1.9.0 release, the use of the `triggers` and the `namespaceSelector` fields in a single `EventListener` specification is deprecated and planned to be removed in a future release. You can use these fields in different `EventListener` specifications successfully.
+* In the {pipelines-title} 1.9.0 release, the use of the `triggers` and the `namespaceSelector` fields in a single `EventListener` specification is deprecated and planned to be removed in a future release. You can use these fields in different `EventListener` specifications successfully.
 
-* In the Red Hat OpenShift Pipelines 1.9.0 release, the `tkn pipelinerun describe` command does not display timeouts for the `PipelineRun` resource.
+* In the {pipelines-title} 1.9.0 release, the `tkn pipelinerun describe` command does not display timeouts for the `PipelineRun` resource.
 
-* In the Red Hat OpenShift Pipelines 1.9.0 release, the PipelineResource` custom resource (CR) is deprecated. The `PipelineResource` CR was a Tech Preview feature and part of the `tekton.dev/v1alpha1` API.
+* In the {pipelines-title} 1.9.0 release, the `PipelineResource` custom resource (CR) is deprecated. The `PipelineResource` CR was a Tech Preview feature and part of the `tekton.dev/v1alpha1` API.
 
-* In the Red Hat OpenShift Pipelines 1.9.0 release, custom image parameters from cluster tasks are deprecated. As an alternative, you can copy a cluster task and use your custom image in it.
+* In the {pipelines-title} 1.9.0 release, custom image parameters from cluster tasks are deprecated. As an alternative, you can copy a cluster task and use your custom image in it.
 
 [id="known-issues-1-9_{context}"]
 == Known issues
// .Operator
-* The `chains-secret` and `chains-config` config maps are removed after you uninstall the Red Hat OpenShift Pipelines Operator. As they contain user data, they should be preserved and not deleted.
+* The `chains-secret` and `chains-config` config maps are removed after you uninstall the {pipelines-title} Operator. As they contain user data, they should be preserved and not deleted.
// https://issues.redhat.com/browse/SRVKP-2396
 
// .PAC
@@ -351,7 +351,7 @@ spec:
 * Before this update, if namespaces were removed from the cluster, then the operator did not remove namespaces from the `ClusterInterceptor ClusterRoleBinding` subjects. With this update, this issue has been resolved, and the operator removes the namespaces from the `ClusterInterceptor ClusterRoleBinding` subjects.
// Shubham Minglani
 
-* Before this update, the default installation of the Red Hat OpenShift Pipelines Operator resulted in the `pipelines-scc-rolebinding security context constraint` (SCC) role binding resource remaining in the cluster. With this update, the default installation of the Red Hat OpenShift Pipelines Operator results in the `pipelines-scc-rolebinding security context constraint` (SCC) role binding resource resource being removed from the cluster.
+* Before this update, the default installation of the {pipelines-title} Operator resulted in the `pipelines-scc-rolebinding` security context constraint (SCC) role binding resource remaining in the cluster. With this update, the default installation of the {pipelines-title} Operator results in the `pipelines-scc-rolebinding` security context constraint (SCC) role binding resource being removed from the cluster.
// https://github.com/tektoncd/operator/pull/1156
// https://issues.redhat.com/browse/SRVKP-2520
// Shubham Minglani
diff --git a/modules/op-uninstalling-the-pipelines-operator.adoc b/modules/op-uninstalling-the-pipelines-operator.adoc
index 1fe66b2ed583..5d099cadbada 100644
--- a/modules/op-uninstalling-the-pipelines-operator.adoc
+++ b/modules/op-uninstalling-the-pipelines-operator.adoc
@@ -10,6 +10,6 @@
 
 .Procedure
 . 
From the *Operators* -> *OperatorHub* page, use the *Filter by keyword* box to search for `{pipelines-title} Operator`. -. Click the *OpenShift Pipelines Operator* tile. The Operator tile indicates it is installed. +. Click the *{pipelines-shortname} Operator* tile. The Operator tile indicates it is installed. -. In the *OpenShift Pipelines Operator* descriptor page, click *Uninstall*. +. In the *{pipelines-shortname} Operator* descriptor page, click *Uninstall*. diff --git a/modules/op-using-custom-pipeline-template-for-git-import.adoc b/modules/op-using-custom-pipeline-template-for-git-import.adoc index 4f42bbd4ff20..7d3664da60fb 100644 --- a/modules/op-using-custom-pipeline-template-for-git-import.adoc +++ b/modules/op-using-custom-pipeline-template-for-git-import.adoc @@ -22,7 +22,7 @@ Ensure that the {pipelines-title} 1.5 or later is installed and available in all .Procedure . Log in to the {product-title} web console as a cluster administrator. -. In the *Administrator* perspective, use the left navigation panel to go to the _Pipelines_ section. +. In the *Administrator* perspective, use the left navigation panel to go to the *Pipelines* section. .. From the *Project* drop-down, select the *openshift* project. This ensures that the subsequent steps are performed in the `openshift` namespace. .. From the list of available pipelines, select a pipeline that is appropriate for building and deploying your application. For example, if your application requires a `node.js` runtime environment, select the *s2i-nodejs* pipeline. + @@ -68,7 +68,7 @@ As a cluster admin, you can disable the installation of the default pipeline tem + . Create a custom pipeline template: -.. Use the left navigation panel to go to the _Pipelines_ section. +.. Use the left navigation panel to go to the *Pipelines* section. .. From the *Create* drop-down, select *Pipeline*. .. Create the required pipeline in the `openshift` namespace. Give it a different name than the default one (for example, `custom-nodejs`). You can use the downloaded default pipeline template as a starting point and customize it. + diff --git a/modules/op-using-pipelines-as-code-with-a-github-app.adoc b/modules/op-using-pipelines-as-code-with-a-github-app.adoc index a09afad109ad..8efb5219f438 100644 --- a/modules/op-using-pipelines-as-code-with-a-github-app.adoc +++ b/modules/op-using-pipelines-as-code-with-a-github-app.adoc @@ -7,7 +7,7 @@ = Using {pac} with a GitHub App [role="_abstract"] -GitHub Apps act as a point of integration with {pipelines-title} and bring the advantage of Git-based workflows to OpenShift Pipelines. Cluster administrators can configure a single GitHub App for all cluster users. For GitHub Apps to work with Pipelines as Code, ensure that the webhook of the GitHub App points to the Pipelines as Code event listener route (or ingress endpoint) that listens for GitHub events. +GitHub Apps act as a point of integration with {pipelines-title} and bring the advantage of Git-based workflows to {pipelines-shortname}. Cluster administrators can configure a single GitHub App for all cluster users. For GitHub Apps to work with {pac}, ensure that the webhook of the GitHub App points to the {pac} event listener route (or ingress endpoint) that listens for GitHub events. [id="configuring-github-app-for-pac"] == Configuring a GitHub App @@ -31,7 +31,7 @@ To create and configure a GitHub App manually for {pac}, perform the following s . 
Provide the following information in the GitHub App form:
-* **GitHub Application Name**: `OpenShift Pipelines`
+* **GitHub Application Name**: `{pipelines-shortname}`
* **Homepage URL**: OpenShift Console URL
* **Webhook URL**: The {pac} route or ingress URL. You can find it by running the command `echo https://$(oc get route -n openshift-pipelines pipelines-as-code-controller -o jsonpath='{.spec.host}')`.
* **Webhook secret**: An arbitrary secret. You can generate a secret by executing the command `openssl rand -hex 20`.
@@ -62,7 +62,7 @@ To create and configure a GitHub App manually for {pac}, perform the following s
 
. In the **Private keys** section, click **Generate Private key** to automatically generate and download a private key for the GitHub App. Securely store the private key for future reference and use.
 
-. Install the created App on a repository that you want to use with Pipelines as Code.
+. Install the created App on a repository that you want to use with {pac}.
 
[id="configuring-pac-for-github-app"]