12 changes: 8 additions & 4 deletions cicd/pipelines/using-pipelines-as-code.adoc
@@ -6,18 +6,18 @@ include::_attributes/common-attributes.adoc[]

toc::[]

// :FeatureName: Pipelines as Code

// :FeatureName: Pipelines as Code
[role="_abstract"]
With {pac}, cluster administrators and users with the required privileges can define pipeline templates as part of source code Git repositories. When triggered by a source code push or a pull request for the configured Git repository, the feature runs the pipeline and reports the status.
With {pac}, cluster administrators and users with the required privileges can define pipeline templates as part of source code Git repositories. When triggered by a source code push or a pull request for the configured Git repository, {pac} runs the pipeline and reports the status.

[id="pac-key-features"]
== Key features
{pac} supports the following features:

* Pull request status and control on the platform hosting the Git repository.
* GitHub Checks API to set the status of a pipeline run, including rechecks.
* GitHub pull request and commit events.
* GitHub pull request and commit events.
* Pull request actions in comments, such as `/retest`.
* Git events filtering and a separate pipeline for each event.
* Automatic task resolution in {pipelines-shortname}, including local tasks, Tekton Hub, and remote URLs.
@@ -31,7 +31,7 @@ include::modules/op-installing-pipelines-as-code-on-an-openshift-cluster.adoc[le
include::modules/op-installing-pipelines-as-code-cli.adoc[leveloffset=+1]

[id="using-pipelines-as-code-with-a-git-repository-hosting-service-provider"]
== Using {pac} with a Git repository hosting service provider
== Using {pac} with a Git repository hosting service provider

[role="_abstract"]
After installing {pac}, cluster administrators can configure a Git repository hosting service provider. Currently, the following services are supported:
@@ -51,6 +51,8 @@ include::modules/op-using-pipelines-as-code-with-a-github-app.adoc[leveloffset=+

include::modules/op-creating-a-github-application-in-administrator-perspective.adoc[leveloffset=+2]

include::modules/op-scoping-github-token.adoc[leveloffset=+2]

include::modules/op-using-pipelines-as-code-with-github-webhook.adoc[leveloffset=+1]

.Additional resources
@@ -90,6 +92,8 @@ include::modules/op-using-repository-crd-with-pipelines-as-code.adoc[leveloffset

include::modules/op-setting-concurrency-limits-in-repository-crd.adoc[leveloffset=+2]

include::modules/op-changing-source-branch-in-repository-crd.adoc[leveloffset=+2]

include::modules/op-custom-parameter-expansion.adoc[leveloffset=+2]

include::modules/op-using-pipelines-as-code-resolver.adoc[leveloffset=+1]
29 changes: 29 additions & 0 deletions modules/op-changing-source-branch-in-repository-crd.adoc
@@ -0,0 +1,29 @@
// This module is included in the following assembly:
//
// *cicd/pipelines/using-pipelines-as-code.adoc

:_content-type: REFERENCE
[id="changing-source-branch-in-repository-crd_{context}"]
= Changing the source branch for the pipeline definition

[role="_abstract"]
By default, when processing a push event or a pull request event, {pac} fetches the pipeline definition from the branch that triggered the event. You can use the `pipelinerun_provenance` setting in the `Repository` custom resource definition (CRD) to fetch the definition from the default branch configured on the Git repository provider, such as `main`, `master`, or `trunk`.

[source,yaml]
----
apiVersion: "pipelinesascode.tekton.dev/v1alpha1"
kind: Repository
metadata:
name: my-repo
namespace: target-namespace
spec:
# ...
settings:
pipelinerun_provenance: "default_branch"
# ...
----

[NOTE]
====
You can use this setting as a security precaution. With the default behavior, {pac} uses the pipeline definition from the submitted pull request. With the `default_branch` setting, the pipeline definition must be merged into the default branch before it runs. This requirement ensures that changes are verified as fully as possible during merge review.
====
23 changes: 14 additions & 9 deletions modules/op-customizing-pipelines-as-code-configuration.adoc
@@ -4,10 +4,10 @@

:_content-type: REFERENCE
[id="customizing-pipelines-as-code-configuration_{context}"]
= Customizing {pac} configuration
= Customizing {pac} configuration

[role="_abstract"]
To customize {pac}, cluster administrators can configure the following parameters using the `pipelines-as-code` config map in the `openshift-pipelines` namespace:
To customize {pac}, cluster administrators can configure the following parameters in the `TektonConfig` custom resource, in the `pipelinesAsCode.settings` spec:

.Customizing {pac} configuration
[options="header"]
@@ -17,11 +17,7 @@ To customize {pac}, cluster administrators can configure the following parameter

| `application-name` | The name of the application. For example, the name displayed in the GitHub Checks labels. | `"Pipelines as Code CI"`

| `max-keep-days` | The number of the days for which the executed pipeline runs are kept in the `openshift-pipelines` namespace.

Note that this `ConfigMap` setting does not affect the cleanups of a user's pipeline runs, which are controlled by the annotations on the pipeline run definition in the user's GitHub repository. | NA

| `secret-auto-create` | Indicates whether or not a secret should be automatically created using the token generated in the GitHub application. This secret can then be used with private repositories. | `enabled`
| `secret-auto-create` | Indicates whether or not a secret should be automatically created using the token generated in the GitHub application. This secret can then be used with private repositories. | `enabled`

| `remote-tasks` | When enabled, allows remote tasks from pipeline run annotations. | `enabled`

@@ -43,6 +39,15 @@ Note that this `ConfigMap` setting does not affect the cleanups of a user's pipe

| `auto-configure-repo-namespace-template` | Configures a template to automatically generate the namespace for your new repository, if `auto-configure-new-github-repo` is enabled. | `{repo_name}-pipelines`

| `error-log-snippet` | Enables or disables the view of a log snippet for the failed tasks, with an error in a pipeline. You can disable this parameter in the case of data leakage from your pipeline. | `enabled`
| `error-log-snippet` | Enables or disables the view of a log snippet for the failed tasks, with an error in a pipeline. You can disable this parameter in the case of data leakage from your pipeline. | `true`

| `error-detection-from-container-logs` | Enables or disables the inspection of container logs to detect error messages and expose them as annotations on the pull request. This setting applies only if you are using the GitHub app. | `true`

| `error-detection-max-number-of-lines` | The maximum number of lines inspected in the container logs to search for error messages. Set to `-1` to inspect an unlimited number of lines. | 50

| `secret-github-app-token-scoped` | If set to `true`, the GitHub access token that {pac} generates using the GitHub app is scoped only to the repository from which {pac} fetches the pipeline definition. If set to `false`, you can use both the `TektonConfig` custom resource and the `Repository` custom resource to scope the token to additional repositories. | `true`

| `secret-github-app-scope-extra-repos` | Additional repositories for scoping the generated GitHub access token. |

|===
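
For reference, a minimal sketch of how several of these parameters might be set in the `TektonConfig` custom resource follows; the values shown are illustrative assumptions based on the defaults in the table, not recommended settings:

[source,yaml]
----
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  platforms:
    openshift:
      pipelinesAsCode:
        enable: true
        settings:
          application-name: "Pipelines as Code CI"
          error-log-snippet: "true"
          error-detection-from-container-logs: "true"
          error-detection-max-number-of-lines: "50"
          secret-github-app-token-scoped: "true"
----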
@@ -9,25 +9,28 @@
[role="_abstract"]
{pac} is installed in the `openshift-pipelines` namespace when you install the {pipelines-title} Operator. For more details, see _Installing {pipelines-shortname}_ in the _Additional resources_ section.

To disable the default installation of {pac} with the Operator, set the value of the `enable` parameter to `false` in the `TektonConfig` custom resource.
To disable the default installation of {pac} with the Operator, set the value of the `enable` parameter to `false` in the `TektonConfig` custom resource.

[source,yaml]
----
...
spec:
  platforms:
    openshift:
      pipelinesAsCode:
        enable: false
        settings:
          application-name: Pipelines as Code CI
          auto-configure-new-github-repo: "false"
          bitbucket-cloud-check-source-ip: "true"
          hub-catalog-name: tekton
          hub-url: https://api.hub.tekton.dev/v1
          remote-tasks: "true"
          secret-auto-create: "true"
...
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  platforms:
    openshift:
      pipelinesAsCode:
        enable: false
        settings:
          application-name: Pipelines as Code CI
          auto-configure-new-github-repo: "false"
          bitbucket-cloud-check-source-ip: "true"
          hub-catalog-name: tekton
          hub-url: https://api.hub.tekton.dev/v1
          remote-tasks: "true"
          secret-auto-create: "true"
# ...
----

Optionally, you can run the following command:
@@ -41,24 +44,27 @@ To enable the default installation of {pac} with the {pipelines-title} Operator,

[source,yaml]
----
...
spec:
  platforms:
    openshift:
      pipelinesAsCode:
        enable: true
        settings:
          application-name: Pipelines as Code CI
          auto-configure-new-github-repo: "false"
          bitbucket-cloud-check-source-ip: "true"
          hub-catalog-name: tekton
          hub-url: https://api.hub.tekton.dev/v1
          remote-tasks: "true"
          secret-auto-create: "true"
...
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  platforms:
    openshift:
      pipelinesAsCode:
        enable: true
        settings:
          application-name: Pipelines as Code CI
          auto-configure-new-github-repo: "false"
          bitbucket-cloud-check-source-ip: "true"
          hub-catalog-name: tekton
          hub-url: https://api.hub.tekton.dev/v1
          remote-tasks: "true"
          secret-auto-create: "true"
# ...
----

Optionally, you can run the following command:
Optionally, you can run the following command:

[source,terminal]
----
19 changes: 9 additions & 10 deletions modules/op-installing-pipelines-operator-in-web-console.adoc
@@ -17,12 +17,12 @@ If you have {pipelines-shortname} already installed on your cluster, the existin
If you manually changed your existing installation, such as changing the target namespace in the `config.operator.tekton.dev` CRD instance by making changes to the `resource name - cluster` field, then the upgrade path is not smooth. In such cases, the recommended workflow is to uninstall your installation and reinstall the {pipelines-title} Operator.
====

The {pipelines-title} Operator now provides the option to choose the components that you want to install by specifying profiles as part of the `TektonConfig` CR. The `TektonConfig` CR is automatically installed when the Operator is installed.
The {pipelines-title} Operator now provides the option to choose the components that you want to install by specifying profiles as part of the `TektonConfig` custom resource (CR). The `TektonConfig` CR is automatically installed when the Operator is installed.
The supported profiles are:

* Lite: This installs only Tekton Pipelines.
* Basic: This installs Tekton Pipelines and Tekton Triggers.
* All: This is the default profile used when the `TektonConfig` CR is installed. This profile installs all of the Tekton components: Tekton Pipelines, Tekton Triggers, Tekton Addons (which include `ClusterTasks`, `ClusterTriggerBindings`, `ConsoleCLIDownload`, `ConsoleQuickStart` and `ConsoleYAMLSample` resources).
* Basic: This installs Tekton Pipelines, Tekton Triggers, and Tekton Chains.
* All: This is the default profile used when the `TektonConfig` CR is installed. This profile installs all of the Tekton components, including Tekton Pipelines, Tekton Triggers, Tekton Chains, {pac}, and Tekton Addons. Tekton Addons includes the `ClusterTasks`, `ClusterTriggerBindings`, `ConsoleCLIDownload`, `ConsoleQuickStart`, and `ConsoleYAMLSample` resources.
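
For instance, a minimal sketch of selecting a profile through the `TektonConfig` CR could look like the following; treat it as an illustration rather than a prescribed configuration:

[source,yaml]
----
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  # Supported values are lite, basic, and all; all is the default.
  profile: all
----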

[discrete]
.Procedure
@@ -47,7 +47,7 @@ The supported profiles are:
[NOTE]
====
Starting with {product-title} 4.11, the `preview` and `stable` channels for installing and upgrading the {pipelines-title} Operator are not available. However, in {product-title} 4.10 and earlier versions, you can use the `preview` and `stable` channels for installing and upgrading the Operator.
====
====

. Click *Install*. You will see the Operator listed on the *Installed Operators* page.
+
@@ -74,7 +74,7 @@ $ oc get tektonconfig config
.Example output
----
NAME VERSION READY REASON
config 1.9.2 True
config 1.9.2 True
----
+
If the *READY* condition is *True*, the Operator and its components have been installed successfully.
@@ -89,12 +89,11 @@ $ oc get tektonpipeline,tektontrigger,tektonaddon,pac
.Example output
----
NAME VERSION READY REASON
tektonpipeline.operator.tekton.dev/pipeline v0.41.1 True
tektonpipeline.operator.tekton.dev/pipeline v0.41.1 True
NAME VERSION READY REASON
tektontrigger.operator.tekton.dev/trigger v0.22.2 True
tektontrigger.operator.tekton.dev/trigger v0.22.2 True
NAME VERSION READY REASON
tektonaddon.operator.tekton.dev/addon 1.9.2 True
tektonaddon.operator.tekton.dev/addon 1.9.2 True
NAME VERSION READY REASON
openshiftpipelinesascode.operator.tekton.dev/pipelines-as-code v0.15.5 True
openshiftpipelinesascode.operator.tekton.dev/pipelines-as-code v0.15.5 True
----

@@ -4,7 +4,7 @@

:_content-type: REFERENCE
[id="monitoring-pipeline-run-status-using-pipelines-as-code_{context}"]
= Monitoring pipeline run status using {pac}
= Monitoring pipeline run status using {pac}

[role="_abstract"]
Depending on the context and supported tools, you can monitor the status of a pipeline run in different ways.
@@ -25,20 +25,18 @@ When {pac} detects an error in one of the tasks of a pipeline, a small snippet c
[discrete]
.Annotations for log error snippets

In the {pac} config map, if you set the `error-detection-from-container-logs` parameter to `true`, {pac} detects the errors from the container logs and adds them as annotations on the pull request where the error occurred.
In the `TektonConfig` custom resource, in the `pipelinesAsCode.settings` spec, you can set the `error-detection-from-container-logs` parameter to `true`. In this case, {pac} detects the errors from the container logs and adds them as annotations on the pull request where the error occurred.

[IMPORTANT]
====
This feature is in Technology Preview.
====
:FeatureName: Adding annotations for log error snippets
include::snippets/technology-preview.adoc[]

Currently, {pac} supports only the simple cases where the error looks like `makefile` or `grep` output of the following format:
[source,yaml]
----
<filename>:<line>:<column>: <error message>
----

You can customize the regular expression used to detect the errors with the `error-detection-simple-regexp` field. The regular expression uses named groups to give flexibility on how to specify the matching. The groups needed to match are filename, line, and error. You can view the {pac} config map for the default regular expression.
You can customize the regular expression used to detect the errors with the `error-detection-simple-regexp` parameter. The regular expression uses named groups to give flexibility on how to specify the matching. The groups needed to match are `filename`, `line`, and `error`. You can view the {pac} config map for the default regular expression.
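
As an illustration only, and not the shipped default expression, a pattern with the required named groups that matches the `<filename>:<line>:<column>: <error message>` format might be configured as follows:

[source,yaml]
----
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  platforms:
    openshift:
      pipelinesAsCode:
        settings:
          error-detection-from-container-logs: "true"
          # The named groups filename, line, and error are required.
          # This pattern is an illustrative assumption, not the default value.
          error-detection-simple-regexp: '^(?P<filename>[^:]+):(?P<line>[0-9]+):[0-9]*:?\s*(?P<error>.*)'
----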

[NOTE]
====
@@ -51,7 +49,7 @@ For webhook, when the event is a pull request, the status is added as a comment

[discrete]
.Failures
If a namespace is matched to a `Repository` CRD, {pac} emits its failure log messages in the Kubernetes events inside the namespace.
If a namespace is matched to a `Repository` custom resource definition (CRD), {pac} emits its failure log messages in the Kubernetes events inside the namespace.
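
For example, assuming the `Repository` resource lives in the `target-namespace` namespace, you could list those events with a command similar to the following:

[source,terminal]
----
$ oc get events -n target-namespace --sort-by='.lastTimestamp'
----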

[discrete]
.Status associated with Repository CRD
@@ -73,4 +71,3 @@ Using the `tkn pac describe` command, you can extract the status of the runs ass
[discrete]
.Notifications
{pac} does not manage notifications. If you need to have notifications, use the `finally` feature of pipelines.

4 changes: 2 additions & 2 deletions modules/op-running-pipeline-run-using-pipelines-as-code.adoc
@@ -4,7 +4,7 @@

:_content-type: REFERENCE
[id="running-pipeline-run-using-pipelines-as-code_{context}"]
= Running a pipeline run using {pac}
= Running a pipeline run using {pac}

[role="_abstract"]
With the default configuration, {pac} runs any pipeline run in the `.tekton/` directory of the default branch of the repository when a specified event, such as a pull request or a push, occurs on the repository. For example, if a pipeline run on the default branch has the annotation `pipelinesascode.tekton.dev/on-event: "[pull_request]"`, it runs whenever a pull request event occurs.
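
As a minimal sketch, a pipeline run stored under `.tekton/` might carry annotations such as the following; the resource name, target branch, and task are illustrative assumptions:

[source,yaml]
----
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: pull-request-checks
  annotations:
    # Run on pull request events that target the main branch.
    pipelinesascode.tekton.dev/on-event: "[pull_request]"
    pipelinesascode.tekton.dev/on-target-branch: "[main]"
spec:
  pipelineSpec:
    tasks:
      - name: check
        taskSpec:
          steps:
            - name: run-checks
              image: registry.access.redhat.com/ubi8/ubi-minimal
              script: |
                echo "running checks"
----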
@@ -28,7 +28,7 @@ If the pull request author does not meet the requirements, another user who meet

[discrete]
.Pipeline run execution
A pipeline run always runs in the namespace of the `Repository` CRD associated with the repository that generated the event.
A pipeline run always runs in the namespace of the `Repository` custom resource definition (CRD) associated with the repository that generated the event.

You can observe the execution of your pipeline runs using the `tkn pac` CLI tool.
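
For example, assuming the runs execute in the `target-namespace` namespace, commands along the following lines show the status of the runs and their logs:

[source,terminal]
----
$ tkn pac describe -n target-namespace
$ tkn pac logs -n target-namespace
----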
