diff --git a/modules/op-release-notes-1-16.adoc b/modules/op-release-notes-1-16.adoc
new file mode 100644
index 000000000000..2b161890f4a0
--- /dev/null
+++ b/modules/op-release-notes-1-16.adoc
@@ -0,0 +1,615 @@
+// This module is included in the following assemblies:
+// * release_notes/op-release-notes-1-16.adoc
+
+:_mod-docs-content-type: REFERENCE
+[id="op-release-notes-1-16_{context}"]
+= Release notes for {pipelines-title} General Availability 1.16
+
+With this update, {pipelines-title} General Availability (GA) 1.16 is available on {OCP} 4.15 and later versions.
+
+[id="new-features-1-16_{context}"]
+== New features
+
+In addition to fixes and stability improvements, the following sections highlight what is new in {pipelines-title} 1.16:
+
+[id="pipelines-new-features-1-16_{context}"]
+=== Pipelines
+
+* With this update, you can configure the resync period for the pipelines controller. For every resync period, the controller reconciles all pipeline runs and task runs, regardless of events. The default resync period is 10 hours. If you have a large number of pipeline runs and task runs, a full reconciliation every 10 hours might consume too many resources. In this case, you can configure a longer resync period.
++
+.Example of configuring a resync period of 24 hours
+[source,yaml]
+----
+apiVersion: operator.tekton.dev/v1alpha1
+kind: TektonConfig
+metadata:
+  name: config
+spec:
+  pipeline:
+    options:
+      deployments:
+        tekton-pipelines-controller:
+          spec:
+            template:
+              spec:
+                containers:
+                - name: tekton-pipelines-controller
+                  args:
+                  - "-resync-period=24h"
+# ...
+----
+
+* With this update, when defining a pipeline, you can set the `onError` parameter of a task to `continue`. With this setting, if the task fails during execution of the pipeline, the pipeline logs the error and continues to the next task. By default, a pipeline fails if any of its tasks fails.
++
+.Example of setting the `onError` parameter. After the `task-that-fails` task fails, the `next-task` task executes
+[source,yaml]
+----
+apiVersion: tekton.dev/v1
+kind: Pipeline
+metadata:
+  name: example-onerror-pipeline
+spec:
+  tasks:
+  - name: task-that-fails
+    onError: continue
+    taskSpec:
+      steps:
+      - image: alpine
+        name: exit-with-1
+        script: |
+          exit 1
+  - name: next-task
+# ...
+----
+
+* With this update, if a task fails, a `finally` task can access the `reason` parameter, in addition to the `status` parameter, to determine whether the failure was allowed. You can access the `reason` parameter through `$(tasks.<task_name>.reason)`. If the failure is allowed, the `reason` is set to `FailureIgnored`. If the failure is not allowed, the `reason` is set to `Failed`. You can use this additional information to identify that a check failed but that the failure can be ignored.
+
+* With this update, larger results are supported through sidecar logs as an alternative to the default configuration, which limits results to 4 KB per step and 12 KB per task run. To enable larger results using sidecar logs, set the `pipeline.options.configMaps.feature-flags.data.results-from` parameter to `sidecar-logs` in the `TektonConfig` CR.
++
+.Example of enabling larger results using sidecar logs
+[source,yaml]
+----
+apiVersion: operator.tekton.dev/v1alpha1
+kind: TektonConfig
+metadata:
+  name: config
+spec:
+  pipeline:
+    options:
+      configMaps:
+        feature-flags:
+          data:
+            results-from: "sidecar-logs"
+# ...
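+# The default value of results-from is "termination-message", which stores
+# results in the container termination message and enforces the size limits
+# described in this bullet.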
+----
+
+* Before this update, parameter propagation was allowed in `PipelineRun` and `TaskRun` resources, but not in the `Pipeline` resource. With this update, you can propagate `params` in the `Pipeline` resource down to the inlined pipeline `tasks` and their inlined `steps`. Wherever a resource, such as `PipelineTask` or `StepAction`, is referenced, you must pass the parameters explicitly.
++
+.Example of using `params` within a pipeline
+[source,yaml]
+----
+apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
+kind: Pipeline
+metadata:
+  name: pipeline-propagated-params
+spec:
+  params:
+  - name: HELLO
+    default: "Hello World!"
+  - name: BYE
+    default: "Bye World!"
+  tasks:
+  - name: echo-hello
+    taskSpec:
+      steps:
+      - name: echo
+        image: ubuntu
+        script: |
+          #!/usr/bin/env bash
+          echo "$(params.HELLO)"
+  - name: echo-bye
+    taskSpec:
+      steps:
+      - name: echo-action
+        ref:
+          name: step-action-echo
+        params:
+        - name: msg
+          value: "$(params.BYE)"
+# ...
+----
+
+* With this update, you can use the task run or pipeline run definition to configure the compute resources for steps and sidecars in a task.
++
+.Example task for configuring resources
+[source,yaml]
+----
+apiVersion: tekton.dev/v1
+kind: Task
+metadata:
+  name: side-step
+spec:
+  steps:
+  - name: test
+    image: docker.io/library/alpine:latest
+  sidecars:
+  - name: side
+    image: docker.io/linuxcontainers/alpine:latest
+# ...
+----
++
+.Example `TaskRun` definition that configures the resources
+[source,yaml]
+----
+apiVersion: tekton.dev/v1
+kind: TaskRun
+metadata:
+  name: test-sidestep
+spec:
+  taskRef:
+    name: side-step
+  stepSpecs:
+  - name: test
+    computeResources:
+      requests:
+        memory: 1Gi
+  sidecarSpecs:
+  - name: side
+    computeResources:
+      requests:
+        cpu: 100m
+      limits:
+        cpu: 500m
+# ...
+----
+
+[id="operator-new-features-1-16_{context}"]
+=== Operator
+
+* With this update, {pipelines-shortname} includes the `git-clone` `StepAction` definition for a step that clones a Git repository. Use the HTTP resolver to reference this definition. The URL for the definition is `https://raw.githubusercontent.com/openshift-pipelines/tektoncd-catalog/p/stepactions/stepaction-git-clone/0.4.1/stepaction-git-clone.yaml`. The `StepAction` definition is also installed in the `openshift-pipelines` namespace; however, you cannot reference it there, because the cluster resolver does not support `StepAction` definitions.
++
+.Example usage of the `git-clone` step action in a task
+[source,yaml,subs="attributes+"]
+----
+apiVersion: tekton.dev/v1
+kind: Task
+metadata:
+  name: clone-repo-anon
+spec:
+  params:
+  - name: url
+    description: The URL of the Git repository to clone
+  workspaces:
+  - name: output
+    description: The Git repository is cloned onto the volume backing this workspace.
+  steps:
+  - name: clone-repo-anon-step
+    ref:
+      resolver: http
+      params:
+      - name: url
+        value: https://raw.githubusercontent.com/openshift-pipelines/tektoncd-catalog/p/stepactions/stepaction-git-clone/0.4.1/stepaction-git-clone.yaml
+    params:
+    - name: URL
+      value: $(params.url)
+    - name: OUTPUT_PATH
+      value: $(workspaces.output.path)
+# ...
+----
+
+* With this update, the `openshift-pipelines` namespace includes versioned tasks alongside standard tasks. For example, there is a `buildah` standard task and a `buildah-1-16-0` versioned task. While the standard task might be updated in subsequent releases, a versioned task remains the same as it was in the specified version, except for error corrections.
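++
+For example, you can pin a pipeline task to the versioned task by referencing it with the cluster resolver. The following is a minimal sketch: the `kind`, `name`, and `namespace` parameters are the standard cluster resolver parameters, and the surrounding pipeline definition is illustrative only.
++
+.Example of referencing the `buildah-1-16-0` versioned task by using the cluster resolver
+[source,yaml]
+----
+apiVersion: tekton.dev/v1
+kind: Pipeline
+metadata:
+  name: build-image # illustrative pipeline name
+spec:
+  tasks:
+  - name: build
+    taskRef:
+      resolver: cluster
+      params:
+      - name: kind
+        value: task
+      - name: name
+        value: buildah-1-16-0
+      - name: namespace
+        value: openshift-pipelines
+# ...
+----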
+
+* With this update, you can configure the `FailurePolicy`, `TimeoutSeconds`, and `SideEffects` options for webhooks for several components of {pipelines-shortname} by using the `TektonConfig` CR. The following example shows the configuration for the `pipeline` component. You can use a similar configuration for webhooks in the `triggers`, `pipelinesAsCode`, and `hub` components.
++
+.Example configuration of webhook options for the `pipeline` component
+[source,yaml]
+----
+apiVersion: operator.tekton.dev/v1alpha1
+kind: TektonConfig
+metadata:
+  name: config
+spec:
+  pipeline:
+    options:
+      webhookConfigurationOptions:
+        validation.webhook.pipeline.tekton.dev:
+          failurePolicy: Fail
+          timeoutSeconds: 20
+          sideEffects: None
+        webhook.pipeline.tekton.dev:
+          failurePolicy: Fail
+          timeoutSeconds: 20
+          sideEffects: None
+# ...
+----
+
+[id="triggers-new-features-1-16_{context}"]
+=== Triggers
+
+* With this update, the `readOnlyRootFilesystem` parameter for the triggers controller, webhook, Core Interceptor, and event listener is set to `true` by default to improve security and avoid being flagged by security scanners.
+
+* With this update, you can configure {pipelines-shortname} triggers to run event listeners as a non-root user within the container. To enable this option, set the parameters in the `TektonConfig` CR as shown in the following example:
++
+.Example of configuring trigger event listeners to run as non-root
+[source,yaml]
+----
+apiVersion: operator.tekton.dev/v1alpha1
+kind: TektonConfig
+metadata:
+  name: config
+spec:
+  trigger:
+    options:
+      disabled: false
+      configMaps:
+        config-defaults-triggers:
+          data:
+            default-run-as-non-root: "true"
+            default-run-as-user: "65532"
+            default-run-as-group: "65532"
+            default-fs-group: "65532"
+# ...
+----
++
+Optionally, you can set the values of the `default-run-as-user` and `default-run-as-group` parameters to configure the numeric user ID and group ID for running the event listeners in containers. {pipelines-shortname} sets these values in the pod security context and container security context for event listeners. If you use empty values, the default user ID and group ID of `65532` are used.
++
+You can also set the `default-fs-group` parameter to define the `fsGroup` value for the pod security context, which is the group ID that the container processes use for the file system. If you use an empty value, the default group ID of `65532` is used.
+
+* With this update, in triggers, the `EventListener` pod template now includes `securityContext` settings. Under these settings, you can configure the `seccompProfile`, `runAsUser`, `runAsGroup`, and `fsGroup` parameters when the `el-security-context` flag is set to `true`.
+
+[id="web-console-new-features-1-16_{context}"]
+=== Web console
+
+* Before this update, when using the web console, you could not see the timestamp for the logs that {pipelines-shortname} created. With this update, the web console includes timestamps for all {pipelines-shortname} logs.
+
+* With this update, the pipeline run and task run *list* pages in the web console now have a filter for the data source, with values such as `k8s` and `TektonResults API`.
+
+* Before this update, when using the web console in the *Developer* perspective, you could not specify the timeout for pipeline runs. With this update, you can set a timeout while starting the pipeline run in the *Developer* perspective of the web console.
+
+* Before this update, the *Overview* pipeline dashboard only appeared when {tekton-results} was enabled.
+All the statistics came from the Results API only. With this update, the *Overview* pipeline dashboard is available regardless of whether {tekton-results} is enabled. When {tekton-results} is disabled, you can use the dashboard to see the statistics for objects in the cluster.
+
+* With this update, the sample pipelines displayed in the web console use the `v1` version of the {pipelines-shortname} API.
+
+[id="cli-new-features-1-16_{context}"]
+=== CLI
+
+* With this update, you can use the `tkn customrun delete <custom_run_name>` command to delete one or more custom runs.
+
+* With this update, when you run a `tkn list` command with the `-o yaml` flag, the listed resources are separated with `---` separators to enhance the readability of the output.
+
+[id="pac-new-features-1-16_{context}"]
+=== {pac}
+
+* With this update, if you create two `PipelineRun` definitions with the same name, {pac} logs an error and does not run either of these pipeline runs.
+
+* With this update, the {pac} `pipelines_as_code_pipelinerun_count` metric allows filtering of the `PipelineRun` count by repository or namespace.
+
+* With this update, the `readOnlyRootFilesystem` security context for the {pac} controller, webhook, and watcher is set to `true` by default to increase security and avoid being flagged by security scanners.
+
+[id="tekton-chains-new-features-1-16_{context}"]
+=== {tekton-chains}
+
+* With this update, when using `docdb` storage in {tekton-chains}, you can configure the `MONGO_SERVER_URL` value directly in the `TektonConfig` CR as the `storage.docdb.mongo-server-url` setting. Alternatively, you can provide this value using a secret and set the `storage.docdb.mongo-server-url-dir` setting to the directory where the `MONGO_SERVER_URL` file is located.
++
+.Example of creating a secret with the `MONGO_SERVER_URL` value
+[source,terminal]
+----
+$ oc create secret generic mongo-url -n tekton-chains \
+  --from-file=MONGO_SERVER_URL=/home/user/MONGO_SERVER_URL
+----
++
+.Example of configuring the `MONGO_SERVER_URL` value using a secret
+[source,yaml]
+----
+apiVersion: operator.tekton.dev/v1alpha1
+kind: TektonConfig
+metadata:
+  name: config
+spec:
+  chain:
+    disabled: false
+    storage.docdb.mongo-server-url-dir: /tmp/mongo-url
+    options:
+      deployments:
+        tekton-chains-controller:
+          spec:
+            template:
+              spec:
+                containers:
+                - name: tekton-chains-controller
+                  volumeMounts:
+                  - mountPath: /tmp/mongo-url
+                    name: mongo-url
+                volumes:
+                - name: mongo-url
+                  secret:
+                    secretName: mongo-url
+# ...
+----
+
+* With this update, when using KMS signing in {tekton-chains}, instead of providing the KMS authentication token value directly in the configuration, you can provide the token value as a secret by using the `signers.kms.auth.token-path` setting.
++
+To create a KMS token secret, enter the following command:
++
+[source,terminal]
+----
+$ oc create secret generic <secret_name> -n tekton-chains \ <1>
+  --from-file=KMS_AUTH_TOKEN=/home/user/KMS_AUTH_TOKEN
+----
+<1> Replace `<secret_name>` with any name. The following example uses a KMS secret called `kms-secrets`.
++
+.Example configuration of the KMS token value using a secret called `kms-secrets`
+[source,yaml]
+----
+apiVersion: operator.tekton.dev/v1alpha1
+kind: TektonConfig
+metadata:
+  name: config
+spec:
+  chain:
+    disabled: false
+    signers.kms.auth.token-path: /etc/kms-secrets/KMS_AUTH_TOKEN
+    options:
+      deployments:
+        tekton-chains-controller:
+          spec:
+            template:
+              spec:
+                containers:
+                - name: tekton-chains-controller
+                  volumeMounts:
+                  - mountPath: /etc/kms-secrets
+                    name: kms-secrets
+                volumes:
+                - name: kms-secrets
+                  secret:
+                    secretName: kms-secrets
+# ...
+----
+
+* With this update, you can configure a list of namespaces as an argument to the {tekton-chains} controller. If you provide this list, {tekton-chains} watches pipeline runs and task runs only in the specified namespaces. If you do not provide this list, {tekton-chains} watches pipeline runs and task runs in all namespaces.
++
+.Example configuration for watching only the `dev` and `test` namespaces
+[source,yaml]
+----
+apiVersion: operator.tekton.dev/v1alpha1
+kind: TektonConfig
+metadata:
+  name: config
+spec:
+  chain:
+    disabled: false
+    options:
+      deployments:
+        tekton-chains-controller:
+          spec:
+            template:
+              spec:
+                containers:
+                - args:
+                  - --namespace=dev,test
+                  name: tekton-chains-controller
+# ...
+----
+
+[id="tekton-results-new-features-1-16_{context}"]
+=== {tekton-results}
+
+* Before this update, {tekton-results} used the `v1beta1` API format to store `TaskRun` and `PipelineRun` object records. With this update, {tekton-results} uses the `v1` API format to store `TaskRun` and `PipelineRun` object records.
+
+* With this update, {tekton-results} can automatically convert existing records to the `v1` API format. To enable this conversion, set parameters in the `TektonResult` CR as shown in the following example:
++
+.Example of configuring {tekton-results} to convert existing records to the `v1` API format
+[source,yaml]
+----
+apiVersion: operator.tekton.dev/v1alpha1
+kind: TektonResult
+metadata:
+  name: result
+spec:
+  options:
+    deployments:
+      tekton-results-api:
+        spec:
+          template:
+            spec:
+              containers:
+              - name: api
+                env:
+                - name: CONVERTER_ENABLE
+                  value: "true"
+                - name: CONVERTER_DB_LIMIT
+                  value: "256" #<1>
+# ...
+----
+<1> In the `CONVERTER_DB_LIMIT` variable, set the number of records to convert at the same time in a single transaction.
+
+* With this update, you can configure automatic pruning of the {tekton-results} database. You can specify the number of days for which records must be stored. You can also specify the schedule for running the pruner job that removes older records. To set these parameters, edit the `TektonResult` CR, as shown in the following example:
++
+.Example of configuring automatic pruning of the {tekton-results} database
+[source,yaml]
+----
+apiVersion: operator.tekton.dev/v1alpha1
+kind: TektonResult
+metadata:
+  name: result
+spec:
+  options:
+    configMaps:
+      config-results-retention-policy:
+        data:
+          runAt: "3 5 * * 0" #<1>
+          maxRetention: "30" #<2>
+# ...
+----
+<1> Specify, in the cron format, when to run the pruning job in the database. This example runs the job at 5:03 AM every Sunday.
+<2> Specify the number of days to keep the data in the database. This example retains the data for 30 days.
+
+* With this update, {tekton-results} supports fetching forwarded logs from third-party logging APIs. You can enable the logging API through the `TektonResult` CR by setting `logs_api` to `true` and `logs_type` to `Loki`.
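++
+The following minimal sketch shows these settings. It assumes that the `logs_api` and `logs_type` fields are set directly under the `spec` field of the `TektonResult` CR; verify the placement against your installed version.
++
+.Example of enabling the logging API for forwarded logs
+[source,yaml]
+----
+apiVersion: operator.tekton.dev/v1alpha1
+kind: TektonResult
+metadata:
+  name: result
+spec:
+  logs_api: true # assumed field placement under spec
+  logs_type: Loki
+# ...
+----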
+
+* With this update, you can configure {tekton-results} to store event logs for pipelines and tasks. To enable storage of event logs, edit the `TektonResult` CR, as shown in the following example:
++
+.Example of configuring {tekton-results} to store event logs
+[source,yaml]
+----
+apiVersion: operator.tekton.dev/v1alpha1
+kind: TektonResult
+metadata:
+  name: result
+spec:
+  options:
+    deployments:
+      tekton-results-watcher:
+        spec:
+          template:
+            spec:
+              containers:
+              - name: watcher
+                args:
+                - "--store_event=true"
+# ...
+----
+
+* With this update, you can configure {tekton-results} to use the {OCP} Cluster Log Forwarder to store all log data in a LokiStack instance, instead of placing it directly on a storage volume. This option enables scaling to a higher rate of pipeline runs and task runs.
++
+To configure {tekton-results} to use the {OCP} Cluster Log Forwarder to store all log data in a LokiStack instance, you must deploy LokiStack in your cluster by using the Loki Operator and also install the OpenShift Logging Operator. Then you must create a `ClusterLogForwarder` CR in the `openshift-logging` namespace by using one of the following YAML manifests:
++
+.YAML manifest for the `ClusterLogForwarder` CR if you installed OpenShift Logging version 6
+[source,yaml]
+----
+apiVersion: observability.openshift.io/v1
+kind: ClusterLogForwarder
+metadata:
+  name: collector
+  namespace: openshift-logging
+spec:
+  inputs:
+  - application:
+      selector:
+        matchLabels:
+          app.kubernetes.io/managed-by: tekton-pipelines
+    name: only-tekton
+    type: application
+  managementState: Managed
+  outputs:
+  - lokiStack:
+      labelKeys:
+        application:
+          ignoreGlobal: true
+          labelKeys:
+          - log_type
+          - kubernetes.namespace_name
+          - openshift_cluster_id
+      authentication:
+        token:
+          from: serviceAccount
+      target:
+        name: logging-loki
+        namespace: openshift-logging
+    name: default-lokistack
+    tls:
+      ca:
+        configMapName: openshift-service-ca.crt
+        key: service-ca.crt
+    type: lokiStack
+  pipelines:
+  - inputRefs:
+    - only-tekton
+    name: default-logstore
+    outputRefs:
+    - default-lokistack
+  serviceAccount:
+    name: collector
+# ...
+----
++
+.YAML manifest for the `ClusterLogForwarder` CR if you installed OpenShift Logging version 5
+[source,yaml]
+----
+apiVersion: "logging.openshift.io/v1"
+kind: ClusterLogForwarder
+metadata:
+  name: instance
+  namespace: openshift-logging
+spec:
+  inputs:
+  - name: only-tekton
+    application:
+      selector:
+        matchLabels:
+          app.kubernetes.io/managed-by: tekton-pipelines
+  pipelines:
+  - name: enable-default-log-store
+    inputRefs: [ only-tekton ]
+    outputRefs: [ default ]
+# ...
+----
++
+Finally, in the `TektonResult` CR in the `openshift-pipelines` namespace, set the following additional parameters:
++
+--
+* `loki_stack_name`: The name of the `LokiStack` CR, typically `logging-loki`.
+* `loki_stack_namespace`: The name of the namespace where LokiStack is deployed, typically `openshift-logging`.
+--
++
+.Example of configuring LokiStack log forwarding in the `TektonResult` CR
+[source,yaml]
+----
+apiVersion: operator.tekton.dev/v1alpha1
+kind: TektonResult
+metadata:
+  name: result
+spec:
+  targetNamespace: tekton-pipelines
+# ...
+  loki_stack_name: logging-loki
+  loki_stack_namespace: openshift-logging
+# ...
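+# Both values must match the LokiStack CR that the ClusterLogForwarder
+# configuration shown earlier sends logs to.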
+----
+
+[id="breaking-changes-1-16_{context}"]
+== Breaking changes
+
+* With this update, the metric name for the `EventListener` object for triggers that counts received events is changed from `eventlistener_event_count` to `eventlistener_event_received_count`.
+
+[id="known-issues-1-16_{context}"]
+== Known issues
+
+* The `jib-maven` `ClusterTask` does not work if you are using {OCP} version 4.16 or later.
+
+[id="fixed-issues-1-16_{context}"]
+== Fixed issues
+
+* Before this update, when you uninstalled {tekton-hub} by deleting the `TektonHub` CR, the pod of the `hub-db-migration` job was not deleted. With this update, uninstalling {tekton-hub} deletes the pod.
+
+* Before this update, when you used {tekton-results} to store pod logs from pipelines and tasks, the operation to store the logs sometimes failed. The logs included failures of the `UpdateLog` action with the `canceled context` error. With this update, the operation completes correctly.
+
+* Before this update, when you passed a parameter value to a pipeline or task and the value included more than one variable with both full and short reference formats, for example, `$(tasks.task-name.results.variable1) + $(variable2)`, {pipelines-shortname} did not interpret the value correctly. The pipeline run or task run could stop execution and the pipelines controller could crash. With this update, {pipelines-shortname} interprets the value correctly and the pipeline run or task run completes.
+
+* Before this update, {tekton-chains} failed to generate correct attestations when a task run included multiple tasks with the same name. For instance, when using a matrix of tasks, the attestation was generated only for the first image. With this update, {tekton-chains} generates attestations for all tasks within the task run, ensuring complete coverage.
+
+* Before this update, when you used the `skopeo-copy` task defined in the {pipelines-shortname} installation namespace and set its `VERBOSE` parameter to `false`, the task failed. With this update, the task completes normally.
+
+* Before this update, when using {pac}, if you set the `concurrency_limit` spec in the global `Repository` CR named `pipelines-as-code` in the `openshift-pipelines` or `pipelines-as-code` namespace, which provides default settings for all `Repository` CRs, the {pac} watcher crashed. With this update, the {pac} watcher operates correctly with this setting.
+
+* Before this update, all tasks in {pipelines-shortname} included an extra step compared to the cluster tasks of the same name that were available in previous versions of {pipelines-shortname}. This extra step increased the load on the cluster. With this update, the tasks no longer include the extra step, because its actions are integrated into the first step.
+
+* Before this update, when you used one of the `s2i-*` tasks defined in the {pipelines-shortname} installation namespace and set its `CONTEXT` parameter, the task did not interpret the parameter correctly and failed. With this update, the task interprets the `CONTEXT` parameter correctly and completes successfully.
+
+* Before this update, in {tekton-chains}, the in-toto provenance metadata (the `URI` and `Digest` values) was incomplete. The values contained only the information of remote `Pipeline` and `Task` resources, but were missing the information of the remote `StepAction` resource.
+With this update, the provenance of the remote `StepAction` resource is recorded in the task run status and inserted into the in-toto provenance, which results in complete in-toto provenance metadata.
+
+* Before this update, you could modify some of the parameters in the `spec` field of the `PipelineRun` and `TaskRun` resources that should not be modifiable after the resources were created. With this update, after the pipeline run and task run are created, you can modify only the allowed fields, such as the `status` and `statusMessage` fields.
+
+* Before this update, if a step action parameter was an `array` type but a `string` value was passed in a task, there was no error indicating the inconsistent parameter types, and the default parameter value was used instead. With this update, an error is logged to indicate the inconsistent values: `invalid parameter substitution: %s. Please check the types of the default value and the passed value`.
+
+* Before this update, task runs and pipeline runs could be deleted by an external pruner while their logs were streamed through the watcher. With this update, a finalizer is added to {tekton-results} for `TaskRun` and `PipelineRun` objects to ensure that the runs are not deleted until they are stored as records or until the deadline has passed, which is calculated as the completion time plus the `store_deadline` time. The finalizer does not prevent deletion if legacy log streaming from the watcher or the pruner is enabled.
+
+* Before this update, the web console supported the `v1beta1` API format to display the `TaskRun` and `PipelineRun` object records that are stored in {tekton-results}. With this update, the console supports the `v1` API format to display `TaskRun` and `PipelineRun` object records stored in {tekton-results}.
+
+* Before this update, when using {pac}, if different `PipelineRun` definitions used the same task name but different versions, for example, when fetching tasks from {tekton-hub}, the wrong version was sometimes triggered, because {pac} used the same task version for all pipeline runs. With this update, {pac} triggers the correct version of the referenced task.
+
+* Before this update, when you used a resolver to reference remote pipelines or tasks, transient communication errors caused immediate failure in retrieving those remote references. With this update, the resolver requeues and later retries the retrieval.
+
+* Before this update, {tekton-results} could use an increasing amount of memory when storing log information for pipeline runs and task runs. This update fixes the memory leak, and {tekton-results} uses a normal amount of memory.
+
+* Before this update, when using {pac}, if your `.tekton` directory contained a pipeline that was not referenced by any `PipelineRun` definition triggered in the event, {pac} attempted to fetch all the required tasks for that pipeline even though it was not run. With this update, {pac} does not try to resolve pipelines that are not referenced in any pipeline run triggered by the current event.
\ No newline at end of file
diff --git a/modules/op-tkn-pipelines-compatibility-support-matrix.adoc b/modules/op-tkn-pipelines-compatibility-support-matrix.adoc
index b12157355ccc..e8905cb3880d 100644
--- a/modules/op-tkn-pipelines-compatibility-support-matrix.adoc
+++ b/modules/op-tkn-pipelines-compatibility-support-matrix.adoc
@@ -1,3 +1,7 @@
+// This module is included in the following assemblies:
+// * release_notes/op-release-notes-1-16.adoc
+
+:_mod-docs-content-type: REFERENCE
 [id="compatibility-support-matrix_{context}"]
 = Compatibility and support matrix
 
@@ -19,18 +23,12 @@ GA:: General Availability
 
 | Operator | Pipelines | Triggers | CLI | Chains | Hub | {pac} | Results | Manual Approval Gate | {OCP} | Support Level
 
+|1.16 | 0.62.x | 0.29.x | 0.38.x | 0.22.x (GA) | 1.18.x (TP) | 0.28.x (GA) | 0.12.x (TP) | 0.3.x (TP) | 4.15, 4.16, 4.17 | GA
+
 |1.15 | 0.59.x | 0.27.x | 0.37.x | 0.20.x (GA) | 1.17.x (TP) | 0.27.x (GA) | 0.10.x (TP) | 0.2.x (TP) | 4.14, 4.15, 4.16 | GA
 
 |1.14 | 0.56.x | 0.26.x | 0.35.x | 0.20.x (GA) | 1.16.x (TP) | 0.24.x (GA) | 0.9.x (TP) | NA | 4.12, 4.13, 4.14, 4.15, 4.16 | GA
 
-|1.13 | 0.53.x | 0.25.x | 0.33.x | 0.19.x (GA) | 1.15.x (TP) | 0.22.x (GA) | 0.8.x (TP) | NA | 4.12, 4.13, 4.14, 4.15 | GA
-
-|1.12 | 0.50.x | 0.25.x | 0.32.x | 0.17.x (GA) | 1.14.x (TP) | 0.21.x (GA) | 0.8.x (TP) | NA | 4.12, 4.13, 4.14 | GA
-
-|1.11 | 0.47.x | 0.24.x | 0.31.x | 0.16.x (GA) | 1.13.x (TP) | 0.19.x (GA) | 0.6.x (TP) | NA | 4.12, 4.13, 4.14 | GA
-
-|1.10 | 0.44.x | 0.23.x | 0.30.x | 0.15.x (TP) | 1.12.x (TP) | 0.17.x (GA) | NA | NA | 4.10, 4.11, 4.12, 4.13 | GA
-
 |===
 
 For questions and feedback, you can send an email to the product team at pipelines-interest@redhat.com.
diff --git a/release_notes/op-release-notes-1-16.adoc b/release_notes/op-release-notes-1-16.adoc
index d6430477bfa8..ee493299507c 100644
--- a/release_notes/op-release-notes-1-16.adoc
+++ b/release_notes/op-release-notes-1-16.adoc
@@ -32,5 +32,5 @@ include::modules/op-tkn-pipelines-compatibility-support-matrix.adoc[leveloffset=
 include::modules/making-open-source-more-inclusive.adoc[leveloffset=+1]
 
 // Release notes for Red Hat OpenShift Pipelines 1.16.0
-// TBD
+include::modules/op-release-notes-1-16.adoc[leveloffset=+1]