Merged
1 change: 1 addition & 0 deletions _attributes/common-attributes.adoc
@@ -14,6 +14,7 @@
:pipelines-shortname: OpenShift Pipelines
:pipelines-ver: pipelines-1.11
:tekton-chains: Tekton Chains
:tekton-results: Tekton Results
:tekton-hub: Tekton Hub
:artifact-hub: Artifact Hub
:pac: Pipelines as Code
2 changes: 2 additions & 0 deletions _topic_maps/_topic_map.yml
@@ -68,6 +68,8 @@ Name: Observability in OpenShift Pipelines
Dir: records
Distros: openshift-pipelines
Topics:
- Name: Using Tekton Results for OpenShift Pipelines observability
File: using-tekton-results-for-openshift-pipelines-observability
- Name: Viewing pipeline logs using the OpenShift Logging Operator
File: viewing-pipeline-logs-using-the-openshift-logging-operator
---
94 changes: 94 additions & 0 deletions modules/op-installing-results.adoc
@@ -0,0 +1,94 @@
// This module is included in the following assembly:
//
// * cicd/pipelines/using-tekton-results-for-openshift-pipelines-observability.adoc

:_content-type: PROCEDURE
[id="installing-results_{context}"]
= Installing {tekton-results}

[role="_abstract"]
To install {tekton-results}, you must provide the required resources and then create and apply a `TektonResult` custom resource (CR). The {pipelines-shortname} Operator installs the {tekton-results} services when you apply this CR.

.Prerequisites

* You installed {pipelines-shortname} using the Operator.
* You prepared a secret with the SSL certificate.
* You prepared storage for the logging information.
* You prepared a secret with the database credentials.

.Procedure

. Create the resource definition file named `result.yaml` based on the following example. You can adjust the settings as necessary.
+
[source,yaml]
----
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonResult
metadata:
  name: result
spec:
  targetNamespace: openshift-pipelines
  logs_api: true
  log_level: debug
  db_port: 5432
  db_host: tekton-results-postgres-service.openshift-pipelines.svc.cluster.local
  logs_path: /logs
  logs_type: File
  logs_buffer_size: 32768
  auth_disable: true
  tls_hostname_override: tekton-results-api-service.openshift-pipelines.svc.cluster.local
  db_enable_auto_migration: true
  server_port: 8080
  prometheus_port: 9090
----

. Add the configuration for logging information storage to this file:
** If you configured a persistent volume claim (PVC), add the following line to provide the name of the PVC:
+
[source,yaml]
----
logging_pvc_name: tekton-logs
----
** If you configured Google Cloud Storage, add the following lines to provide the secret name, the credentials file name, and the name of the Google Cloud Storage bucket:
+
[source,yaml]
----
gcs_creds_secret_name: gcs-credentials
gcs_creds_secret_key: application_default_credentials.json # <1>
gcs_bucket_name: bucket-name # <2>
----
<1> Provide the name, without the path, of the application credentials file that you used when creating the secret.
<2> Provide the name of a bucket in Google Cloud Storage. {tekton-results} uses this bucket to store logging information for pipeline runs and task runs.
** If you configured S3 bucket storage, add the following line to provide the name of the S3 secret:
+
[source,yaml]
----
secret_name: s3-credentials
----

. Optional: If you want to use an external PostgreSQL database server to store {tekton-results} information, add the following lines to the file:
+
[source,yaml]
----
db_host: postgres.internal.example.com # <1>
db_port: 5432 # <2>
is_external_db: true
----
<1> The host name for the PostgreSQL server.
<2> The port for the PostgreSQL server.

. Apply the resource definition by entering the following command:
+
[source,terminal]
----
$ oc apply -n openshift-pipelines -f result.yaml
----

. Expose the route for the {tekton-results} service API by entering the following command:
+
[source,terminal]
----
$ oc create route -n openshift-pipelines \
passthrough tekton-results-api-service \
--service=tekton-results-api-service --port=8080
----
54 changes: 54 additions & 0 deletions modules/op-prepare-opc-for-results.adoc
@@ -0,0 +1,54 @@
// This module is included in the following assembly:
//
// * records/using-tekton-results-for-openshift-pipelines-observability.adoc

:_content-type: PROCEDURE
[id="prepare-opc-for-results_{context}"]
= Preparing the opc utility environment for querying {tekton-results}

[role="_abstract"]
Before you can query {tekton-results}, you must prepare the environment for the `opc` utility.

.Prerequisites

* You installed {tekton-results}.
* You installed the `opc` utility.

.Procedure

. Set the `RESULTS_API` environment variable to the route to the {tekton-results} API by entering the following command:
+
[source,terminal]
----
$ export RESULTS_API=$(oc get route tekton-results-api-service -n openshift-pipelines --no-headers -o custom-columns=":spec.host"):443
----

. Create an authentication token for the {tekton-results} API by entering the following command:
+
[source,terminal]
----
$ oc create token <service_account>
----
+
Save the string that this command outputs.

. Optional: Create the `~/.config/tkn/results.yaml` file for automatic authentication with the {tekton-results} API. The file must have the following contents:
+
[source,yaml]
----
address: <tekton_results_route> # <1>
token: <authentication_token> # <2>
ssl:
  roots_file_path: /home/example/cert.pem # <3>
  server_name_override: tekton-results-api-service.openshift-pipelines.svc.cluster.local # <4>
service_account:
  namespace: service_acc_1 # <5>
  name: service_acc_1 # <5>
----
<1> The route to the {tekton-results} API. Use the same value as you set for `RESULTS_API`.
<2> The authentication token that was created by the `oc create token` command. If you provide this token, it overrides the `service_account` setting and `opc` uses this token to authenticate.
<3> The location of the file with the SSL certificate that you configured for the API endpoint.
<4> If you configured a custom target namespace for {pipelines-shortname}, replace `openshift-pipelines` with the name of this namespace.
<5> The namespace and the name of a service account for authenticating with the {tekton-results} API. If you provided the authentication token, you do not need to provide the `service_account` parameters.
+
Alternatively, if you do not create the `~/.config/tkn/results.yaml` file, you can pass the token to each `opc` command by using the `--authtoken` option.
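The `RESULTS_API` value from step 1 is simply the route host name with the HTTPS port `443` appended. The following sketch shows the resulting shape, using a hypothetical route host in place of the value that `oc get route` returns on your cluster:

```shell
# Hypothetical route host, as returned by: oc get route ... -o custom-columns=":spec.host"
route_host="tekton-results-api-service-openshift-pipelines.apps.example.com"

# Append the HTTPS port, matching what the export command in step 1 produces
export RESULTS_API="${route_host}:443"
echo "${RESULTS_API}"
```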
98 changes: 98 additions & 0 deletions modules/op-query-results-name.adoc
@@ -0,0 +1,98 @@
// This module is included in the following assembly:
//
// * records/using-tekton-results-for-openshift-pipelines-observability.adoc

:_content-type: PROCEDURE
[id="query-results-name_{context}"]
= Querying for results and records by name

[role="_abstract"]
You can list and query results and records by using their names.

.Prerequisites

* You installed {tekton-results}.
* You installed the `opc` utility and prepared its environment to query {tekton-results}.
* You installed the `jq` package.

.Procedure

. List the names of all results that correspond to the pipeline runs and task runs created in a namespace by entering the following command:
+
[source,terminal]
----
$ opc results list --addr ${RESULTS_API} <namespace_name>
----
+
.Example command
[source,terminal]
----
$ opc results list --addr ${RESULTS_API} results-testing
----
+
.Example output
[source,text]
----
Name Start Update
results-testing/results/04e2fbf2-8653-405f-bc42-a262bcf02bed 2023-06-29 02:49:53 +0530 IST 2023-06-29 02:50:05 +0530 IST
results-testing/results/ad7eb937-90cc-4510-8380-defe51ad793f 2023-06-29 02:49:38 +0530 IST 2023-06-29 02:50:06 +0530 IST
results-testing/results/d064ce6e-d851-4b4e-8db4-7605a23671e4 2023-06-29 02:49:45 +0530 IST 2023-06-29 02:49:56 +0530 IST
----

. List the names of all records in a result by entering the following command:
+
[source,terminal]
----
$ opc results records list --addr ${RESULTS_API} <result_name>
----
+
.Example command
[source,terminal]
----
$ opc results records list --addr ${RESULTS_API} results-testing/results/04e2fbf2-8653-405f-bc42-a262bcf02bed
----
+
.Example output
[source,text]
----
Name Type Start Update
results-testing/results/04e2fbf2-8653-405f-bc42-a262bcf02bed/records/e9c736db-5665-441f-922f-7c1d65c9d621 tekton.dev/v1beta1.TaskRun 2023-06-29 02:49:53 +0530 IST 2023-06-29 02:49:57 +0530 IST
results-testing/results/04e2fbf2-8653-405f-bc42-a262bcf02bed/records/5de23a76-a12b-3a72-8a6a-4f15a3110a3e results.tekton.dev/v1alpha2.Log 2023-06-29 02:49:57 +0530 IST 2023-06-29 02:49:57 +0530 IST
results-testing/results/04e2fbf2-8653-405f-bc42-a262bcf02bed/records/57ce92f9-9bf8-3a0a-aefb-dc20c3e2862d results.tekton.dev/v1alpha2.Log 2023-06-29 02:50:05 +0530 IST 2023-06-29 02:50:05 +0530 IST
results-testing/results/04e2fbf2-8653-405f-bc42-a262bcf02bed/records/e9a0c21a-f826-42ab-a9d7-a03bcefed4fd tekton.dev/v1beta1.TaskRun 2023-06-29 02:49:57 +0530 IST 2023-06-29 02:50:05 +0530 IST
results-testing/results/04e2fbf2-8653-405f-bc42-a262bcf02bed/records/04e2fbf2-8653-405f-bc42-a262bcf02bed tekton.dev/v1beta1.PipelineRun 2023-06-29 02:49:53 +0530 IST 2023-06-29 02:50:05 +0530 IST
results-testing/results/04e2fbf2-8653-405f-bc42-a262bcf02bed/records/e6eea2f9-ec80-388c-9982-74a018a548e4 results.tekton.dev/v1alpha2.Log 2023-06-29 02:50:05 +0530 IST 2023-06-29 02:50:05 +0530 IST
----

. Retrieve the YAML manifest for a pipeline run or task run from a record by entering the following command:
+
[source,terminal]
----
$ opc results records get --addr ${RESULTS_API} <record_name> \
| jq -r .data.value | base64 -d | \
xargs -0 python3 -c 'import sys, yaml, json; j=json.loads(sys.argv[1]); print(yaml.safe_dump(j))'
----
+
.Example command
[source,terminal]
----
$ opc results records get --addr ${RESULTS_API} \
results-testing/results/04e2fbf2-8653-405f-bc42-a262bcf02bed/records/e9c736db-5665-441f-922f-7c1d65c9d621 | \
jq -r .data.value | base64 -d | \
xargs -0 python3 -c 'import sys, yaml, json; j=json.loads(sys.argv[1]); print(yaml.safe_dump(j))'
----

. Optional: Retrieve the logging information for a task run from a record using the log record name. To get the log record name, replace `records` with `logs` in the record name. Enter the following command:
+
[source,terminal]
----
$ opc results logs get --addr ${RESULTS_API} <log_record_name> | jq -r .data | base64 -d
----
+
.Example command
[source,terminal]
----
$ opc results logs get --addr ${RESULTS_API} \
results-testing/results/04e2fbf2-8653-405f-bc42-a262bcf02bed/logs/e9c736db-5665-441f-922f-7c1d65c9d621 | \
jq -r .data | base64 -d
----
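The `records`-to-`logs` substitution in the last step is plain string replacement on the record name. Using a record name from the earlier example output, it can be sketched as:

```shell
# Record name for a task run, taken from the example output above
record_name="results-testing/results/04e2fbf2-8653-405f-bc42-a262bcf02bed/records/e9c736db-5665-441f-922f-7c1d65c9d621"

# Replace the /records/ path segment with /logs/ to get the log record name
log_record_name=$(echo "${record_name}" | sed 's|/records/|/logs/|')
echo "${log_record_name}"
```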
40 changes: 40 additions & 0 deletions modules/op-results-cert.adoc
@@ -0,0 +1,40 @@
// This module is included in the following assembly:
//
// * records/using-tekton-results-for-openshift-pipelines-observability.adoc

:_content-type: PROCEDURE
[id="results-cert_{context}"]
= Preparing a secret with an SSL certificate

{tekton-results} provides a REST API using the HTTPS protocol, which requires an SSL certificate. Provide a secret that contains this certificate. If you have an existing certificate provided by a certificate authority (CA), use it. Otherwise, create a self-signed certificate.

.Prerequisites

* The `openssl` command-line utility is installed.

.Procedure

. If you do not have a certificate provided by a CA, create a self-signed certificate by entering the following command:
+
[source,terminal]
----
$ openssl req -x509 \
-newkey rsa:4096 \
-keyout key.pem \
-out cert.pem \
-days 365 \
-nodes \
-subj "/CN=tekton-results-api-service.openshift-pipelines.svc.cluster.local" \
-addext "subjectAltName = DNS:tekton-results-api-service.openshift-pipelines.svc.cluster.local"
----
+
Replace `tekton-results-api-service.openshift-pipelines.svc.cluster.local` with the route endpoint that you plan to use for the {tekton-results} API.

. Create a Transport Layer Security (TLS) secret from the certificate by entering the following command:
+
[source,terminal]
----
$ oc create secret tls -n openshift-pipelines tekton-results-tls --cert=cert.pem --key=key.pem
----
+
If you want to use an existing certificate provided by a CA, replace `cert.pem` with the name of the file containing this certificate.
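Before creating the secret, you can confirm that the certificate carries the expected subject and subject alternative name. The following sketch generates a throwaway one-day certificate with the same flags and then inspects it; the `example.local` host name is a stand-in for your route endpoint:

```shell
# Generate a throwaway self-signed certificate (same flags as above, 1-day validity)
openssl req -x509 -newkey rsa:2048 -keyout /tmp/key.pem -out /tmp/cert.pem \
  -days 1 -nodes \
  -subj "/CN=example.local" \
  -addext "subjectAltName = DNS:example.local"

# Inspect the subject and the subjectAltName extension
openssl x509 -in /tmp/cert.pem -noout -subject
openssl x509 -in /tmp/cert.pem -noout -ext subjectAltName
```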