From 434d57484ce643f476c2a2fd1386743dd2e138b7 Mon Sep 17 00:00:00 2001
From: Sunil Shetty
Date: Mon, 18 Jul 2022 19:40:36 +0530
Subject: [PATCH] Updating Readme and RN

Signed-off-by: Sunil Shetty
---
 aws/README.md                               |   8 +-
 .../AWSNonAirgap-DeploymentGuide.md         |  10 +-
 docs/product/release/WhatsNew.md            |  10 +-
 docs/product/tekton/README.md               | 264 ++++++++++--------
 tekton/README.md                            | 264 ++++++++++--------
 5 files changed, 305 insertions(+), 251 deletions(-)

diff --git a/aws/README.md b/aws/README.md
index e32cd111..a763acf3 100644
--- a/aws/README.md
+++ b/aws/README.md
@@ -4,7 +4,9 @@
 -->
 # README
 
-Service Installer for VMware Tanzu (SIVT) enables users to install Tanzu for Kubernetes Operations on the AWS environment. This project helps build an architecture in AWS that corresponds to [Tanzu for Kubernetes Operations Reference Design](https://docs.vmware.com/en/VMware-Tanzu/services/tanzu-reference-architecture/GUID-reference-designs-tko-on-aws.html).
+Service Installer for VMware Tanzu (SIVT) enables users to install Tanzu for Kubernetes Operations on the AWS environment. This project helps build an architecture in AWS that corresponds to the following reference architectures:
+- Non-Airgap: [Tanzu for Kubernetes Operations Reference Design](https://docs.vmware.com/en/VMware-Tanzu/services/tanzu-reference-architecture/GUID-reference-designs-tko-on-aws.html)
+- Airgap: The reference architecture (RA) is yet to be published.
 
 Service Installer for VMware Tanzu provides automation of Tanzu Kubernetes Grid deployment on the following two AWS environments:
 
@@ -26,7 +28,7 @@ Service Installer for VMware Tanzu deploys:
 
 For air-gapped deployment, Service Installer for VMware Tanzu supports Ubuntu and vanilla Amazon Linux 2 based cluster nodes.
 
-For detailed information, see Tanzu Kubernetes Grid on [Federal Air-gapped AWS Deployment Guide](./docs/product/release/AWS%20-%20Federal%20Airgap/AWSFederalAirgap-DeploymentGuide.md).
+For detailed information, see Tanzu Kubernetes Grid on [Federal Air-gapped AWS Deployment Guide](../docs/product/release/AWS%20-%20Federal%20Airgap/AWSFederalAirgap-DeploymentGuide.md). ## Tanzu for Kubernetes Operations Deployment on Non Air-Gapped AWS @@ -52,4 +54,4 @@ Compliant deployment enables users to deploy Tanzu Kubernetes Grid according to This Tanzu Kubernetes Grid deployment process makes use of vanilla Tanzu Kubernetes Grid images for installation and deploys non-FIPS and non-STIG hardened Tanzu Kubernetes Grid master and worker nodes. -For detailed information, see Tanzu Kubernetes Grid on [Non Air-gapped AWS Deployment Guide](./docs/product/release/AWS%20-%20Non%20Airgap/AWSNonAirgap-DeploymentGuide.md). +For detailed information, see Tanzu Kubernetes Grid on [Non Air-gapped AWS Deployment Guide](../docs/product/release/AWS%20-%20Non%20Airgap/AWSNonAirgap-DeploymentGuide.md). diff --git a/docs/product/release/AWS - Non Airgap/AWSNonAirgap-DeploymentGuide.md b/docs/product/release/AWS - Non Airgap/AWSNonAirgap-DeploymentGuide.md index dc5e5346..6f32ae1e 100644 --- a/docs/product/release/AWS - Non Airgap/AWSNonAirgap-DeploymentGuide.md +++ b/docs/product/release/AWS - Non Airgap/AWSNonAirgap-DeploymentGuide.md @@ -164,7 +164,15 @@ These prerequisites are applicable only if you use manually pre-created VPC for ``` 1. Specify the deployment type. - **Compliant deployment:** By default, Service Installer for VMware Tanzu deploys FIPS compliant Tanzu Kubernetes Grid master and worker nodes. In this type of deployment, Service Installer for VMware Tanzu makes use of FIPS compliant and STIG hardened Ubuntu (18.04) base OS for Tanzu Kubernetes Grid cluster nodes, FIPS enabled Kubernetes overlay, and FIPS compliant Tanzu Kubernetes Grid images. + **Compliant deployment:** By default, Service Installer for VMware Tanzu deploys FIPS compliant Tanzu Kubernetes Grid master and worker nodes. 
In this type of deployment, Service Installer for VMware Tanzu makes use of a FIPS compliant and STIG hardened Ubuntu (18.04) base OS for Tanzu Kubernetes Grid cluster nodes, a FIPS enabled Kubernetes overlay, and FIPS compliant Tanzu Kubernetes Grid images. To perform a compliant deployment, complete the following steps:
+     - A FIPS compliant deployment on Ubuntu requires an Ubuntu Advantage username and password. Export them by using the following commands:
+       ```
+       export UBUNTU_ADVANTAGE_PASSWORD=
+       export UBUNTU_ADVANTAGE_PASSWORD_UPDATES=
+       ```
+
+     - If an Ubuntu Advantage username and password are not available, disable FIPS for Ubuntu by setting the `install_fips` variable to `no` in the file `/deployment_binaries/sivt-aws-federal/ami/stig/roles/canonical-ubuntu-18.04-lts-stig-hardening/vars/main.yml`. This disables FIPS at the OS level.
 
    **Non-compliant deployment:** If you are looking for deployment with vanilla Tanzu Kubernetes Grid master and worker nodes, set the `COMPLIANT_DEPLOYMENT` variable to `false` by running the following command on your Jumpbox VM. Once this variable is set, Service Installer for VMware Tanzu makes use of vanilla Tanzu Kubernetes Grid images for installation.
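The compliant-deployment prerequisite above can be verified up front before the automation runs. A minimal sketch, assuming a POSIX shell on the Jumpbox VM; the `check_ua_tokens` helper is hypothetical and not part of SIVT:

```shell
#!/bin/sh
# Hypothetical pre-flight check (not part of SIVT): confirm the Ubuntu
# Advantage token variables are exported before starting a compliant
# deployment.
check_ua_tokens() {
  missing=0
  for var in UBUNTU_ADVANTAGE_PASSWORD UBUNTU_ADVANTAGE_PASSWORD_UPDATES; do
    eval "val=\${$var}"          # POSIX-sh indirect variable expansion
    if [ -z "$val" ]; then
      echo "ERROR: $var is not set" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Example: export placeholder tokens, then verify (real tokens come from
# your Ubuntu Advantage subscription).
export UBUNTU_ADVANTAGE_PASSWORD="example-token"
export UBUNTU_ADVANTAGE_PASSWORD_UPDATES="example-token"
check_ua_tokens && echo "Ubuntu Advantage tokens present"
```

Running the check before `COMPLIANT_DEPLOYMENT` work starts surfaces a missing token immediately instead of partway through the AMI build.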
diff --git a/docs/product/release/WhatsNew.md b/docs/product/release/WhatsNew.md index ec953c9a..2f141552 100644 --- a/docs/product/release/WhatsNew.md +++ b/docs/product/release/WhatsNew.md @@ -45,6 +45,14 @@ - Service Installer for VMware Tanzu UI allows to skip shared services cluster and workload cluster deployments - Support for updated packages for Photon 3.0 operating system +### Tekton Enhancements + +- Bringup of reference architecture based Tanzu Kubernetes Grid environment on vSphere TKGM 1.5.3 and 1.5.4 +- E2E Tanzu Kubernetes Grid deployment and configuration of AVI controller, management, shared services, and workload clusters, plugins, extensions +- Rescale and resize Day-2 operations +- Upcoming support of Tanzu Kubernetes Grid Service E2E deployment* +- Upcoming support of Tanzu Kubernetes Grid Day-2 Upgrade support from 1.5.x to 1.5.4 with packages and extensions* + ## Resolved Issues - Datacenter, cluster, and datastores now support sub-folder structure. @@ -77,4 +85,4 @@ ## Download -- Download the Service Installer OVA and AWS solutions from [VMware Marketplace: Service Installer for VMware Tanzu](https://marketplace.cloud.vmware.com/services/details/service-installer-for-vmware-tanzu-1?slug=true). \ No newline at end of file +- Download the Service Installer OVA and AWS solutions from [VMware Marketplace: Service Installer for VMware Tanzu](https://marketplace.cloud.vmware.com/services/details/service-installer-for-vmware-tanzu-1?slug=true). diff --git a/docs/product/tekton/README.md b/docs/product/tekton/README.md index 47727a2c..210bf0dc 100644 --- a/docs/product/tekton/README.md +++ b/docs/product/tekton/README.md @@ -1,158 +1,176 @@ -**TEKTON PIPELINE FOR TKGM** +## Tekton Pipeline for Tanzu Kubernetes Grid -Tekton is a cloud-native solution for building CI/CD systems. SIVT is bundled with Tekton capability which provides the pipelines for DayO deployment and Day2 operations of TKGM 1.5.4 on vSphere backed environment. 
+Tekton is a cloud-native solution for building CI/CD systems. Service Installer for VMware Tanzu (SIVT) is bundled with Tekton capability that provides pipelines for Day-0 deployment and Day-2 operations of Tanzu Kubernetes Grid 1.5.4 on a vSphere-backed environment.
 
-**Features**
+## Features
 
-- Bringup of Reference Architecture based TKGM environment on vSphere.
-- E2E TKGM deployment and configuration of AVI Controller, Management, SharedServices, Workload clusters, Plugins, Extensions
-- Rescale and Resize Day2 operations.
-- Upcoming support of TKGS E2E deployment*
-- Upcoming support of TKGM DAY2 Upgrade support from 1.5.x to 1.5.4 with packages and extensions*
+- Bringup of a reference architecture based Tanzu Kubernetes Grid environment on vSphere
+- E2E Tanzu Kubernetes Grid deployment and configuration of the AVI controller, management, shared services, and workload clusters, plugins, and extensions
+- Rescale and resize Day-2 operations
+- Upcoming support of Tanzu Kubernetes Grid Service E2E deployment*
+- Upcoming support of Tanzu Kubernetes Grid Day-2 upgrade from 1.5.x to 1.5.4 with packages and extensions*
 
-**Pre-requisites**
+## Prerequisites
 
-Tekton pipelines execution require the following:
+Tekton pipeline execution requires the following:
 
-- Service Installer OVA. Download from Marketplace https://marketplace.cloud.vmware.com/services/details/service-installer-for-vmware-tanzu-1?slug=true
+- Service Installer for VMware Tanzu (SIVT) OVA. Download it from the VMware Marketplace: https://marketplace.cloud.vmware.com/services/details/service-installer-for-vmware-tanzu-1?slug=true
 - Docker login
 - Service Installer Tekton Docker tar file. service_installer_tekton_v154.tar
 - Private git repo
 
-**Execution**
-
- 1. GIT Preparation
- ```
- 1. Prepare a private git (gitlab/github) repo
- 2. Clone the code from https://github.com/vmware-tanzu/service-installer-for-vmware-tanzu/tree/1.3-1.5.4/tekton
- 3. Create a GIT PAT and copy the token for later stages
- 4. 
Prepare deployment.json based on your environment. Refer SIVT Readme for preparing the config file.
- 5. Commit the file under config/deployment.json in the git repo.
- ```
-
- 2. SIVT OVA Preparation
- ```
- 1. Deploy the download SIVT OVA and power on the VM
- 2. Login to the SIVT OVA
- 3. Clone your private repository by
- #git clone
- ```
-
- 3. Preparing the TEKTON pipeline environment
- ```
- 1. Login to SIVT OVA
- 2. Browse to the location where the git repo is cloned.
- 3. Open launch.sh and update TARBALL_FILE_PATH to the absolute path where the Service Installer Docker tar is downloaded.
- For example:
- - TARBALL_FILE_PATH="/root/tekton/arcas-tekton-cicd/service_installer_tekton_v153.tar"
- or
- - TARBALL_URL="http://mynfsserver/images/service_installer_tekton_v153.tar"
- 4. Save file and exit.
- 5. Open cluster_resources/kind-init-config.yaml.
- Provide a free port for the nginx service to use. If not, by default, 80 port is used.
- extraPortMappings:
- - containerPort: 80
- hostPort:
+## Tekton Pipeline Execution
+
+### Preparation Steps
+
+1. Git preparation:
+
+   1. Create a private Git (GitLab/GitHub) repository.
+   2. Clone the code from https://github.com/vmware-tanzu/service-installer-for-vmware-tanzu/tree/1.3-1.5.4/tekton.
+   3. Create a Git personal access token (PAT) and copy the token for later stages.
+   4. Prepare `deployment.json` based on your environment. Refer to the SIVT Readme for preparing the config file.
+   5. Commit the file under `config/deployment.json` in the Git repository.
+
+1. SIVT OVA preparation:
+   1. Deploy the downloaded SIVT OVA and power on the VM.
+   2. Log in to the SIVT OVA.
+   3. Clone your private repository by using `git clone `.
- 6. ./launch.sh --create-cluster #This will create a kind cluster which is required for TEKTON pipeline
- 7. When prompted for docker login, provide the docker login credentials. #This would be an one time effort
- ```
-- 4. 
Preparing TEKTON Dashboard - -TEKTON provides a helpful dashboard for monitoring and triggering pipelines from UI. It is recommended to have dashboard integrated. This step can be skipped, if TEKTON dashboard is not required for your environment -``` -1. Execute ./launch.sh --deploy-dashboard -Exposed port is hostPort set in step 5 of preparing TEKTON environment -``` -- 5. Service Accounts and Secrets preparation -Open values.yaml in the SIVT OVA and update the respective entries. -``` -#@data/values-schema ---- -git: - host: - repository: / - branch: - username: - password: -imagename: docker.io/library/service_installer_tekton:v153 -imagepullpolicy: Never -``` -**Running the PIPELINES** - -- For triggering Day0 bringup of TKGM -```sh -./launch.sh --exec-day0 -``` -**Re-running Pipelines** - -From kubectl - -```sh +1. Tekton pipeline environment preparation: + + 1. Log in to SIVT OVA. + 2. Browse to the location where the Git repository is cloned. + 3. Open `launch.sh` and update `TARBALL_FILE_PATH` to the absolute path where the Service Installer Docker TAR file is downloaded.
+ For example: + - `TARBALL_FILE_PATH="/root/tekton/arcas-tekton-cicd/service_installer_tekton_v153.tar"` +
or
+     - `TARBALL_URL="http://mynfsserver/images/service_installer_tekton_v153.tar"`
+   4. Save the file and exit.
+   5. Open `cluster_resources/kind-init-config.yaml`.
+
+      Provide a free port for the nginx service to use. If you do not specify a port, port 80 is used by default.
+      ```
+      extraPortMappings:
+      - containerPort: 80
+        hostPort:
+      ```
+   6. Run `./launch.sh --create-cluster`.
+
+      This command creates a kind cluster, which is required for the Tekton pipeline.
+   7. When prompted for the Docker login, provide the Docker login credentials.
+
+      This needs to be done only once.
+
+1. Tekton dashboard preparation:
+
+   Tekton provides a dashboard for monitoring and triggering pipelines from the UI. It is recommended to have the dashboard integrated. You can skip this step if the Tekton dashboard is not required for your environment.
+   - Execute `./launch.sh --deploy-dashboard`.
+
+     The exposed port is the `hostPort` set in step 5 of the Tekton environment preparation.
+
+1. Service accounts and secrets preparation:
+   - Open `values.yaml` in the SIVT OVA and update the respective entries.
+     ```
+     #@data/values-schema
+     ---
+     git:
+       host:
+       repository: /
+       branch:
+       username:
+       password:
+     imagename: docker.io/library/service_installer_tekton:v153
+     imagepullpolicy: Never
+     ```
+
+### Running the Pipelines
+
+- To trigger the Day-0 bringup of Tanzu Kubernetes Grid, run the following command.
+  ```sh
+  ./launch.sh --exec-day0
+  ```
+
+### Rerunning Pipelines
+
+- To rerun the pipelines, run one of the following commands.
+
+  ```sh
 kubectl create -f run/day0-bringup.yml
-    #or
-    ./launch.sh --exec-day0
-```
+  ```
+  or
+  ```sh
+  ./launch.sh --exec-day0
+  ```
 
-**Listing Pipelines and taskruns**
-```
-Set the kubeconfig for the cluster, by exporting the cluster kind file. 
-For example:
-    export KUBECONFIG=/root/tekton/arcas-tekton-cicd/arcas-ci-cd-cluster.yaml
-```
+### Listing Pipelines and Task Runs
 
-From tkn
+- Set the kubeconfig for the cluster by exporting the cluster kind file.
 
- #for PipelineRuns
+  For example:
+  ```
+  export KUBECONFIG=/root/tekton/arcas-tekton-cicd/arcas-ci-cd-cluster.yaml
+  ```
+
+- List the pipeline runs:
+
+  ```
 tkn pr ls
 NAME                      STARTED          DURATION     STATUS
 tkgm-bringup-day0-jd2mp   53 minutes ago   58 minutes   Succeeded
 tkgm-bringup-day0-jqkbz   3 hours ago      47 minutes   Succeeded
-
- #for TaskRuns
+  ```
+- List the task runs:
+  ```
 tkn tr ls
 NAME                                           STARTED          DURATION     STATUS
 tkgm-bringup-day0-jd2mp-start-mgmt-create      46 minutes ago   20 minutes   Succeeded
 tkgm-bringup-day0-jd2mp-start-avi              54 minutes ago   8 minutes    Succeeded
 tkgm-bringup-day0-jd2mp-start-prep-workspace   54 minutes ago   11 seconds   Succeeded
+  ```
 
-**Monitoring Pipelines**
+### Monitoring Pipelines
 
-    tkn pr logs --follow
-    #For debugging.
-    tkn pr desc
+- To monitor pipelines, use the following command:
+  ```
+  tkn pr logs --follow
+  ```
+- For debugging, use the following command:
+  ```
+  tkn pr desc
+  ```
 
-**Triggering the PIPELINES through git commits**
+### Triggering the Pipelines through Git Commits
 
-TEKTON pipelines also support execution of pipelines based on git commit changes.
-1. Complete the Preparation stages from 1 to 5.
-2. Install polling operator
-```sh
-kubectl apply -f https://github.com/bigkevmcd/tekton-polling-operator/releases/download/v0.4.0/release-v0.4.0.yaml
-```
-3. Open trigger-bringup-res.yml under trigger-based directory.
-4. Update the fields of
+Tekton pipelines also support execution based on Git commit changes.
+1. Complete the preparation stages from 1 to 5.
+2. Install the polling operator.
+   ```sh
+   kubectl apply -f https://github.com/bigkevmcd/tekton-polling-operator/releases/download/v0.4.0/release-v0.4.0.yaml
+   ```
+3. Open `trigger-bringup-res.yml` under the `trigger-based` directory.
+4. 
Update the following fields: - url: UPDATE FULL GIT PATH OF REPOSITORY - ref: BRANCH_NAME - frequency: 2m [time interval to check git changes. 2 minutes is set as default] - type: gitlab/github - Save and exit. -5. Open -6. Update the fields of + + Save changes and exit. +5. Open trigger-bringup-pipeline.yml +6. Update the following fields: - default: "UPDATE IMAGE LOCATION" to docker.io/library/service_installer_tekton:v153 - default: "UPDATE FULL GIT PATH OF REPOSITORY" to full path of the git repository ending with .git - default: main to the branch in the private git repo. - Save and exit. -7. Execute -```sh -kubectl apply -f trigger-bringup-pipeline.yml; kubectl apply -f trigger-bringup-res.yml -``` -8. Check if the pipeline is listed by -```sh -tkn p ls -``` -9. Perform a git commit on the branch with a commit message of "exec_bringup" -10. The pipelines will be triggered automatically. - - - + + Save changes and exit. +7. Execute the following command. + ```sh + kubectl apply -f trigger-bringup-pipeline.yml; + kubectl apply -f trigger-bringup-res.yml + ``` +8. Check if the pipeline is listed by using the following command. + ```sh + tkn p ls + ``` +9. Perform a git commit on the branch with a commit message of "exec_bringup". + + The pipelines will be triggered automatically. diff --git a/tekton/README.md b/tekton/README.md index 47727a2c..210bf0dc 100644 --- a/tekton/README.md +++ b/tekton/README.md @@ -1,158 +1,176 @@ -**TEKTON PIPELINE FOR TKGM** +## Tekton Pipeline for Tanzu Kubernetes Grid -Tekton is a cloud-native solution for building CI/CD systems. SIVT is bundled with Tekton capability which provides the pipelines for DayO deployment and Day2 operations of TKGM 1.5.4 on vSphere backed environment. +Tekton is a cloud-native solution for building CI/CD systems. 
Service Installer for VMware Tanzu (SIVT) is bundled with Tekton capability that provides pipelines for Day-0 deployment and Day-2 operations of Tanzu Kubernetes Grid 1.5.4 on a vSphere-backed environment.
 
-**Features**
+## Features
 
-- Bringup of Reference Architecture based TKGM environment on vSphere.
-- E2E TKGM deployment and configuration of AVI Controller, Management, SharedServices, Workload clusters, Plugins, Extensions
-- Rescale and Resize Day2 operations.
-- Upcoming support of TKGS E2E deployment*
-- Upcoming support of TKGM DAY2 Upgrade support from 1.5.x to 1.5.4 with packages and extensions*
+- Bringup of a reference architecture based Tanzu Kubernetes Grid environment on vSphere
+- E2E Tanzu Kubernetes Grid deployment and configuration of the AVI controller, management, shared services, and workload clusters, plugins, and extensions
+- Rescale and resize Day-2 operations
+- Upcoming support of Tanzu Kubernetes Grid Service E2E deployment*
+- Upcoming support of Tanzu Kubernetes Grid Day-2 upgrade from 1.5.x to 1.5.4 with packages and extensions*
 
-**Pre-requisites**
+## Prerequisites
 
-Tekton pipelines execution require the following:
+Tekton pipeline execution requires the following:
 
-- Service Installer OVA. Download from Marketplace https://marketplace.cloud.vmware.com/services/details/service-installer-for-vmware-tanzu-1?slug=true
+- Service Installer for VMware Tanzu (SIVT) OVA. Download it from the VMware Marketplace: https://marketplace.cloud.vmware.com/services/details/service-installer-for-vmware-tanzu-1?slug=true
 - Docker login
 - Service Installer Tekton Docker tar file. service_installer_tekton_v154.tar
 - Private git repo
 
-**Execution**
-
- 1. GIT Preparation
- ```
- 1. Prepare a private git (gitlab/github) repo
- 2. Clone the code from https://github.com/vmware-tanzu/service-installer-for-vmware-tanzu/tree/1.3-1.5.4/tekton
- 3. Create a GIT PAT and copy the token for later stages
- 4. Prepare deployment.json based on your environment. 
Refer SIVT Readme for preparing the config file.
- 5. Commit the file under config/deployment.json in the git repo.
- ```
-
- 2. SIVT OVA Preparation
- ```
- 1. Deploy the download SIVT OVA and power on the VM
- 2. Login to the SIVT OVA
- 3. Clone your private repository by
- #git clone
- ```
-
- 3. Preparing the TEKTON pipeline environment
- ```
- 1. Login to SIVT OVA
- 2. Browse to the location where the git repo is cloned.
- 3. Open launch.sh and update TARBALL_FILE_PATH to the absolute path where the Service Installer Docker tar is downloaded.
- For example:
- - TARBALL_FILE_PATH="/root/tekton/arcas-tekton-cicd/service_installer_tekton_v153.tar"
- or
- - TARBALL_URL="http://mynfsserver/images/service_installer_tekton_v153.tar"
- 4. Save file and exit.
- 5. Open cluster_resources/kind-init-config.yaml.
- Provide a free port for the nginx service to use. If not, by default, 80 port is used.
- extraPortMappings:
- - containerPort: 80
- hostPort:
+## Tekton Pipeline Execution
+
+### Preparation Steps
+
+1. Git preparation:
+
+   1. Create a private Git (GitLab/GitHub) repository.
+   2. Clone the code from https://github.com/vmware-tanzu/service-installer-for-vmware-tanzu/tree/1.3-1.5.4/tekton.
+   3. Create a Git personal access token (PAT) and copy the token for later stages.
+   4. Prepare `deployment.json` based on your environment. Refer to the SIVT Readme for preparing the config file.
+   5. Commit the file under `config/deployment.json` in the Git repository.
+
+1. SIVT OVA preparation:
+   1. Deploy the downloaded SIVT OVA and power on the VM.
+   2. Log in to the SIVT OVA.
+   3. Clone your private repository by using `git clone `.
- 6. ./launch.sh --create-cluster #This will create a kind cluster which is required for TEKTON pipeline
- 7. When prompted for docker login, provide the docker login credentials. #This would be an one time effort
- ```
-- 4. Preparing TEKTON Dashboard
-
-TEKTON provides a helpful dashboard for monitoring and triggering pipelines from UI. 
It is recommended to have dashboard integrated. This step can be skipped, if TEKTON dashboard is not required for your environment -``` -1. Execute ./launch.sh --deploy-dashboard -Exposed port is hostPort set in step 5 of preparing TEKTON environment -``` -- 5. Service Accounts and Secrets preparation -Open values.yaml in the SIVT OVA and update the respective entries. -``` -#@data/values-schema ---- -git: - host: - repository: / - branch: - username: - password: -imagename: docker.io/library/service_installer_tekton:v153 -imagepullpolicy: Never -``` -**Running the PIPELINES** - -- For triggering Day0 bringup of TKGM -```sh -./launch.sh --exec-day0 -``` -**Re-running Pipelines** - -From kubectl - -```sh +1. Tekton pipeline environment preparation: + + 1. Log in to SIVT OVA. + 2. Browse to the location where the Git repository is cloned. + 3. Open `launch.sh` and update `TARBALL_FILE_PATH` to the absolute path where the Service Installer Docker TAR file is downloaded.
+ For example: + - `TARBALL_FILE_PATH="/root/tekton/arcas-tekton-cicd/service_installer_tekton_v153.tar"` +
or
+     - `TARBALL_URL="http://mynfsserver/images/service_installer_tekton_v153.tar"`
+   4. Save the file and exit.
+   5. Open `cluster_resources/kind-init-config.yaml`.
+
+      Provide a free port for the nginx service to use. If you do not specify a port, port 80 is used by default.
+      ```
+      extraPortMappings:
+      - containerPort: 80
+        hostPort:
+      ```
+   6. Run `./launch.sh --create-cluster`.
+
+      This command creates a kind cluster, which is required for the Tekton pipeline.
+   7. When prompted for the Docker login, provide the Docker login credentials.
+
+      This needs to be done only once.
+
+1. Tekton dashboard preparation:
+
+   Tekton provides a dashboard for monitoring and triggering pipelines from the UI. It is recommended to have the dashboard integrated. You can skip this step if the Tekton dashboard is not required for your environment.
+   - Execute `./launch.sh --deploy-dashboard`.
+
+     The exposed port is the `hostPort` set in step 5 of the Tekton environment preparation.
+
+1. Service accounts and secrets preparation:
+   - Open `values.yaml` in the SIVT OVA and update the respective entries.
+     ```
+     #@data/values-schema
+     ---
+     git:
+       host:
+       repository: /
+       branch:
+       username:
+       password:
+     imagename: docker.io/library/service_installer_tekton:v153
+     imagepullpolicy: Never
+     ```
+
+### Running the Pipelines
+
+- To trigger the Day-0 bringup of Tanzu Kubernetes Grid, run the following command.
+  ```sh
+  ./launch.sh --exec-day0
+  ```
+
+### Rerunning Pipelines
+
+- To rerun the pipelines, run one of the following commands.
+
+  ```sh
 kubectl create -f run/day0-bringup.yml
-    #or
-    ./launch.sh --exec-day0
-```
+  ```
+  or
+  ```sh
+  ./launch.sh --exec-day0
+  ```
 
-**Listing Pipelines and taskruns**
-```
-Set the kubeconfig for the cluster, by exporting the cluster kind file. 
-For example:
-    export KUBECONFIG=/root/tekton/arcas-tekton-cicd/arcas-ci-cd-cluster.yaml
-```
+### Listing Pipelines and Task Runs
 
-From tkn
+- Set the kubeconfig for the cluster by exporting the cluster kind file.
 
- #for PipelineRuns
+  For example:
+  ```
+  export KUBECONFIG=/root/tekton/arcas-tekton-cicd/arcas-ci-cd-cluster.yaml
+  ```
+
+- List the pipeline runs:
+
+  ```
 tkn pr ls
 NAME                      STARTED          DURATION     STATUS
 tkgm-bringup-day0-jd2mp   53 minutes ago   58 minutes   Succeeded
 tkgm-bringup-day0-jqkbz   3 hours ago      47 minutes   Succeeded
-
- #for TaskRuns
+  ```
+- List the task runs:
+  ```
 tkn tr ls
 NAME                                           STARTED          DURATION     STATUS
 tkgm-bringup-day0-jd2mp-start-mgmt-create      46 minutes ago   20 minutes   Succeeded
 tkgm-bringup-day0-jd2mp-start-avi              54 minutes ago   8 minutes    Succeeded
 tkgm-bringup-day0-jd2mp-start-prep-workspace   54 minutes ago   11 seconds   Succeeded
+  ```
 
-**Monitoring Pipelines**
+### Monitoring Pipelines
 
-    tkn pr logs --follow
-    #For debugging.
-    tkn pr desc
+- To monitor pipelines, use the following command:
+  ```
+  tkn pr logs --follow
+  ```
+- For debugging, use the following command:
+  ```
+  tkn pr desc
+  ```
 
-**Triggering the PIPELINES through git commits**
+### Triggering the Pipelines through Git Commits
 
-TEKTON pipelines also support execution of pipelines based on git commit changes.
-1. Complete the Preparation stages from 1 to 5.
-2. Install polling operator
-```sh
-kubectl apply -f https://github.com/bigkevmcd/tekton-polling-operator/releases/download/v0.4.0/release-v0.4.0.yaml
-```
-3. Open trigger-bringup-res.yml under trigger-based directory.
-4. Update the fields of
+Tekton pipelines also support execution based on Git commit changes.
+1. Complete the preparation stages from 1 to 5.
+2. Install the polling operator.
+   ```sh
+   kubectl apply -f https://github.com/bigkevmcd/tekton-polling-operator/releases/download/v0.4.0/release-v0.4.0.yaml
+   ```
+3. Open `trigger-bringup-res.yml` under the `trigger-based` directory.
+4. 
Update the following fields: - url: UPDATE FULL GIT PATH OF REPOSITORY - ref: BRANCH_NAME - frequency: 2m [time interval to check git changes. 2 minutes is set as default] - type: gitlab/github - Save and exit. -5. Open -6. Update the fields of + + Save changes and exit. +5. Open trigger-bringup-pipeline.yml +6. Update the following fields: - default: "UPDATE IMAGE LOCATION" to docker.io/library/service_installer_tekton:v153 - default: "UPDATE FULL GIT PATH OF REPOSITORY" to full path of the git repository ending with .git - default: main to the branch in the private git repo. - Save and exit. -7. Execute -```sh -kubectl apply -f trigger-bringup-pipeline.yml; kubectl apply -f trigger-bringup-res.yml -``` -8. Check if the pipeline is listed by -```sh -tkn p ls -``` -9. Perform a git commit on the branch with a commit message of "exec_bringup" -10. The pipelines will be triggered automatically. - - - + + Save changes and exit. +7. Execute the following command. + ```sh + kubectl apply -f trigger-bringup-pipeline.yml; + kubectl apply -f trigger-bringup-res.yml + ``` +8. Check if the pipeline is listed by using the following command. + ```sh + tkn p ls + ``` +9. Perform a git commit on the branch with a commit message of "exec_bringup". + + The pipelines will be triggered automatically.
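The commit-message trigger described in the final steps can be scripted so the bringup is kicked off reproducibly. A minimal sketch, assuming the cloned private repository has a configured `origin` remote; the `trigger_bringup` helper name is hypothetical and not part of SIVT:

```shell
#!/bin/sh
# Hypothetical helper (not part of SIVT): create an empty commit whose
# message matches the "exec_bringup" trigger and push it, so the
# tekton-polling-operator starts the pipeline on its next poll.
trigger_bringup() {
  branch="$1"
  git checkout -q "$branch" &&
  git commit -q --allow-empty -m "exec_bringup" &&
  git push -q origin "$branch"
}
```

Run `trigger_bringup ` from the repository root; the polling operator detects the new commit within the configured `frequency` interval (2m by default) and triggers the pipeline automatically.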