---
permalink: /guides/pipelines-getting-started/
---
Kabanero uses Tekton Pipelines to illustrate a continuous integration and continuous delivery (CI/CD) workflow. Each of the featured Kabanero collections contains a default pipeline that builds the collection, publishes the image to a Docker registry, and then deploys the application to the Kubernetes cluster. You can also create your own tasks and pipelines and add them to the collections. All tasks and pipelines are activated by the Kabanero operator.
To learn more about Tekton Pipelines and creating new tasks, see the Tekton Pipeline tutorial.
Each Kabanero collection contains a set of default tasks and pipelines that are created when the collection is built. The collections build process copies the task and pipeline files from the collections repo to your /pipelines directory.
If you are building a new collection, the default tasks and pipelines are automatically pulled into your collections repo. If you want to customize the default tasks or pipelines to meet your needs, you can choose to apply your changes either to all collections or to a specific collection. To apply your changes to all the collections, update the files in the incubator/common/pipelines/default
directory in your collections repo. If you want to update tasks or pipelines in a particular collection, make your changes in the pipelines directory under that collection in your collections repo.
This file is the primary pipeline that is associated with the collections. It validates that the collection is active, builds the Git source, publishes the container image, and deploys the application. It looks for the git-source and docker-image resources that are used by the build-task and deploy-task files.
This file builds a container image from the artifacts in the git-source repository by using Buildah. After the image is built, the build-task file publishes it to the docker-image URL, also by using Buildah.
This file modifies the app-deploy.yaml file, which describes the deployment options for the application. The deploy-task file updates app-deploy.yaml to point to the published image and deploys the application by using the Appsody operator. Generate your app-deploy.yaml file by running the appsody deploy --generate-only command.
By default, the pipelines run and deploy the application in the kabanero namespace. If you want to deploy the application in a different namespace, update the app-deploy.yaml file to point to that namespace.
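As a hedged sketch of that namespace change, the following commands rewrite the namespace field in app-deploy.yaml. The file contents and the my-apps namespace here are placeholders, not the exact output of appsody deploy --generate-only; adjust them for your application:

```shell
# Hypothetical app-deploy.yaml stand-in; your generated file has more fields.
cat > app-deploy.yaml <<'EOF'
apiVersion: appsody.dev/v1beta1
kind: AppsodyApplication
metadata:
  name: my-app
  namespace: kabanero
EOF

# Point the deployment at a different namespace (my-apps is a placeholder)
sed -i.bak 's/^\( *namespace:\).*/\1 my-apps/' app-deploy.yaml
grep 'namespace:' app-deploy.yaml
```

You can also simply edit the namespace value in a text editor; the sed form is only convenient when you script the change.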
For more information, see the kabanero-pipelines repo.
Explore how to use Kabanero Pipelines to build and manage Kabanero collections.
-
Kabanero foundation must be installed on a Red Hat Origin Community Distribution of Kubernetes (OKD) or OpenShift Container Platform (OCP) cluster.
-
Tekton Dashboard is installed by default with Kabanero Foundation. To find the Tekton Dashboard URL, log in to your OKD cluster and run
oc get routes
or check the OKD console. You can also find the dashboard URL on the Kabanero landing page in the OKD or OCP console.
-
Dynamic volume provisioning or a persistent volume must be configured. See the following section for details.
Tekton pipelines require a configured volume that is used by the framework to share data across tasks. The build task, which uses Buildah, also requires a volume mount. The minimum required volume size is 5Gi.
If your cluster is running in a public cloud, dynamic volume provisioning is the easier option. For more information, see How are resources shared between tasks.
Your persistent volume must be configured in accordance with your cloud provider’s default storage class. Otherwise, your pipelines might create a new volume for each run, which increases your pipeline run execution time.
Alternatively, if you are using an unmanaged cluster, you can set up a persistent volume to be used by the pipeline. A simple persistent volume definition is provided in the following example. Update the path and other values in your pv.yaml file to suit your requirements. The example is configured for local storage; you might edit it for NFS, which is the standard configuration in a production environment.
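A pv.yaml for local storage might look like the following sketch. The volume name, host path, and storage class name are placeholder values, not ones mandated by Kabanero; only the 5Gi minimum capacity comes from the requirements above:

```yaml
# Hypothetical pv.yaml sketch for local (hostPath) storage.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kabanero-pipelines-pv   # placeholder name
spec:
  capacity:
    storage: 5Gi                # minimum size required by the pipelines
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /mnt/pipelines-pv     # placeholder path; change for your node
```

For NFS, you would replace the hostPath stanza with an nfs stanza that names your server and exported path.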
-
Log in to your cluster. For example, for OKD:
oc login <master node IP>:8443
-
Clone the kabanero-pipelines repo:
git clone https://github.com/kabanero-io/kabanero-pipelines
-
Update the pv.yaml file in this repo and apply it:
cd ./pipelines/common
oc apply -f pv.yaml -n kabanero
Git secrets must be created in the kabanero namespace and associated with the service account that runs the pipelines. To configure secrets by using the Tekton Dashboard, see Create secrets.
Alternatively, you can configure secrets in the OKD console or set them up by using the OKD CLI.
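As a hedged sketch of the CLI route, the following commands create a basic-auth Git secret and link it to a service account. The secret name, credentials, and service account name are placeholders (check your installation for the account that actually runs the pipelines); the tekton.dev/git-0 annotation is the standard Tekton convention for associating a secret with a Git host:

```shell
# Create a basic-auth secret for GitHub in the kabanero namespace.
# <username> and <token> are placeholders for your Git credentials.
oc -n kabanero create secret generic my-git-secret \
  --type=kubernetes.io/basic-auth \
  --from-literal=username=<username> \
  --from-literal=password=<token>

# Tell Tekton which Git host this secret is for.
oc -n kabanero annotate secret my-git-secret tekton.dev/git-0=https://github.com

# Link the secret to the pipeline service account (name is a placeholder).
oc -n kabanero secrets link <pipeline-service-account> my-git-secret
```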
You can use the Tekton Dashboard Webhook Extension to drive pipelines that automatically build and deploy an application whenever you update the code in your Git repo. Events such as commits or pull requests can be set up to automatically trigger pipeline runs.
If you are developing a new pipeline and want to manually test it in a tight loop, you might want to use a script or manually drive the pipeline.
-
Log in to your cluster. For example, for OKD:
oc login <master node IP>:8443
-
Clone the Kabanero pipelines repo:
git clone https://github.com/kabanero-io/kabanero-pipelines
-
Run the following script with the appropriate parameters:
cd ./pipelines/incubator/manual-pipeline-runs/
./manual-pipeline-run-script.sh -r [Git repo of the Appsody project] -i [Docker registry path of the image to be created] -c [name of the collection whose pipeline you want to run]
For example:
./manual-pipeline-run-script.sh -r https://github.com/mygitid/appsody-test-project -i index.docker.io/mydockeid/my-java-microprofile-image -c java-microprofile
-
Log in to your cluster. For example, for OKD:
oc login <master node IP>:8443
-
Clone the Kabanero pipelines repo.
git clone https://github.com/kabanero-io/kabanero-pipelines
cd kabanero-pipelines
-
Create Pipeline resources.
Use the pipeline-resource-template.yaml file to create the PipelineResources. The pipeline-resource-template.yaml file is provided in the Kabanero pipelines manual-pipeline-runs directory. Update the docker-image URL. You can use the sample GitHub repo or update it to point to your own GitHub repo.
-
After you update the file, apply it as shown in the following example:
oc apply -f <collection-name>-pipeline-resources.yaml
The installations that activate the featured collections also activate the tasks and pipelines. If you are creating a new task or pipeline, activate it manually, as shown in the following example.
oc apply -f <task.yaml>
oc apply -f <pipeline.yaml>
A sample manual-pipeline-run-template.yaml file is provided in the /pipelines/manual-pipeline-runs directory. Rename the template file (for example, to pipeline-run.yaml) and update it to replace collection-name with the name of your collection. After you update the file, run it as shown in the following example.
oc apply -f <collection-name>-pipeline-run.yaml
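The rename-and-substitute step above can be scripted. In this hedged sketch, the template body is a stand-in for the real manual-pipeline-run-template.yaml shipped in the repo, and java-microprofile is used as the example collection name:

```shell
# Stand-in for the repo's template; the real file has more fields.
printf 'metadata:\n  name: collection-name-manual-pipeline-run\n' \
  > manual-pipeline-run-template.yaml

# Copy the template and substitute your collection name everywhere.
cp manual-pipeline-run-template.yaml java-microprofile-pipeline-run.yaml
sed -i.bak 's/collection-name/java-microprofile/g' java-microprofile-pipeline-run.yaml
grep 'name:' java-microprofile-pipeline-run.yaml
```

After the substitution, apply the resulting file with oc apply as shown above.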
You can check the status of the pipeline run from the OKD console, command line, or Tekton dashboard.
-
Log in to the Tekton Dashboard and click "Pipeline runs" in the sidebar menu.
-
Find your pipeline run in the list and click it to check its status and find logs. You can see the logs and status of each step and task.
Enter the following command in the terminal:
oc get pipelineruns
oc -n kabanero describe pipelinerun.tekton.dev/<pipeline-run-name>
You can also see the pods for the pipeline runs; run oc describe and oc logs on them to get more details.
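For example, the pod inspection might look like the following sketch, where the pod name is a placeholder you take from the oc get pods output:

```shell
# List the pods created by the pipeline runs in the kabanero namespace.
oc -n kabanero get pods

# Inspect one pod's events and container statuses (name is a placeholder).
oc -n kabanero describe pod <pipeline-run-pod-name>

# Fetch logs from all of the pod's containers.
oc -n kabanero logs <pipeline-run-pod-name> --all-containers
```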
If the pipeline run was successful, you see a Docker image in your Docker registry and a pod that's running your application.
To find solutions for common issues and troubleshoot problems with pipelines, see the Kabanero Pipelines Troubleshooting Guide.