From 5df87899038bd6205b51d87f0954d2b3b616868a Mon Sep 17 00:00:00 2001 From: Moritz Wiesinger Date: Mon, 13 Feb 2023 13:36:51 +0100 Subject: [PATCH] chore: fix markdown linter errors (#824) --- CONTRIBUTING.md | 43 ++-- README.md | 207 +++++++++++------- assets/logo/README.md | 7 +- dashboards/grafana/README.md | 16 +- dashboards/grafana/configmap/README.md | 9 +- dashboards/grafana/import/README.md | 6 +- docs/README.md | 17 +- docs/content/_index.md | 28 ++- docs/content/en/docs/concepts/apps/_index.md | 32 +-- .../en/docs/concepts/evaluations/_index.md | 5 +- .../en/docs/concepts/metrics/_index.md | 51 +++-- .../overview/klc-cert-manager/_index.md | 16 +- docs/content/en/docs/concepts/tasks/_index.md | 26 ++- .../en/docs/concepts/workloads/_index.md | 15 +- docs/content/en/docs/crd-ref/_index.md | 4 +- docs/content/en/docs/crd-ref/crd-template.md | 10 +- docs/content/en/docs/getting-started.md | 116 ++++++---- .../content/en/docs/snippets/tasks/install.md | 17 +- .../docs/snippets/tasks/k8s_version_output.md | 6 +- .../en/docs/tasks/add-app-awareness/index.md | 46 ++-- .../implement-slack-notification/_index.md | 18 +- docs/content/en/docs/tasks/install/_index.md | 2 +- .../restart-application-deployment/_index.md | 54 +++-- .../en/docs/tasks/write-tasks/_index.md | 6 +- docs/markdownlint-rules.yaml | 1 + examples/sample-app/README.md | 41 ++-- examples/support/argo/README.md | 68 ++++-- examples/support/observability/README.md | 98 +++++---- .../observability/config/prometheus/README.md | 9 +- functions-runtime/README.md | 41 +++- helm/chart/README.md | 9 +- klt-cert-manager/README.md | 58 +++-- operator/README.md | 53 +++-- operator/test/component/DEVELOPER.md | 129 ++++++----- operator/test/e2e/DEVELOPER.md | 40 ++-- scheduler/README.md | 42 ++-- scheduler/test/e2e/DEVELOPER.md | 36 +-- 37 files changed, 831 insertions(+), 551 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 0b6eb7347a..e53edba53b 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,40 +1,45 @@ # Contributing to the Keptn Lifecycle Toolkit -We are thrilled to have you join us as a contributor! The Keptn Lifecycle Toolkit is a community-driven project and we greatly value collaboration. There are various ways to contribute to the Lifecycle Toolkit, and all contributions are highly valued. Please, explore the options below to learn more about how you can contribute. +We are thrilled to have you join us as a contributor! +The Keptn Lifecycle Toolkit is a community-driven project and we greatly value collaboration. +There are various ways to contribute to the Lifecycle Toolkit, and all contributions are highly valued. +Please, explore the options below to learn more about how you can contribute. -# How to Contribute ## Prerequisites + ## Linters This project uses a set of linters to ensure good code quality. In order to make proper use of those linters inside an IDE, the following configuration is required. -Further information can also be found in the [`golangci-lint` documentation](https://golangci-lint.run/usage/integrations/). +Further information can also be found in +the [`golangci-lint` documentation](https://golangci-lint.run/usage/integrations/). ### Visual Studio Code -In Visual Studio Code the [Golang](https://marketplace.visualstudio.com/items?itemName=aldijav.golangwithdidi) extension is required. +In Visual Studio Code the [Golang](https://marketplace.visualstudio.com/items?itemName=aldijav.golangwithdidi) +extension is required. 
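Independent of the IDE integration, a quick way to verify the linter setup is to run it once from the repository root. This is a sketch; it assumes `golangci-lint` is installed locally, and the flags mirror the defaults configured below:

```shell
# Run the project's linters once; --fast and --fix mirror the IDE configuration below
golangci-lint run --fast --fix
```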
Adding the following lines to the `Golang` extension configuration file will enable all linters used in this project. -``` +```json "go.lintTool": { - "type": "string", - "default": "golangci-lint", - "description": "GolangGCI Linter", - "scope": "resource", - "enum": [ - "golangci-lint", - ] + "type": "string", + "default": "golangci-lint", + "description": "GolangGCI Linter", + "scope": "resource", + "enum": [ + "golangci-lint", + ] }, "go.lintFlags": { - "type": "array", - "items": { - "type": "string" - }, - "default": ["--fast", "--fix"], - "description": "Flags to pass to GCI Linter", - "scope": "resource" + "type": "array", + "items": { + "type": "string" + }, + "default": ["--fast", "--fix"], + "description": "Flags to pass to GCI Linter", + "scope": "resource" }, ``` diff --git a/README.md b/README.md index 5796565d80..e9129f2a76 100644 --- a/README.md +++ b/README.md @@ -7,8 +7,10 @@ ![status](https://img.shields.io/badge/status-not--for--production-red) [![GitHub Discussions](https://img.shields.io/github/discussions/keptn/lifecycle-toolkit)](https://github.com/keptn/lifecycle-toolkit/discussions) -The goal of this toolkit is to introduce a more “cloud-native” approach for pre- and post-deployment, as well as the concept of application health checks. -It is an incubating project, under the umbrella of the [Keptn Application Lifecycle working group](https://github.com/keptn/wg-app-lifecycle). +The goal of this toolkit is to introduce a more “cloud-native” approach for pre- and post-deployment, as well as the +concept of application health checks. +It is an incubating project, under the umbrella of +the [Keptn Application Lifecycle working group](https://github.com/keptn/wg-app-lifecycle). ## Watch the KubeCon 2022 Detroit Demo @@ -18,16 +20,16 @@ Click to watch it on YouTube: ## Deploy the latest release -**Known Limitations** -* Kubernetes >=1.24 is needed to deploy the Lifecycle Toolkit -* The Lifecycle Toolkit is currently not compatible with [vcluster](https://github.com/loft-sh/vcluster) +### Known Limitations -**Installation** +- Kubernetes >=1.24 is needed to deploy the Lifecycle Toolkit +- The Lifecycle Toolkit is currently not compatible with [vcluster](https://github.com/loft-sh/vcluster) +### Installation -``` +```shell kubectl apply -f https://github.com/keptn/lifecycle-toolkit/releases/download/v0.5.0/manifest.yaml ``` @@ -36,12 +38,17 @@ kubectl apply -f https://github.com/keptn/lifecycle-toolkit/releases/download/v0 to install the latest release of the Lifecycle Toolkit. The Lifecycle Toolkit uses the OpenTelemetry collector to provide a vendor-agnostic implementation of how to receive, -process and export telemetry data. To install it, follow their [installation instructions](https://opentelemetry.io/docs/collector/getting-started/). +process and export telemetry data. To install it, follow +their [installation instructions](https://opentelemetry.io/docs/collector/getting-started/). We also provide some more information about this in our [observability example](./examples/support/observability/). -The Lifecycle Toolkit includes a Mutating Webhook which requires TLS certificates to be mounted as a volume in its pod. The certificate creation -is handled automatically by [klt-cert-manager](https://github.com/keptn/lifecycle-toolkit/blob/main/klt-cert-manager/README.md). Versions 0.5.0 and earlier have a hard dependency on the [cert-manager](https://cert-manager.io). 
-See [installation guideline](https://github.com/keptn/lifecycle-toolkit/blob/main/docs/content/docs/snippets/tasks/install.md) for more info. +The Lifecycle Toolkit includes a Mutating Webhook which requires TLS certificates to be mounted as a volume in its pod. +The certificate creation +is handled automatically +by [klt-cert-manager](https://github.com/keptn/lifecycle-toolkit/blob/main/klt-cert-manager/README.md). Versions 0.5.0 +and earlier have a hard dependency on the [cert-manager](https://cert-manager.io). +See [installation guideline](https://github.com/keptn/lifecycle-toolkit/blob/main/docs/content/docs/snippets/tasks/install.md) +for more info. ## Goals @@ -53,15 +60,19 @@ The Keptn Lifecycle Toolkit aims to support Cloud Native teams with: - Standardized way for pre- and post-deployment tasks - Provide out-of-the-box Observability of the deployment cycle -![](./assets/operator-maturity.jpg) +![Operator Maturity Model with third level circled in](./assets/operator-maturity.jpg) -The Keptn Lifecycle Toolkit could be seen as a general purpose and declarative [Level 3 operator](https://operatorframework.io/operator-capabilities/) for your Application. -For this reason, the Keptn Lifecycle Toolkit is agnostic to deployment tools that are used and works with any GitOps solution. +The Keptn Lifecycle Toolkit could be seen as a general purpose and +declarative [Level 3 operator](https://operatorframework.io/operator-capabilities/) for your Application. +For this reason, the Keptn Lifecycle Toolkit is agnostic to deployment tools that are used and works with any GitOps +solution. ## How to use -The Keptn Lifecycle Toolkit monitors manifests that have been applied against the Kubernetes API and reacts if it finds a workload with special annotations/labels. -For this, you should annotate your [Workload](https://kubernetes.io/docs/concepts/workloads/) with (at least) the following annotations: +The Keptn Lifecycle Toolkit monitors manifests that have been applied against the Kubernetes API and reacts if it finds +a workload with special annotations/labels. +For this, you should annotate your [Workload](https://kubernetes.io/docs/concepts/workloads/) with (at least) the +following annotations: ```yaml keptn.sh/app: myAwesomeAppName @@ -69,7 +80,9 @@ keptn.sh/workload: myAwesomeWorkload keptn.sh/version: myAwesomeWorkloadVersion ``` -Alternatively, you can use Kubernetes [Recommended Labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/) to annotate your workload: +Alternatively, you can use +Kubernetes [Recommended Labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/) to +annotate your workload: ```yaml app.kubernetes.io/part-of: myAwesomeAppName @@ -77,7 +90,9 @@ app.kubernetes.io/name: myAwesomeWorkload app.kubernetes.io/version: myAwesomeWorkloadVersion ``` -In general, the Keptn Annotations/Labels take precedence over the Kubernetes recommended labels. If there is no version annotation/label and there is only one container in the pod, the Lifecycle Toolkit will take the image tag as version (if it is not "latest"). +In general, the Keptn Annotations/Labels take precedence over the Kubernetes recommended labels. If there is no version +annotation/label and there is only one container in the pod, the Lifecycle Toolkit will take the image tag as version ( +if it is not "latest"). 
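As a sketch, this is how those recommended labels might look on a minimal Deployment; all names and the image below are illustrative placeholders, not part of the examples above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-awesome-workload
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: myAwesomeWorkload
  template:
    metadata:
      labels:
        # The Lifecycle Toolkit reads these labels from the pod template
        app.kubernetes.io/part-of: myAwesomeAppName
        app.kubernetes.io/name: myAwesomeWorkload
        app.kubernetes.io/version: myAwesomeWorkloadVersion
    spec:
      containers:
        - name: server
          image: registry.example.com/my-awesome-workload:1.0.0
```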
In case you want to run pre- and post-deployment checks, further annotations are necessary: @@ -86,11 +101,15 @@ keptn.sh/pre-deployment-tasks: verify-infrastructure-problems keptn.sh/post-deployment-tasks: slack-notification,performance-test ``` -The value of these annotations are Keptn [CRDs](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) +The value of these annotations are +Keptn [CRDs](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) called [KeptnTaskDefinition](#keptn-task-definition)s. These CRDs contains re-usable "functions" that can -executed before and after the deployment. In this example, before the deployment starts, a check for open problems in your infrastructure -is performed. If everything is fine, the deployment continues and afterward, a slack notification is sent with the result of -the deployment and a pipeline to run performance tests is invoked. Otherwise, the deployment is kept in a pending state until +executed before and after the deployment. In this example, before the deployment starts, a check for open problems in +your infrastructure +is performed. If everything is fine, the deployment continues and afterward, a slack notification is sent with the +result of +the deployment and a pipeline to run performance tests is invoked. Otherwise, the deployment is kept in a pending state +until the infrastructure is capable to accept deployments again. A more comprehensive example can be found in our [examples folder](./examples/sample-app/) where we @@ -109,11 +128,11 @@ Afterward, you can monitor the status of the deployment using kubectl get keptnworkloadinstance -n podtato-kubectl -w ``` -The deployment for a Workload will stay in a `Pending` state until the respective pre-deployment check is completed. Afterward, the deployment will start and when it is `Succeeded`, the post-deployment checks will start. +The deployment for a Workload will stay in a `Pending` state until the respective pre-deployment check is completed. +Afterward, the deployment will start and when it is `Succeeded`, the post-deployment checks will start. ## Architecture - The Keptn Lifecycle Toolkit is composed of the following components: - Keptn Lifecycle Operator @@ -123,25 +142,30 @@ The Keptn Lifecycle Operator contains several controllers for Keptn CRDs and a M The Keptn Scheduler ensures that Pods are started only after the pre-deployment checks have finished. A Kubernetes Manifest, which is annotated with Keptn specific annotations, gets applied to the Kubernetes Cluster. -Afterward, the Keptn Scheduler gets injected (via Mutating Webhook), and Kubernetes Events for Pre-Deployment are sent to the event stream. +Afterward, the Keptn Scheduler gets injected (via Mutating Webhook), and Kubernetes Events for Pre-Deployment are sent +to the event stream. The Event Controller watches for events and triggers a Kubernetes Job to fullfil the Pre-Deployment. After the Pre-Deployment has finished, the Keptn Scheduler schedules the Pod to be deployed. -The KeptnApp and KeptnWorkload Controllers watch for the workload resources to finish and then generate a Post-Deployment Event. -After the Post-Deployment checks, SLOs can be validated using an interface for retrieving SLI data from a provider, e.g, [Prometheus](https://prometheus.io/). -Finally, Keptn Lifecycle Toolkit exposes Metrics and Traces of the whole Deployment cycle with [OpenTelemetry](https://opentelemetry.io/). 
+The KeptnApp and KeptnWorkload Controllers watch for the workload resources to finish and then generate a +Post-Deployment Event. +After the Post-Deployment checks, SLOs can be validated using an interface for retrieving SLI data from a provider, +e.g, [Prometheus](https://prometheus.io/). +Finally, Keptn Lifecycle Toolkit exposes Metrics and Traces of the whole Deployment cycle +with [OpenTelemetry](https://opentelemetry.io/). -![](./assets/architecture.png) +![KLT Architecture](./assets/architecture.png) ## How it works -The following sections will provide insights on each component of the Keptn Lifecycle Toolkit in terms of their purpose, responsibility, and communication with other components. +The following sections will provide insights on each component of the Keptn Lifecycle Toolkit in terms of their purpose, +responsibility, and communication with other components. Furthermore, there will be a description on what CRD they monitor and a general overview of their fields. ### Webhook Annotating a namespace subjects it to the effects of the mutating webhook: -``` +```yaml apiVersion: v1 kind: Namespace metadata: @@ -149,21 +173,26 @@ metadata: annotations: keptn.sh/lifecycle-toolkit: "enabled" # this lines tells the webhook to handle the namespace ``` + However, the mutating webhook will modify only resources in the annotated namespace that have Keptn annotations. When the webhook receives a request for a new pod, it will look for the workload annotations: +```yaml +keptn.sh/workload: "some-workload-name" ``` -keptn.sh/workload -``` -The mutation consists in changing the scheduler used for the deployment with the Keptn Scheduler. Webhook then creates a workload and app resource per annotated resource. + +The mutation consists in changing the scheduler used for the deployment with the Keptn Scheduler. Webhook then creates a +workload and app resource per annotated resource. You can also specify a custom app definition with the annotation: +```yaml +keptn.sh/app: "your-app-name" ``` -keptn.sh/app -``` + In this case the webhook will not generate an app, but it will expect that the user will provide one. The webhook should be as fast as possible and should not create/change any resource. -Additionally, it will compute a version string, using a hash function that takes certain properties of the pod as parameters +Additionally, it will compute a version string, using a hash function that takes certain properties of the pod as +parameters (e.g. the images of its containers). Next, it will look for an existing instance of a `Workload CRD` for the given workload name: @@ -171,66 +200,79 @@ Next, it will look for an existing instance of a `Workload CRD` for the given wo In addition, it will include a reference to the ReplicaSet UID of the pod (i.e. the Pods owner), or the pod itself, if it does not have an owner. - If it does not find a workload instance, it will create one containing the previously computed version string. - In addition, it will include a reference to the ReplicaSet UID of the pod (i.e. the Pods owner), or the pod itself, if it does not have an owner. + In addition, it will include a reference to the ReplicaSet UID of the pod (i.e. the Pods owner), or the pod itself, if + it does not have an owner. 
It will use the following annotations for the specification of the pre/post deployment checks that should be executed for the `Workload`: - - `keptn.sh/pre-deployment-tasks: task1,task2` - - `keptn.sh/post-deployment-tasks: task1,task2` +- `keptn.sh/pre-deployment-tasks: task1,task2` +- `keptn.sh/post-deployment-tasks: task1,task2` and for the Evaluations: - - `keptn.sh/pre-deployment-evaluations: my-evaluation-definition` - - `keptn.sh/post-deployment-evaluations: my-eval-definition` - -After either one of those actions has been taken, the webhook will set the scheduler of the pod and allow the pod to be scheduled. +- `keptn.sh/pre-deployment-evaluations: my-evaluation-definition` +- `keptn.sh/post-deployment-evaluations: my-eval-definition` +After either one of those actions has been taken, the webhook will set the scheduler of the pod and allow the pod to be +scheduled. ### Scheduler -After the Webhook mutation, the Keptn-Scheduler will handle the annotated resources. The scheduling flow follows the default scheduler behavior, -since it implements a scheduler plugin based on the [scheduling framework]( https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/). -For each pod, at the very end of the scheduling cycle, the plugin verifies whether the pre deployment checks have terminated, by retrieving the current status of the WorkloadInstance. Only if that is successful, the pod is bound to a node. - +After the Webhook mutation, the Keptn-Scheduler will handle the annotated resources. The scheduling flow follows the +default scheduler behavior, +since it implements a scheduler plugin based on +the [scheduling framework]( https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/). +For each pod, at the very end of the scheduling cycle, the plugin verifies whether the pre deployment checks have +terminated, by retrieving the current status of the WorkloadInstance. Only if that is successful, the pod is bound to a +node. ### Keptn App An App contains information about all workloads and checks associated with an application. -It will use the following structure for the specification of the pre/post deployment and pre/post evaluations checks that should be executed at app level: +It will use the following structure for the specification of the pre/post deployment and pre/post evaluations checks +that should be executed at app level: -``` +```yaml apiVersion: lifecycle.keptn.sh/v1alpha2 kind: KeptnApp metadata: -name: podtato-head -namespace: podtato-kubectl + name: podtato-head + namespace: podtato-kubectl spec: -version: "1.3" -revision: "1" -workloads: -- name: podtato-head-left-arm -version: 0.1.0 -- name: podtato-head-left-leg -postDeploymentTasks: -- post-deployment-hello -preDeploymentEvaluations: -- my-prometheus-definition + version: "1.3" + revision: 1 + workloads: + - name: podtato-head-left-arm + version: 0.1.0 + - name: podtato-head-left-leg + version: 1.2.3 + postDeploymentTasks: + - post-deployment-hello + preDeploymentEvaluations: + - my-prometheus-definition ``` -While changes in the workload version will affect only workload checks, a change in the app version will also cause a new execution of app level checks. + +While changes in the workload version will affect only workload checks, a change in the app version will also cause a +new execution of app level checks. ### Keptn Workload -A Workload contains information about which tasks should be performed during the `preDeployment` as well as the `postDeployment` -phase of a deployment. 
In its state it keeps track of the currently active `Workload Instances`, which are responsible for doing those checks for +A Workload contains information about which tasks should be performed during the `preDeployment` as well as +the `postDeployment` +phase of a deployment. In its state it keeps track of the currently active `Workload Instances`, which are responsible +for doing those checks for a particular instance of a Deployment/StatefulSet/ReplicaSet (e.g. a Deployment of a certain version). ### Keptn Workload Instance -A Workload Instance is responsible for executing the pre- and post deployment checks of a workload. In its state, it keeps track of the current status of all checks, as well as the overall state of +A Workload Instance is responsible for executing the pre- and post deployment checks of a workload. In its state, it +keeps track of the current status of all checks, as well as the overall state of the Pre Deployment phase, which can be used by the scheduler to tell that a pod can be allowed to be placed on a node. -Workload Instances have a reference to the respective Deployment/StatefulSet/ReplicaSet, to check if it has reached the desired state. If it detects that the referenced object has reached -its desired state (e.g. all pods of a deployment are up and running), it will be able to tell that a `PostDeploymentCheck` can be triggered. +Workload Instances have a reference to the respective Deployment/StatefulSet/ReplicaSet, to check if it has reached the +desired state. If it detects that the referenced object has reached +its desired state (e.g. all pods of a deployment are up and running), it will be able to tell that +a `PostDeploymentCheck` can be triggered. ### Keptn Task Definition @@ -263,7 +305,8 @@ spec: In the code section, it is possible to define a full-fletched Deno script. A further example, is available [here](./examples/taskonly-hello-keptn/inline/taskdefinition.yaml). -To runtime can also fetch the script on the fly from a remote webserver. For this, the CRD should look like the following: +To runtime can also fetch the script on the fly from a remote webserver. For this, the CRD should look like the +following: ```yaml apiVersion: lifecycle.keptn.sh/v1alpha2 @@ -302,8 +345,8 @@ The Lifecycle Toolkit passes the values defined inside the `map` field as a JSON At the moment, multi-level maps are not supported. The JSON object can be read through the environment variable `DATA` using `Deno.env.get("DATA");`. K8s secrets can also be passed to the function using the `secureParameters` field. -Here, the `secret` value is the K8s secret name that will be mounted into the runtime and made available to the function via the environment variable `SECURE_DATA`. - +Here, the `secret` value is the K8s secret name that will be mounted into the runtime and made available to the function +via the environment variable `SECURE_DATA`. ### Keptn Task @@ -312,6 +355,7 @@ The execution is done spawning a K8s Job to handle a single Task. In its state, it keeps track of the current status of the K8s Job created. ### Keptn Evaluation Definition + A `KeptnEvaluationDefinition` is a CRD used to define evaluation tasks that can be run by the Keptn Lifecycle Toolkit as part of pre- and post-analysis phases of a workload or application. 
@@ -333,9 +377,9 @@ spec: evaluationTarget: >4 ``` - ### Keptn Evaluation Provider -A `KeptnEvaluationProvider` is a CRD used to define evaluation provider, which will provide data for the + +A `KeptnEvaluationProvider` is a CRD used to define evaluation provider, which will provide data for the pre- and post-analysis phases of a workload or application. A Keptn evaluation provider looks like the following: @@ -347,10 +391,12 @@ metadata: name: prometheus spec: targetServer: "http://prometheus-k8s.monitoring.svc.cluster.local:9090" - secretName: prometheusLoginCredentials + secretKeyRef: + key: prometheusLoginCredentials ``` ### Keptn Metric + A `KeptnMetric` is a CRD used to define SLI provider with a query and to store metric data fetched from the provider. Providing the metrics as CRD into a K8s cluster will facilitate the reusability of this data across multiple components. Furthermore, this allows using multiple observability platforms for different metrics. @@ -370,9 +416,12 @@ spec: fetchIntervalSeconds: 5 ``` -To be able to use `KeptnMetric` as part of your evaluation, you need to add `keptn-metric` as your value for `.spec.source` in `KeptnEvaluationDefiniton`. Further you need specify -the `.spec.objectives[i].name` of `KeptnEvaluationDefiniton` to the same value as it is stored in `.metadata.name` of `KeptnMetric` resource. The `.spec.objectives[i].query` parameter -of `KeptnEvaluationDefiniton` will be ignored and `.spec.query` of `KeptnMetric` will be use instead as a query to fetch the data. +To be able to use `KeptnMetric` as part of your evaluation, you need to add `keptn-metric` as your value +for `.spec.source` in `KeptnEvaluationDefiniton`. Further you need specify +the `.spec.objectives[i].name` of `KeptnEvaluationDefiniton` to the same value as it is stored in `.metadata.name` +of `KeptnMetric` resource. The `.spec.objectives[i].query` parameter +of `KeptnEvaluationDefiniton` will be ignored and `.spec.query` of `KeptnMetric` will be use instead as a query to fetch +the data. ## Install a dev build @@ -397,17 +446,19 @@ make build-deploy-dev-environment ``` - ## License Please find more information in the [LICENSE](LICENSE) file. ## Thanks to all the people who have contributed 💜 + + Made with [contrib.rocks](https://contrib.rocks). + diff --git a/assets/logo/README.md b/assets/logo/README.md index 07a10302ec..8f7da127c4 100644 --- a/assets/logo/README.md +++ b/assets/logo/README.md @@ -4,7 +4,6 @@ This directory contains the Keptn Lifecycle Controller logos in different format ## Logos -| Name | Logo | Format | -| ------------------------------------------ | -------------------------------------------------------------------- | ------ | -| `keptn_lifecycle_controller_logo.png` | | PNG | - +| Name | Logo | Format | +| ------------------------------------------ |----------------------------------------------------| ------ | +| `keptn_lifecycle_controller_logo.png` | ![KLT Logo](./keptn_lifecycle_controller_logo.png) | PNG | diff --git a/dashboards/grafana/README.md b/dashboards/grafana/README.md index 5cdd40019d..c4d304491b 100644 --- a/dashboards/grafana/README.md +++ b/dashboards/grafana/README.md @@ -3,22 +3,28 @@ This folder contains the Grafana dashboards for the Keptn Lifecycle Toolkit. ## Installing the dashboards -It is assumed, that there is a Grafana Instance available. In our provided examples, the dashboards are automatically provisioned. 
If you want to install the dashboards manually, you can use the following steps:

```shell
# This defaults to http://localhost:3000, but can be changed by setting the GRAFANA_SCHEME, GRAFANA_URL and GRAFANA_PORT environment variable
# The default credentials are admin:admin, but can be changed by setting the GRAFANA_USERNAME and GRAFANA_PASSWORD environment variable
make install
```

## Changing the dashboards

The dashboards can be changed in the Grafana UI. To export dashboards, export them using the share button and replace
them in this folder.

## Exporting the dashboards for the Examples

You can prepare the dashboards for the examples and import them using the following command:

```shell
make generate
```

diff --git a/dashboards/grafana/configmap/README.md b/dashboards/grafana/configmap/README.md
index ad49f1f540..954e830a7d 100644
--- a/dashboards/grafana/configmap/README.md
+++ b/dashboards/grafana/configmap/README.md
@@ -1,8 +1,7 @@
# Autogenerated Files - Do not change

## Grafana Dashboards - ConfigMaps

These files can be used to autoprovision Grafana dashboards in Kubernetes.
More information: <https://grafana.com/docs/grafana/latest/administration/provisioning/#dashboards>
diff --git a/dashboards/grafana/import/README.md b/dashboards/grafana/import/README.md
index c416f8b0fd..9160dd2b4d 100644
--- a/dashboards/grafana/import/README.md
+++ b/dashboards/grafana/import/README.md
@@ -1,7 +1,7 @@
# Autogenerated Files - Do not change

## Grafana Dashboards - Import

These dashboards can be imported into Grafana using the API.

To import them, use `make import` in the makefile of the `dashboards` directory.
diff --git a/docs/README.md b/docs/README.md
index 99c1e855c9..717a2e3afa 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -1,13 +1,15 @@
# Choosing where to add documentation

If the change to the docs needs to reflect the next version of KLT, please edit them here, following the instructions
below.
For already existing documentation versions, edit them directly
from <https://github.com/keptn-sandbox/lifecycle-toolkit-docs> or from <https://lifecycle.keptn.sh/>.

## Adding documentation to the dev repo

To verify your changes to the dev documentation you can use the makefile:

```shell
cd lifecycle-toolkit/docs

make clone
make build
make server
```

The server is then running on <http://localhost:1314/docs-dev>.
Any modification in the docs folder will be reflected on the server under the dev revision.
You can modify the content in realtime to verify the correct behaviour of links and such. ### Markdown linting + To check your markdown files for linter errors, run the following from the repo root: -``` +```shell make markdownlint ``` To use the auto-fix option, run: -``` +```shell make markdownlint-fix ``` diff --git a/docs/content/_index.md b/docs/content/_index.md index 62260d611a..e50c91f509 100644 --- a/docs/content/_index.md +++ b/docs/content/_index.md @@ -3,18 +3,20 @@ title = "Home" +++ -{{< blocks/cover title="Welcome to the Keptn Lifecycle Toolkit Documentation" image_anchor="top" height="half" color="primary" >}} + + +{{< blocks/cover title="Welcome to the Keptn Lifecycle Toolkit Documentation" image_anchor="top" height="half" color=" +primary" >}} {{< /blocks/cover >}} - {{% blocks/lead color="white" %}} [![Keptn Lifecycle Toolkit in a Nutshell](https://img.youtube.com/vi/K-cvnZ8EtGc/0.jpg)](https://www.youtube.com/watch?v=K-cvnZ8EtGc) {{% /blocks/lead %}} @@ -24,14 +26,16 @@ title = "Home" See Keptn [in Action](https://youtube.com/playlist?list=PL6i801Rjt9DbikPPILz38U1TLMrEjppzZ) {{% /blocks/feature %}} - -{{% blocks/feature icon="fab fa-github" title="Contributions welcome!" url="https://github.com/keptn/lifecycle-toolkit" %}} -We do a [Pull Request](https://github.com/keptn/lifecycle-toolkit/pulls) contributions workflow on **GitHub**. New users are always welcome! +{{% blocks/feature icon="fab fa-github" title="Contributions welcome!" url="https://github.com/keptn/lifecycle-toolkit" +%}} +We do a [Pull Request](https://github.com/keptn/lifecycle-toolkit/pulls) contributions workflow on **GitHub**. New users +are always welcome! {{% /blocks/feature %}} - {{% blocks/feature icon="fab fa-twitter" title="Follow us on Twitter!" url="https://twitter.com/keptnProject" %}} For announcement of latest features etc. {{% /blocks/feature %}} -{{< /blocks/section >}} \ No newline at end of file +{{< /blocks/section >}} + + diff --git a/docs/content/en/docs/concepts/apps/_index.md b/docs/content/en/docs/concepts/apps/_index.md index 1d50d552ac..8600d420db 100644 --- a/docs/content/en/docs/concepts/apps/_index.md +++ b/docs/content/en/docs/concepts/apps/_index.md @@ -8,23 +8,27 @@ hidechildren: true # this flag hides all sub-pages in the sidebar-multicard.html --- An App contains information about all workloads and checks associated with an application. -It will use the following structure for the specification of the pre/post deployment and pre/post evaluations checks that should be executed at app level: +It will use the following structure for the specification of the pre/post deployment and pre/post evaluations checks +that should be executed at app level: -``` +```yaml apiVersion: lifecycle.keptn.sh/v1alpha2 kind: KeptnApp metadata: -name: podtato-head -namespace: podtato-kubectl + name: podtato-head + namespace: podtato-kubectl spec: -version: "1.3" -workloads: -- name: podtato-head-left-arm -version: 0.1.0 -- name: podtato-head-left-leg -postDeploymentTasks: -- post-deployment-hello -preDeploymentEvaluations: -- my-prometheus-definition + version: "1.3" + workloads: + - name: podtato-head-left-arm + version: 0.1.0 + - name: podtato-head-left-leg + version: 1.2.3 + postDeploymentTasks: + - post-deployment-hello + preDeploymentEvaluations: + - my-prometheus-definition ``` -While changes in the workload version will affect only workload checks, a change in the app version will also cause a new execution of app level checks. 
\ No newline at end of file + +While changes in the workload version will affect only workload checks, a change in the app version will also cause a +new execution of app level checks. diff --git a/docs/content/en/docs/concepts/evaluations/_index.md b/docs/content/en/docs/concepts/evaluations/_index.md index 3dfb3064e7..fd4887dfa9 100644 --- a/docs/content/en/docs/concepts/evaluations/_index.md +++ b/docs/content/en/docs/concepts/evaluations/_index.md @@ -9,6 +9,7 @@ hidechildren: true # this flag hides all sub-pages in the sidebar-multicard.html ### Keptn Evaluation Definition + A `KeptnEvaluationDefinition` is a CRD used to define evaluation tasks that can be run by the Keptn Lifecycle Toolkit as part of pre- and post-analysis phases of a workload or application. @@ -30,8 +31,8 @@ spec: evaluationTarget: >4 ``` - ### Keptn Evaluation Provider + A `KeptnEvaluationProvider` is a CRD used to define evaluation provider, which will provide data for the pre- and post-analysis phases of a workload or application. @@ -45,4 +46,4 @@ metadata: spec: targetServer: "http://prometheus-k8s.monitoring.svc.cluster.local:9090" secretName: prometheusLoginCredentials -``` \ No newline at end of file +``` diff --git a/docs/content/en/docs/concepts/metrics/_index.md b/docs/content/en/docs/concepts/metrics/_index.md index e3ebdfe170..66e4c441f5 100644 --- a/docs/content/en/docs/concepts/metrics/_index.md +++ b/docs/content/en/docs/concepts/metrics/_index.md @@ -8,7 +8,12 @@ hidechildren: true # this flag hides all sub-pages in the sidebar-multicard.html --- ### Keptn Metric -A `KeptnMetric` is a CRD representing a metric. The metric will be collected from the provider specified in the specs.provider.name field. The query is a string in the provider-specific query language, used to obtain a metric. Providing the metrics as CRD into a K8s cluster will facilitate the reusability of this data across multiple components. Furthermore, this allows using multiple observability platforms for different metrics. Please note, there is a limitation that `KeptnMetric` resource needs to be created only in `keptn-lifecycle-toolkit-system` namespace. + +A `KeptnMetric` is a CRD representing a metric. The metric will be collected from the provider specified in the +specs.provider.name field. The query is a string in the provider-specific query language, used to obtain a metric. +Providing the metrics as CRD into a K8s cluster will facilitate the reusability of this data across multiple components. +Furthermore, this allows using multiple observability platforms for different metrics. Please note, there is a +limitation that `KeptnMetric` resource needs to be created only in `keptn-lifecycle-toolkit-system` namespace. A `KeptnMetric` looks like the following: @@ -25,23 +30,24 @@ spec: fetchIntervalSeconds: 5 ``` -Keptn metrics can be exposed as OTel metrics via port `9999` of the KLT operator. To expose them, the env variable `EXPOSE_KEPTN_METRICS` in the operator manifest needs to be set to `true`. The default value of this variable is `true`. To access the metrics, use the following command: +Keptn metrics can be exposed as OTel metrics via port `9999` of the KLT operator. To expose them, the env +variable `EXPOSE_KEPTN_METRICS` in the operator manifest needs to be set to `true`. The default value of this variable +is `true`. 
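As an illustration, in the operator's Deployment manifest this is a plain container environment variable. The following excerpt is a sketch, not the literal manifest:

```yaml
# Excerpt from the KLT operator container spec (illustrative)
env:
  - name: EXPOSE_KEPTN_METRICS
    value: "true"
```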
To access the metrics, use the following command: -``` +```shell kubectl port-forward deployment/klc-controller-manager 9999 -n keptn-lifecycle-toolkit-system ``` and access the metrics via your browser with: -``` -http://localhost:9999/metrics -``` - +```http://localhost:9999/metrics``` #### Accessing Metrics via the Kubernetes Custom Metrics API -`KeptnMetrics` that are located in the `keptn-lifecycle-toolkit-system` namespace can also be retrieved via the Kubernetes Custom Metrics API. -This makes it possible to refer to these metrics via the Kubernetes *HorizontalPodAutoscaler*, as in the following example: +`KeptnMetrics` that are located in the `keptn-lifecycle-toolkit-system` namespace can also be retrieved via the +Kubernetes Custom Metrics API. +This makes it possible to refer to these metrics via the Kubernetes *HorizontalPodAutoscaler*, as in the following +example: ```yaml apiVersion: autoscaling/v2 @@ -57,17 +63,17 @@ spec: minReplicas: 1 maxReplicas: 10 metrics: - - type: Object - object: - metric: - name: keptnmetric-sample - describedObject: - apiVersion: metrics.keptn.sh/v1alpha1 - kind: KeptnMetric - name: keptnmetric-sample - target: - type: Value - value: "10" + - type: Object + object: + metric: + name: keptnmetric-sample + describedObject: + apiVersion: metrics.keptn.sh/v1alpha1 + kind: KeptnMetric + name: keptnmetric-sample + target: + type: Value + value: "10" ``` You can also use the `kubectl raw` command to retrieve the values of a `KeptnMetric`, as in the following example: @@ -102,7 +108,8 @@ $ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta2/namespaces/podtato-kube } ``` -You can also filter based on matching labels. So to e.g. retrieve all metrics that are labelled with `app=frontend`, you can use the following command: +You can also filter based on matching labels. So to e.g. retrieve all metrics that are labelled with `app=frontend`, you +can use the following command: ```shell $ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta2/namespaces/podtato-kubectl/keptnmetrics.metrics.sh/*/*?labelSelector=app%3Dfrontend" | jq . @@ -132,4 +139,4 @@ $ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta2/namespaces/podtato-kube } ] } -``` \ No newline at end of file +``` diff --git a/docs/content/en/docs/concepts/overview/klc-cert-manager/_index.md b/docs/content/en/docs/concepts/overview/klc-cert-manager/_index.md index 068d2d0399..b76c12d29a 100644 --- a/docs/content/en/docs/concepts/overview/klc-cert-manager/_index.md +++ b/docs/content/en/docs/concepts/overview/klc-cert-manager/_index.md @@ -8,13 +8,17 @@ hidechildren: true # this flag hides all sub-pages in the sidebar-multicard.html ### Keptn Cert Manager -The Lifecycle Toolkit includes a Mutating Webhook which requires TLS certificates to be mounted as a volume in its pod. In version 0.6.0 and later, the certificate creation -is handled automatically by the [klt-cert-manager](https://github.com/keptn/lifecycle-toolkit/blob/main/klt-cert-manager/README.md). +The Lifecycle Toolkit includes a Mutating Webhook which requires TLS certificates to be mounted as a volume in its pod. +In version 0.6.0 and later, the certificate creation +is handled automatically by +the [klt-cert-manager](https://github.com/keptn/lifecycle-toolkit/blob/main/klt-cert-manager/README.md). -The certificate is created as a secret in the `keptn-lifecycle-toolkit-system` namespace with a renewal threshold of 12 hours. 
-If it expires, the [klt-cert-manager](https://github.com/keptn/lifecycle-toolkit/blob/main/klt-cert-manager/README.md) renews it. +The certificate is created as a secret in the `keptn-lifecycle-toolkit-system` namespace with a renewal threshold of 12 +hours. +If it expires, the [klt-cert-manager](https://github.com/keptn/lifecycle-toolkit/blob/main/klt-cert-manager/README.md) +renews it. The Lifecycle Toolkit operator waits for a valid certificate to be ready. The certificate is mounted on an empty dir volume in the operator. -When a certificate is left over from an older version, the webhook or the operator may generate errors because of an invalid certificate. To solve this, delete the certificate and restart the operator. - +When a certificate is left over from an older version, the webhook or the operator may generate errors because of an +invalid certificate. To solve this, delete the certificate and restart the operator. diff --git a/docs/content/en/docs/concepts/tasks/_index.md b/docs/content/en/docs/concepts/tasks/_index.md index 4faa5e9cbf..f7236be9a9 100644 --- a/docs/content/en/docs/concepts/tasks/_index.md +++ b/docs/content/en/docs/concepts/tasks/_index.md @@ -12,7 +12,8 @@ hidechildren: true # this flag hides all sub-pages in the sidebar-multicard.html A `KeptnTaskDefinition` is a CRD used to define tasks that can be run by the Keptn Lifecycle Toolkit as part of pre- and post-deployment phases of a deployment. The task definition is a [Deno](https://deno.land/) script -Please, refer to the [function runtime](https://github.com/keptn/lifecycle-toolkit/tree/main/functions-runtime) for more information about the runtime. +Please, refer to the [function runtime](https://github.com/keptn/lifecycle-toolkit/tree/main/functions-runtime) for more +information about the runtime. In the future, we also intend to support other runtimes, especially running a container image directly. A task definition can be configured in three different ways: @@ -36,6 +37,7 @@ spec: ``` In the code section, it is possible to define a full-fletched Deno script. + ```yaml apiVersion: lifecycle.keptn.sh/v1alpha2 kind: KeptnTaskDefinition @@ -49,12 +51,13 @@ spec: let data; let name; data = JSON.parse(text); - + name = data.name console.log("Hello, " + name + " new"); ``` -The runtime can also fetch the script on the fly from a remote webserver. For this, the CRD should look like the following: +The runtime can also fetch the script on the fly from a remote webserver. For this, the CRD should look like the +following: ```yaml apiVersion: lifecycle.keptn.sh/v1alpha2 @@ -67,7 +70,9 @@ spec: url: ``` -An example is available [here](https://github.com/keptn-sandbox/lifecycle-toolkit-examples/blob/main/sample-app/version-1/app-pre-deploy.yaml). +An example is +available [here](https://github.com/keptn-sandbox/lifecycle-toolkit-examples/blob/main/sample-app/version-1/app-pre-deploy.yaml) +. Finally, `KeptnTaskDefinition` can build on top of other `KeptnTaskDefinition`s. This is a common use case where a general function can be re-used in multiple places with different parameters. @@ -96,16 +101,17 @@ A context environment variable is available via `Deno.env.get("CONTEXT")`. 
It can be used in the following way:

```js
// CONTEXT is delivered as a JSON string; parse it before accessing its fields
let contextdata = JSON.parse(Deno.env.get("CONTEXT"));

if (contextdata.objectType == "Application") {
  let application_name = contextdata.appName;
  let application_version = contextdata.appVersion;
}

if (contextdata.objectType == "Workload") {
  let application_name = contextdata.appName;
  let workload_name = contextdata.workloadName;
  let workload_version = contextdata.workloadVersion;
}
```

## Input Parameters and Secret Handling

As you might have noticed, Task Definitions also have the possibility to use input parameters.
The Lifecycle Toolkit passes the values defined inside the `map` field as a JSON object.
At the moment, multi-level maps are not supported.
The JSON object can be read through the environment variable `DATA` using `Deno.env.get("DATA");`.

K8s secrets can also be passed to the function using the `secureParameters` field.
Here, the `secret` value is the K8s secret name that will be mounted into the runtime and made available to the function
via the environment variable `SECURE_DATA`.

```yaml
apiVersion: lifecycle.keptn.sh/v1alpha2
kind: KeptnTaskDefinition
metadata:
  name: slack-notification-dev
spec:
  function:
    functionRef:
      name: slack-notification
    parameters:
      map:
        textMessage: "This is my notification"
    secureParameters:
      secret: slack-token
```

### Keptn Task

A Task is responsible for executing the TaskDefinition of a workload.
The execution is done by spawning a K8s Job to handle a single Task.
In its state, it keeps track of the current status of the K8s Job created.
diff --git a/docs/content/en/docs/concepts/workloads/_index.md b/docs/content/en/docs/concepts/workloads/_index.md
index 847a840b85..78b1cb837c 100644
--- a/docs/content/en/docs/concepts/workloads/_index.md
+++ b/docs/content/en/docs/concepts/workloads/_index.md
@@ -7,13 +7,18 @@ weight: 10
hidechildren: true # this flag hides all sub-pages in the sidebar-multicard.html
---

A Workload contains information about which tasks should be performed during the `preDeployment` as well as
the `postDeployment`
phase of a deployment. In its state it keeps track of the currently active `Workload Instances`, which are responsible
for doing those checks for
a particular instance of a Deployment/StatefulSet/ReplicaSet (e.g. a Deployment of a certain version).

### Keptn Workload Instance

A Workload Instance is responsible for executing the pre- and post-deployment checks of a workload. In its state, it
keeps track of the current status of all checks, as well as the overall state of
the Pre Deployment phase, which can be used by the scheduler to tell that a pod can be allowed to be placed on a node.
Workload Instances have a reference to the respective Deployment/StatefulSet/ReplicaSet, to check if it has reached the
desired state. If it detects that the referenced object has reached
its desired state (e.g. all pods of a deployment are up and running), it will be able to tell that
a `PostDeploymentCheck` can be triggered.
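One way to observe this behavior from the outside is to watch the workload instance while the referenced object rolls out. This is a sketch; the namespace and resource names are illustrative:

```shell
# Watch the workload instance move through its phases
kubectl get keptnworkloadinstances -n my-namespace -w

# In parallel, follow the referenced Deployment until it reaches its desired state
kubectl rollout status deployment/my-workload -n my-namespace
```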
diff --git a/docs/content/en/docs/crd-ref/_index.md b/docs/content/en/docs/crd-ref/_index.md index 0048673fa1..f4271ed1c0 100644 --- a/docs/content/en/docs/crd-ref/_index.md +++ b/docs/content/en/docs/crd-ref/_index.md @@ -5,8 +5,8 @@ weight: 100 hidechildren: false # this flag hides all sub-pages in the sidebar-multicard.html --- -This section provides comprehensive reference information -about the [Custom Resource Definitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) (CRDs) +This section provides comprehensive reference information about the +[Custom Resource Definitions (CRDs)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) that are defined for the Keptn Lifecycle Toolkit. **NOTE: This section is under development. diff --git a/docs/content/en/docs/crd-ref/crd-template.md b/docs/content/en/docs/crd-ref/crd-template.md index 351eaaee48..d5dcec6336 100644 --- a/docs/content/en/docs/crd-ref/crd-template.md +++ b/docs/content/en/docs/crd-ref/crd-template.md @@ -13,17 +13,15 @@ Copy this template to create a new CRD reference page. 1. Populate the page with appropriate content ## Synopsis -``` -``` ## Parameters - + ## Usage - - + + ## Examples @@ -32,5 +30,3 @@ Copy this template to create a new CRD reference page. ## Differences between versions ## See also - - diff --git a/docs/content/en/docs/getting-started.md b/docs/content/en/docs/getting-started.md index 7798dbcdcf..ef7e788d8d 100644 --- a/docs/content/en/docs/getting-started.md +++ b/docs/content/en/docs/getting-started.md @@ -4,154 +4,194 @@ linktitle: Getting Started description: Learn how to use the Keptn Lifecycle Toolkit. weight: 15 cascade: - github_subdir: "docs/content/en/docs" - path_base_for_github_subdir: "/content/en/docs-dev" +github_subdir: "docs/content/en/docs" +path_base_for_github_subdir: "/content/en/docs-dev" --- -`kubectl create -f deployment.yaml` will "blindly" deploy workloads, but who needs to be notified that this deployment is about to happen? Is your infrastructure ready? Do your downstream services meet their SLOs? Can your infrastructure handle the deployment? +`kubectl create -f deployment.yaml` will "blindly" deploy workloads, but who needs to be notified that this deployment +is about to happen? Is your infrastructure ready? Do your downstream services meet their SLOs? Can your infrastructure +handle the deployment? -After the deployment, beyond the standard k8s probes, how can you integrate with other tooling to automatically test the deployment? How do you know the deployment is meeting its SLOs? Has the deployment caused any issues downstream? Who needs to know that the deployment was successful (or unsuccessful)? +After the deployment, beyond the standard k8s probes, how can you integrate with other tooling to automatically test the +deployment? How do you know the deployment is meeting its SLOs? Has the deployment caused any issues downstream? Who +needs to know that the deployment was successful (or unsuccessful)? -The Keptn Lifecycle Toolkit (KLT) "wraps" a standard Kubernetes deployment and provides both workload (single service) tests and SLO evaluations. Multiple workloads can also be logically grouped (and evaluated) as a single cohesive unit: a Keptn Application. In other words, an application is a collection of multiple workloads. +The Keptn Lifecycle Toolkit (KLT) "wraps" a standard Kubernetes deployment and provides both workload (single service) +tests and SLO evaluations. 
Multiple workloads can also be logically grouped (and evaluated) as a single cohesive unit: a +Keptn Application. In other words, an application is a collection of multiple workloads. -The Keptn Lifecycle Toolkit is a tool and vendor-neutral mechanism - it does not depend on particular GitOps tooling - ArgoCD, Flux, Gitlab or others - KLT works with them all. +The Keptn Lifecycle Toolkit is a tool and vendor-neutral mechanism - it does not depend on particular GitOps tooling - +ArgoCD, Flux, Gitlab or others - KLT works with them all. -The Keptn Lifecycle Toolkit emits signals at every stage (k8s events, OpenTelemetry metrics and traces) to ensure your deployments are observable. +The Keptn Lifecycle Toolkit emits signals at every stage (k8s events, OpenTelemetry metrics and traces) to ensure your +deployments are observable. Available steps (applicable to both workload and application entities): + * Pre-Deployment Tasks: e.g. checking for dependant services, checking if the cluster is ready for the deployment, etc. * Pre-Deployment Evaluations: e.g. evaluate metrics before your application gets deployed (e.g. layout of the cluster) * Post-Deployment Tasks: e.g. trigger a test, trigger a deployment to another cluster, etc. * Post-Deployment Evaluations: e.g. evaluate the deployment, evaluate the test results, etc. ## What you will learn here + * Use the Keptn Lifecycle Toolkit to control the deployment of your application * Connect the lifecycle-toolkit to Prometheus * Use pre-deployment tasks to check if a dependency is met before deploying a workload * Use post-deployment tasks on an application level to send a notification ## Prerequisites + * A Kubernetes cluster >= Kubernetes 1.24 - * If you don't have one, we recommend [Kubernetes-in-Docker(KinD)](https://kind.sigs.k8s.io/docs/user/quick-start/) to set up your local development environment + * If you don't have one, we recommend [Kubernetes-in-Docker(KinD)](https://kind.sigs.k8s.io/docs/user/quick-start/) + to set up your local development environment * kubectl installed on your system - * See (https://kubernetes.io/docs/tasks/tools/) for more information + * See () for more information ## Check Kubernetes Version Run the following and ensure both client and server versions are greater than or equal to v1.24. -``` +```shell kubectl version --short ``` -The output should look like this. In this example, both client and server are at v1.24.0 so the Keptn Lifecycle Toolkit will work. +The output should look like this. In this example, both client and server are at v1.24.0 so the Keptn Lifecycle Toolkit +will work. {{% readfile file="./snippets/tasks/k8s_version_output.md" markdown="true" %}} ## Install the Keptn Lifecycle Toolkit + {{% readfile file="./snippets/tasks/install.md" markdown="true" %}} ## Check out the Getting Started Repository -For the further progress of this guide, we need a sample application as well as some helpers which make it easier for your to set up your environment. These things can be found in our Getting Started repository which can be checked out as follows: -```console +For the further progress of this guide, we need a sample application as well as some helpers which make it easier for +your to set up your environment. 
These things can be found in our Getting Started repository which can be checked out as +follows: + +```shell git clone https://github.com/keptn-sandbox/lifecycle-toolkit-examples.git cd lifecycle-toolkit-examples ``` ## Install the required observability features -The Keptn Lifecycle Toolkit emits OpenTelemetry data as standard but the toolkit does not come pre-bundled with Observability backend tooling. This is deliberate as it provides flexibility for you to bring your own Observability backend which consumes this emitted data. + +The Keptn Lifecycle Toolkit emits OpenTelemetry data as standard but the toolkit does not come pre-bundled with +Observability backend tooling. This is deliberate as it provides flexibility for you to bring your own Observability +backend which consumes this emitted data. In order to use the observability features of the lifecycle toolkit, we need a monitoring and tracing backend. -In this guide, we will use [Prometheus](https://prometheus.io/) for Metrics, [Jaeger](https://jaegertracing.io) for Traces and [Grafana](https://github.com/grafana/) for Dashboarding. +In this guide, we will use [Prometheus](https://prometheus.io/) for Metrics, [Jaeger](https://jaegertracing.io) for +Traces and [Grafana](https://github.com/grafana/) for Dashboarding. -``` +```shell make install-observability make restart-lifecycle-toolkit ``` ## The Demo Application -For this demonstration, we use a slightly modified version of [the PodTatoHead](https://github.com/podtato-head/podtato-head). + +For this demonstration, we use a slightly modified version +of [the PodTatoHead](https://github.com/podtato-head/podtato-head). ![img.png](assets/podtatohead.png) -Over time, we will evolve this application from a simple manifest to a Keptn-managed application. We will install it first with kubectl and add pre- as well as post-deployment tasks. For this, we will check if the entry service is available before the other ones get scheduled. Afterward, we will add evaluations to ensure that our infrastructure is in a good shape before we deploy the application. Finally, we will evolve to a GitOps driven deployment and will notify an external webhook service when the deployment has finished. +Over time, we will evolve this application from a simple manifest to a Keptn-managed application. We will install it +first with kubectl and add pre- as well as post-deployment tasks. For this, we will check if the entry service is +available before the other ones get scheduled. Afterward, we will add evaluations to ensure that our infrastructure is +in a good shape before we deploy the application. Finally, we will evolve to a GitOps driven deployment and will notify +an external webhook service when the deployment has finished. ## Install the Demo Application (Version 1) -In the first version of the Demo application, the Keptn Lifecycle Toolkit evaluates metrics provided by prometheus and checks if a specified amount of CPUs is available before deploying the application + +In the first version of the Demo application, the Keptn Lifecycle Toolkit evaluates metrics provided by prometheus and +checks if a specified amount of CPUs is available before deploying the application To install it, simply apply the manifest: + ```shell make deploy-version-1 ``` You can watch the progress of the deployment as follows: -
-
Watch workload state
-When the Lifecycle Toolkit detects workload labels ("app.kubernetes.io/name" and "keptn.sh/workload") on a resource, a KeptnWorkloadInstance (kwi) resource will be created. Using this resource you can watch the progress of the deployment.
+
+### Watch workload state
+
+When the Lifecycle Toolkit detects workload labels ("app.kubernetes.io/name" and "keptn.sh/workload") on a resource, a
+KeptnWorkloadInstance (kwi) resource will be created. Using this resource you can watch the progress of the deployment.

```shell
kubectl get keptnworkloadinstances -n podtato-kubectl
```

-This will show the current status of the Workloads and in which phase they are at the moment. You can get more detailed information about the workloads by describing one of the resources:
+This will show the current status of the Workloads and in which phase they are at the moment. You can get more detailed
+information about the workloads by describing one of the resources:

```shell
kubectl describe keptnworkloadinstances podtato-head-podtato-head-entry -n podtato-kubectl
```

Note that there is more detailed information in the event stream of the object.
-
-
-
Watch application state
-Although you didn't specify an application in your manifest, the Lifecycle Toolkit assumed that this is a single-service application and created an ApplicationVersion (kav) resource for you.
+### Watch application state
+
+Although you didn't specify an application in your manifest, the Lifecycle Toolkit assumed that this is a single-service
+application and created an ApplicationVersion (kav) resource for you.

Using `kubectl get keptnappversions -n podtato-kubectl` you can see the state of these resources.
-
-
-Watch pods +### Watch pods + Obviously, you should see that the pods are starting normally. You can watch the state of the pods using: ```shell kubectl get pods -n podtato-kubectl ``` -
-
Furthermore, you can port-forward the podtato-head service to your local machine and access the application via your
browser:

```shell
make port-forward-grafana
```
-
-In your browser (http://localhost:3000, Log in with the user 'admin' and the password 'admin'), you can open the Dashboard `Keptn Applications` and see the current state of the application which should be similar to the following:
+
+In your browser (<http://localhost:3000>, log in with the user 'admin' and the password 'admin'), you can open the
+Dashboard `Keptn Applications` and see the current state of the application, which should be similar to the following:

![img.png](assets/grafana.png)

In this screen you get the following information:
+
* Successful/Failed Deployments
* Time between Deployments
* Deployment Time per Version
* The link to the Trace of the deployment

-After some time (~60 seconds), you should see one more failed deployment in your dashboard. You can click on the link to the trace and see the reason for the failure:
+After some time (~60 seconds), you should see one more failed deployment in your dashboard. You can click on the link to
+the trace and see the reason for the failure:

![trace-failed.png](assets/trace-failed.png)

-In this case, we see the name of the failed pre-deployment evaluation and the reason for the failure. In this case, the minimum amount of CPUs is not met. This is a problem we can solve by changing the treshold in the evaluation file.
+Here, we see the name of the failed pre-deployment evaluation and the reason for the failure: the
+minimum number of CPUs is not met. This is a problem we can solve by changing the threshold in the evaluation file.

## Install the Demo Application (Version 2)
+
-To achieve this, we changed the operator in the evaluation file (sample-app/version-2/app-pre-deploy-eval) from `<` to `>` and applied the new manifest:
+To achieve this, we changed the operator in the evaluation file (sample-app/version-2/app-pre-deploy-eval) from `<`
+to `>` and applied the new manifest:

```shell
kubectl apply -f sample-app/version-2
```

-After this, you can inspect the new state of the application using the same commands as before. You should see that the deployment is now successful and that the trace is also updated. You should also see in the Grafana Dashboards that the deployment was successful.
+After this, you can inspect the new state of the application using the same commands as before. You should see that the
+deployment is now successful and that the trace is also updated. You should also see in the Grafana Dashboards that the
+deployment was successful.

Congratulations! You successfully deployed the first application using the Keptn Lifecycle Toolkit!
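
For quick reference, the inspection commands used throughout this guide, collected in one place (a sketch; resource and
namespace names as used above):

```shell
# Workload-level progress of the deployment
kubectl get keptnworkloadinstances -n podtato-kubectl

# Application-level progress of the deployment
kubectl get keptnappversions -n podtato-kubectl

# Detailed phase information and events for a single workload
kubectl describe keptnworkloadinstances podtato-head-podtato-head-entry -n podtato-kubectl
```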
diff --git a/docs/content/en/docs/snippets/tasks/install.md b/docs/content/en/docs/snippets/tasks/install.md
index 5a833e555a..1ddca960b0 100644
--- a/docs/content/en/docs/snippets/tasks/install.md
+++ b/docs/content/en/docs/snippets/tasks/install.md
@@ -1,12 +1,16 @@
+# Installation Instructions

## Install version 0.6.0 and above

In version 0.6.0 and later, you can install the Lifecycle Toolkit using the current release manifest:
+

-```
+
+```shell
kubectl apply -f https://github.com/keptn/lifecycle-toolkit/releases/download/v0.5.0/manifest.yaml
kubectl wait --for=condition=Available deployment/klc-controller-manager -n keptn-lifecycle-toolkit-system --timeout=120s
```
+

The Lifecycle Toolkit and its dependencies are now installed and ready to use.

@@ -15,17 +19,14 @@ The Lifecycle Toolkit and its dependencies are now installed and ready to use.

You must first install *cert-manager* with the following commands:

-
-```
+```shell
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.11.0/cert-manager.yaml
kubectl wait --for=condition=Available deployment/cert-manager-webhook -n cert-manager --timeout=60s
```

-After that, you can install the Lifecycle Toolkit with:
+After that, you can install the Lifecycle Toolkit `<VERSION>` with:

-```
+```shell
kubectl apply -f https://github.com/keptn/lifecycle-toolkit/releases/download/<VERSION>/manifest.yaml
kubectl wait --for=condition=Available deployment/klc-controller-manager -n keptn-lifecycle-toolkit-system --timeout=120s
-``` \ No newline at end of file
+```
diff --git a/docs/content/en/docs/snippets/tasks/k8s_version_output.md b/docs/content/en/docs/snippets/tasks/k8s_version_output.md
index 09a7f9b8e3..eea156d127 100644
--- a/docs/content/en/docs/snippets/tasks/k8s_version_output.md
+++ b/docs/content/en/docs/snippets/tasks/k8s_version_output.md
@@ -1,7 +1,9 @@
-```
+
+```shell
$ kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.24.0
Kustomize Version: v4.5.4
Server Version: v1.24.0
-``` \ No newline at end of file
+```
+
diff --git a/docs/content/en/docs/tasks/add-app-awareness/index.md b/docs/content/en/docs/tasks/add-app-awareness/index.md
index a8b2fc6fbb..2534e3f67b 100644
--- a/docs/content/en/docs/tasks/add-app-awareness/index.md
+++ b/docs/content/en/docs/tasks/add-app-awareness/index.md
@@ -1,26 +1,37 @@
-## Add Application Awareness
-In the previous step, we installed the demo application without any application awareness. This means that the Lifecycle Toolkit assumed that every workload is a single-service application at the moment and created the Application resources for you.
+# Add Application Awareness
+
+In the previous step, we installed the demo application without any application awareness. This means that the Lifecycle
+Toolkit assumed that every workload is a single-service application at the moment and created the Application resources
+for you.
+
+To get the overall state of an application, we need a grouping of workloads, called KeptnApp in the Lifecycle Toolkit.
+To get this working, we need to modify our application manifest with two things:

-To get the overall state of an application, we need a grouping of workloads, called KeptnApp in the Lifecycle Toolkit. 
To get this working, we need to modify our application manifest with two things:

* Add an "app.kubernetes.io/part-of" or "keptn.sh/app" label to the deployment
* Create an application resource

-### Preparing the Manifest and create an App resource
+## Preparing the Manifest and creating an App resource
+
---
-**TL;DR**
-You can also used the prepared manifest and apply it directly using: `kubectl apply -k sample-app/version-2/` and proceed [here](#watch-application-behavior).
+### TL;DR
+
+You can also use the prepared manifest and apply it directly using: `kubectl apply -k sample-app/version-2/` and
+proceed [here](#watch-application-behavior).

---
-**Otherwise**
+
+### Otherwise

Create a temporary directory and copy the base manifest there:
+
```shell
mkdir ./my-deployment
cp demo-application/base/manifest.yml ./my-deployment
```

Now, open the manifest in your favorite editor and add the following label to the deployments, e.g.:
+
```yaml
---
apiVersion: apps/v1
@@ -46,7 +57,7 @@ spec:
      terminationGracePeriodSeconds: 5
      containers:
        - name: server
-          image: ghcr.io/podtato-head/right-leg:0.2.7 
+          image: ghcr.io/podtato-head/right-leg:0.2.7
          imagePullPolicy: Always
          ports:
            - containerPort: 9000
@@ -58,6 +69,7 @@ spec:

Now, update the version of the workloads in the manifest to `0.2.0`.
Finally, create an application resource (app.yaml) and save it in the directory as well:
+
```yaml
apiVersion: lifecycle.keptn.sh/v1alpha2
kind: KeptnApp
@@ -82,22 +94,26 @@ spec:
```

Now, apply the manifests:
+
```shell
kubectl apply -f ./my-deployment/.
```

-### Watch Application behavior
-Now, your application gets deployed in an application aware way. This means that pre-deployment tasks and evaluations would be executed if you would have any. The same would happen for post-deployment tasks and evaluations after the last workload has been deployed successfully.
+## Watch Application behavior
+
+Now, your application gets deployed in an application-aware way. This means that pre-deployment tasks and evaluations
+would be executed if you had any. The same would happen for post-deployment tasks and evaluations after the last
+workload has been deployed successfully.

-
-
Watch application state
Now that you defined your application, you could watch the state of the whole application using:

```shell
kubectl get keptnappversions -n podtato-kubectl
```
-
-
You should see that the application is in a progressing state as long as the workloads (`kubectl get kwi`) are progressing. After the last application has been deployed, and post-deployment tasks and evaluations are finished (there are none at this point), the state should switch to completed.
+You should see that the application is in a progressing state as long as the workloads (`kubectl get kwi`) are
+progressing. After the last application has been deployed, and post-deployment tasks and evaluations are finished (there
+are none at this point), the state should switch to completed.

-Now, we have deployed an application and are able to get the total state of the application state. Metrics and traces get exported and now we're ready to dive deeper in the world of Pre- and Post-Deployment Tasks. \ No newline at end of file
+Now, we have deployed an application and are able to get the overall state of the application. Metrics and traces
+get exported and now we're ready to dive deeper into the world of Pre- and Post-Deployment Tasks.
diff --git a/docs/content/en/docs/tasks/implement-slack-notification/_index.md b/docs/content/en/docs/tasks/implement-slack-notification/_index.md
index cb0f07d892..c77f37c786 100644
--- a/docs/content/en/docs/tasks/implement-slack-notification/_index.md
+++ b/docs/content/en/docs/tasks/implement-slack-notification/_index.md
@@ -7,12 +7,10 @@ weight: 24
hidechildren: true # this flag hides all sub-pages in the sidebar-multicard.html
---

-# Overview
-This section describes how to **prepare and enable** post-deployment tasks to send notifications to slack using webhooks.
-
## Create Slack Webhook

-At first, create an incoming slack webhook. Necessary information is available in the [slack api page](https://api.slack.com/messaging/webhooks).
+First, create an incoming slack webhook.
+Necessary information is available in the [slack api page](https://api.slack.com/messaging/webhooks).
Once you create the webhook, you will get a URL similar to the example below.

`https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX`
@@ -22,18 +20,20 @@ Once you create the webhook, you will get a URL similar to below example.
## Create slack-secret

Create a `slack-secret.yaml` definition using the following command.
-This will create a kubernetes secret named `slack-secret.yaml` in the `examples/sample-app/base` directory. Before running
-this command change your current directory into `examples/sample-app`.
+This will create a Kubernetes secret named `slack-secret.yaml` in the `examples/sample-app/base` directory.
+Before running this command, change your current directory to `examples/sample-app`.

-```bash
+```shell
kubectl create secret generic slack-secret --from-literal=SECURE_DATA='{"slack_hook":,"text":"Deployed PodTatoHead Application"}' -n podtato-kubectl -oyaml --dry-run=client > base/slack-secret.yaml
```
+
## Enable post deployment task

To enable Slack notification, add `post-deployment-notification` as a postDeploymentTasks entry in the
[examples/sample-app/base/app.yaml](https://github.com/keptn/lifecycle-toolkit/blob/main/examples/sample-app/base/app.yaml)
file as shown below. 
```yaml
postDeploymentTasks:
  - post-deployment-notification
-``` \ No newline at end of file
+```
diff --git a/docs/content/en/docs/tasks/install/_index.md b/docs/content/en/docs/tasks/install/_index.md
index 134a41f3fa..653c0c69c6 100644
--- a/docs/content/en/docs/tasks/install/_index.md
+++ b/docs/content/en/docs/tasks/install/_index.md
@@ -7,4 +7,4 @@ weight: 15
hidechildren: true # this flag hides all sub-pages in the sidebar-multicard.html
---

-{{% readfile file="../../snippets/tasks/install.md" markdown="true" %}} \ No newline at end of file
+{{% readfile file="../../snippets/tasks/install.md" markdown="true" %}}
diff --git a/docs/content/en/docs/tasks/restart-application-deployment/_index.md b/docs/content/en/docs/tasks/restart-application-deployment/_index.md
index 513ef2a875..2f005fe091 100644
--- a/docs/content/en/docs/tasks/restart-application-deployment/_index.md
+++ b/docs/content/en/docs/tasks/restart-application-deployment/_index.md
@@ -9,17 +9,23 @@ hidechildren: true # this flag hides all sub-pages in the sidebar-multicard.html

## Restart an Application Deployment

-During the deployment of a `KeptnApp`, it might be that the deployment fails due to an unsuccessful pre-deployment evaluation or pre-deployment task.
-This could happen because of, e.g., a misconfigured target value of a `KeptnEvaluationDefinition`, or a wrong URL being checked in a pre deployment check.
-
-To retry a `KeptnApp` deployment without incrementing the version of the `KeptnApp`, we introduced the concept of **revisions** for a `KeptnAppVersion`. This means that
-whenever the spec of a `KeptnApp` changes, even though the version stays the same, the KLT Operator will create a new revision of the `KeptnAppVersion` referring to the `KeptnApp`.
-
-This way, when a `KeptnApp` failed due to a misconfigured pre-deployment check, you can first fix the configuration of the `KeptnTaskDefinition`/`KeptnEvaluationDefinition`, then
+During the deployment of a `KeptnApp`, it might be that the deployment fails due to an unsuccessful pre-deployment
+evaluation or pre-deployment task.
+This could happen because of, e.g., a misconfigured target value of a `KeptnEvaluationDefinition`, or a wrong URL being
+checked in a pre-deployment check.
+
+To retry a `KeptnApp` deployment without incrementing the version of the `KeptnApp`, we introduced the concept of
+**revisions** for a `KeptnAppVersion`. This means that
+whenever the spec of a `KeptnApp` changes, even though the version stays the same, the KLT Operator will create a new
+revision of the `KeptnAppVersion` referring to the `KeptnApp`.
+
+This way, when a `KeptnApp` failed due to a misconfigured pre-deployment check, you can first fix the configuration of
+the `KeptnTaskDefinition`/`KeptnEvaluationDefinition`, then
increase the value of `spec.revision` of the `KeptnApp` and finally apply the updated `KeptnApp` manifest.
This will result in a restart of the `KeptnApp`. Afterwards, all related `KeptnWorkloadInstances` will automatically refer to the newly
-created revision of the `KeptnAppVersion` to determine whether they are allowed to enter their respective deployment phase.
+created revision of the `KeptnAppVersion` to determine whether they are allowed to enter their respective deployment
+phase. 
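
Each revision can also be observed directly: every bump of `spec.revision` produces a new `KeptnAppVersion`, named after
the app, its version, and the revision. A quick sketch (the namespace is an assumption — use whatever namespace your
`KeptnApp` lives in):

```shell
# Lists one KeptnAppVersion per revision, named <app>-<version>-<revision>
kubectl get keptnappversions.lifecycle.keptn.sh -n my-namespace
```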
To illustrate this, let's have a look at the following example: @@ -88,7 +94,8 @@ spec: value: "9000" ``` -In this example, the `KeptnApp` executes a pre-deployment check which clearly fails due to the `pre-deployment-check` task, and will therefore not be able to proceed with the deployment. +In this example, the `KeptnApp` executes a pre-deployment check which clearly fails due to the `pre-deployment-check` +task, and will therefore not be able to proceed with the deployment. After applying this manifest, you can inspect the status of the created `KeptnAppVersion`: @@ -98,17 +105,22 @@ NAME APPNAME VERSION PHASE podtato-head-0.1.1-1 podtato-head 0.1.1 AppPreDeployTasks ``` -You will notice that the `KeptnAppVersion` will stay in the `AppPreDeployTasks` phase for a while, due to the pre-check trying to run until a certain failure threshold is reached. -Eventually, you will find the `KeptnAppVersion`'s `PredeploymentPhase` to be in a `Failed` state, with the remaining phases being `Deprecated`. +You will notice that the `KeptnAppVersion` will stay in the `AppPreDeployTasks` phase for a while, due to the pre-check +trying to run until a certain failure threshold is reached. +Eventually, you will find the `KeptnAppVersion`'s `PredeploymentPhase` to be in a `Failed` state, with the remaining +phases being `Deprecated`. + ```shell $ kubectl get keptnappversions.lifecycle.keptn.sh -n restartable-apps -owide NAME APPNAME VERSION PHASE PREDEPLOYMENTSTATUS PREDEPLOYMENTEVALUATIONSTATUS WORKLOADOVERALLSTATUS POSTDEPLOYMENTSTATUS POSTDEPLOYMENTEVALUATIONSTATUS podtato-head-0.1.1-1 podtato-head 0.1.1 AppPreDeployTasks Failed Deprecated Deprecated Deprecated Deprecated ``` + Now, to fix the deployment of this application, we first need to fix the task that has failed earlier. -To do so, edit the `pre-deployment-check` `KeptnTaskDefinition` to the following (`kubectl -n restartable-apps edit keptntaskdefinitions.lifecycle.keptn.sh pre-deployment-check`): +To do so, edit the `pre-deployment-check` `KeptnTaskDefinition` to the +following (`kubectl -n restartable-apps edit keptntaskdefinitions.lifecycle.keptn.sh pre-deployment-check`): ```yaml apiVersion: lifecycle.keptn.sh/v1alpha2 @@ -123,7 +135,8 @@ spec: console.error("Success") ``` -After we have done that, we can restart the deployment of our `KeptnApplication` by incrementing the `spec.revision` field by one +After we have done that, we can restart the deployment of our `KeptnApplication` by incrementing the `spec.revision` +field by one (`kubectl -n restartable-apps edit keptnapps.lifecycle.keptn.sh podtato-head`): ```yaml @@ -142,7 +155,7 @@ spec: - pre-deployment-check ``` -After those changes have been made, you will notice a new revision of the `podtato-head` `KeptnAppVersion`: +After those changes have been made, you will notice a new revision of the `podtato-head` `KeptnAppVersion`: ```shell $ kubectl get keptnappversions.lifecycle.keptn.sh -n restartable-apps @@ -151,16 +164,21 @@ podtato-head-0.1.1-1 podtato-head 0.1.1 AppPreDeployTasks podtato-head-0.1.1-2 podtato-head 0.1.1 AppDeploy ``` -As you will see, the newly created revision `podtato-head-0.1.1-2` has made it beyond the pre-deployment check phase and has reached its `AppDeployPhase`. +As you will see, the newly created revision `podtato-head-0.1.1-2` has made it beyond the pre-deployment check phase and +has reached its `AppDeployPhase`. 
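
If you prefer not to edit the resource interactively, the same revision bump can be applied with `kubectl patch`; a
sketch assuming the names from this example:

```shell
# Bump spec.revision (here from 1 to 2) to restart the KeptnApp deployment
kubectl -n restartable-apps patch keptnapps.lifecycle.keptn.sh podtato-head \
  --type merge -p '{"spec":{"revision":2}}'
```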
-
You can also verify the execution of the `pre-deployment-check` by retrieving the list of `KeptnTasks` in
+the `restartable-apps` namespace:
+

```shell
$ kubectl get keptntasks.lifecycle.keptn.sh -n restartable-apps
NAME                             APPNAME        APPVERSION   WORKLOADNAME   WORKLOADVERSION   JOB NAME                              STATUS
pre-pre-deployment-check-49827   podtato-head   0.1.1                                         klc-pre-pre-deployment-check--77601   Failed
pre-pre-deployment-check-65056   podtato-head   0.1.1                                         klc-pre-pre-deployment-check--57313   Succeeded
```
+

-You will notice that for both the `KeptnAppVersions` and `KeptnTasks` the previous failed instances are still available, as this might be useful historical data to keep track of
-what went wrong during earlier deployment attempts. \ No newline at end of file
+You will notice that for both the `KeptnAppVersions` and `KeptnTasks` the previous failed instances are still available,
+as this might be useful historical data to keep track of
+what went wrong during earlier deployment attempts.
diff --git a/docs/content/en/docs/tasks/write-tasks/_index.md b/docs/content/en/docs/tasks/write-tasks/_index.md
index 25bfe22684..6fdc7e5e75 100644
--- a/docs/content/en/docs/tasks/write-tasks/_index.md
+++ b/docs/content/en/docs/tasks/write-tasks/_index.md
@@ -35,7 +35,8 @@ spec:

In the code section, it is possible to define a full-fledged Deno script.

-The runtime can also fetch the script on the fly from a remote webserver. For this, the CRD should look like the following:
+The runtime can also fetch the script on the fly from a remote webserver. For this, the CRD should look like the
+following:

```yaml
apiVersion: lifecycle.keptn.sh/v1alpha2
@@ -72,4 +73,5 @@ The Lifecycle Toolkit passes the values defined inside the `map` field as a JSON
At the moment, multi-level maps are not supported.
The JSON object can be read through the environment variable `DATA` using `Deno.env.get("DATA");`.
Kubernetes secrets can also be passed to the function using the `secureParameters` field.
-Here, the `secret` value is the K8s secret name that will be mounted into the runtime and made available to the function via the environment variable `SECURE_DATA`.
+Here, the `secret` value is the K8s secret name that will be mounted into the runtime and made available to the function
+via the environment variable `SECURE_DATA`.
diff --git a/docs/markdownlint-rules.yaml b/docs/markdownlint-rules.yaml
index a9e13b6d5d..bd7375922e 100644
--- a/docs/markdownlint-rules.yaml
+++ b/docs/markdownlint-rules.yaml
@@ -1,3 +1,4 @@
line-length:
  line_length: 120
  tables: false
+  code_blocks: false
diff --git a/examples/sample-app/README.md b/examples/sample-app/README.md
index eccd9c0ab6..3bd9144ea9 100644
--- a/examples/sample-app/README.md
+++ b/examples/sample-app/README.md
@@ -5,26 +5,28 @@ This example should demonstrate the capabilities of the lifecycle toolkit as ill

![img.png](assets/big-picture.png)

## PostDeployment Slack Notification
+
This section describes how to **prepare and enable** post-deployment tasks to send notifications to slack using webhooks.

-**Create Slack Webhook**
+### Create Slack Webhook

In the first step, create an incoming slack webhook.
Necessary information is available in the [slack api page](https://api.slack.com/messaging/webhooks).
Once you create the webhook, you will get a URL similar to the example below. 
-`https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX` +`https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX` `T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX` is the secret part of the webhook which we would need in the next step. -**Create slack-secret** +### Create slack-secret -Create a `slack-secret.yaml` definition using the following command. +Create a `slack-secret.yaml` definition using the following command. This will create a kubernetes secret named `slack-secret.yaml` in the [base](./base) directory. -```bash +```shell kubectl create secret generic slack-secret --from-literal=SECURE_DATA='{"slack_hook":,"text":"Deployed PodTatoHead Application"}' -n podtato-kubectl -oyaml --dry-run=client > base/slack-secret.yaml ``` -**Enable post deployment task** + +### Enable post deployment task To enable Slack notification add `post-deployment-notification` in as a postDeploymentTasks in the [app.yaml](base/app.yaml) file as shown below. @@ -35,34 +37,41 @@ To enable Slack notification add `post-deployment-notification` in as a postDepl ``` ## Deploy the Observability Part and Keptn-lifecycle-toolkit -> make install + +```make install``` ## Port-Forward Grafana -> make port-forward-grafana + +```make port-forward-grafana``` If you want to port-forward to a different port, please execute: -> make port-forward-grafana GRAFANA_PORT_FORWARD= +```make port-forward-grafana GRAFANA_PORT_FORWARD=``` ## Deploy Version 1 of the PodTatoHead -> make deploy-version-1 + +```make deploy-version-1``` Now watch the progress on the cluster -> kubectl get keptnworkloadinstances -> kubectl get keptnappversions +```kubectl get keptnworkloadinstances``` +```kubectl get keptnappversions``` -You could also open up a browser and watch the progress in Jaeger. You can find the Context ID in the "TraceId" Field of the KeptnAppVersion +You could also open up a browser and watch the progress in Jaeger. You can find the Context ID in the "TraceId" Field of +the KeptnAppVersion The deployment should fail because of too few cpu resources ## Deploy Version 2 of the PodTatoHead -> make deploy-version-2 + +```make deploy-version-2``` * Watch the progress of the deployments * After some time, you should see that everything is successful ## Deploy Version 3 -> make deploy-version-3 + +```make deploy-version-3``` * This should only change one service, you can see that only this changed in the trace - \ No newline at end of file + + diff --git a/examples/support/argo/README.md b/examples/support/argo/README.md index 15685b6374..3c04a435bd 100644 --- a/examples/support/argo/README.md +++ b/examples/support/argo/README.md @@ -1,21 +1,33 @@ # Deploying an application using the Keptn Lifecycle Controller and ArgoCD -In this example, we will show you how to install our sample application *podtatohead* using the Keptn Lifecycle Controller and [ArgoCD](https://argo-cd.readthedocs.io/en/stable/). +In this example, we will show you how to install our sample application *podtatohead* using the Keptn Lifecycle +Controller and [ArgoCD](https://argo-cd.readthedocs.io/en/stable/). 
+
+## TL;DR

* You can install ArgoCD and Keptn-lifecycle-toolkit using: `make install`
* Install argo CLI according to the instructions [here](https://argo-cd.readthedocs.io/en/stable/cli_installation/)
* Afterward, you can fetch the secret for the ArgoCD CLI using: `make argo-get-password`
* Then you can port-forward the ArgoUI using: `make port-forward-argocd`
* Alternatively, you can access Argo using the CLI, configure it using `make argo-configure-cli`
* Deploy the PodTatoHead Demo Application: `make argo-install-podtatohead`
-* Watch the progress on your ArgoUI: `http://localhost:8080`. Use the `admin` user and the password from `make argo-get-password`.
+* Watch the progress on your ArgoUI: `http://localhost:8080`. Use the `admin` user and the password
+  from `make argo-get-password`.
+
+## Prerequisites

-## Prerequisites:
-This tutorial assumes, that you already installed the Keptn Lifecycle Controller (see https://github.com/keptn/lifecycle-toolkit). The installation instructions can be found [here](https://github.com/keptn/lifecycle-toolkit#deploy-the-latest-release). Furthermore, you have to install ArgoCD, as in the following their [installation instructions](https://argo-cd.readthedocs.io/en/stable/getting_started/).
+This tutorial assumes that you already installed the Keptn Lifecycle Controller
+(see <https://github.com/keptn/lifecycle-toolkit>). The installation instructions can be
+found [here](https://github.com/keptn/lifecycle-toolkit#deploy-the-latest-release). Furthermore, you have to install
+ArgoCD, following their [installation instructions](https://argo-cd.readthedocs.io/en/stable/getting_started/).

### Install ArgoCD
+
-If you don't have an already existing installation of ArgoCD, you can [install](https://raw.githubusercontent.com/argoproj/argo-cd/v2.4.14/manifests/install.yaml) it using the following commands:
+
+If you don't have an already existing installation of ArgoCD, you
+can [install](https://raw.githubusercontent.com/argoproj/argo-cd/v2.4.14/manifests/install.yaml) it using the following
+commands:
+
```shell
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.4.14/manifests/install.yaml
```

With these commands, ArgoCD will be installed in the `argocd` namespace.

After that, you can find the password for ArgoCD using the following command:
+
```shell
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
```

### Port-Forward ArgoCD and access the UI
+
To access the ArgoCD UI, you can port-forward the ArgoCD service using the following command:
+
```shell
kubectl port-forward svc/argocd-server -n argocd 8080:443
```
-Then you can access the UI using http://localhost:8080.
+
+Then you can access the UI using <http://localhost:8080>.

## Installing the Demo Application
+
-To install the demo application, you can use the following command (apply [this manifest](https://raw.githubusercontent.com/keptn/lifecycle-toolkit/main/examples/support/argo/config/app.yaml)):
+
+To install the demo application, you can use the following command
+(apply [this manifest](https://raw.githubusercontent.com/keptn/lifecycle-toolkit/main/examples/support/argo/config/app.yaml)):
+
```shell
kubectl apply -f https://raw.githubusercontent.com/keptn-sandbox/lifecycle-toolkit-examples/main/support/argo/config/app.yaml
```

-You will see that the application will be deployed using ArgoCD. 
You can watch the progress on the ArgoCD UI and should
+see the following:

![img.png](assets/argo-screen.png)

In the meantime, you can watch the progress of the deployment using:

-> `kubectl get pods -n podtato-kubectl`
- * See that the pods are pending until the pre-deployment tasks have passed
- * Pre-Deployment Tasks are started
- * Pods get scheduled
+```kubectl get pods -n podtato-kubectl```
+
+* See that the pods are pending until the pre-deployment tasks have passed
+* Pre-Deployment Tasks are started
+* Pods get scheduled
+
+```kubectl get keptnworkloadinstances -n podtato-kubectl```
+
+* Get the current status of the workloads
+* See in which phase your workload deployments are at the moment
+
+```kubectl get keptnappversions -n podtato-kubectl```
+
+* Get the current status of the application

-> `kubectl get keptnworkloadinstances -n podtato-kubectl`
- * Get the current status of the workloads
- * See in which phase your workload deployments are at the moment
-
-> `kubectl get keptnappversions -n podtato-kubectl`
- * Get the current status of the application
- * See in which phase your application deployment is at the moment
+* See in which phase your application deployment is at the moment

-After some time all resources should be in a succeeded state. In the Argo-UI you will see that the application is in sync.
+After some time all resources should be in a succeeded state. In the Argo-UI you will see that the application is in
+sync.
+
diff --git a/examples/support/observability/README.md b/examples/support/observability/README.md
index b149046d58..b6f3e88824 100644
--- a/examples/support/observability/README.md
+++ b/examples/support/observability/README.md
@@ -1,28 +1,37 @@
# Sending Traces and Metrics to the OpenTelemetry Collector

-In this example, we will show you an example configuration for enabling the operator to send OpenTelemetry traces and metrics to the [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector).
-The Collector will then be used to forward the gathered data to [Jaeger](https://www.jaegertracing.io) and [Prometheus](https://prometheus.io).
+In this example, we will show you an example configuration for enabling the operator to send OpenTelemetry traces and
+metrics to the [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector).
+The Collector will then be used to forward the gathered data to [Jaeger](https://www.jaegertracing.io)
+and [Prometheus](https://prometheus.io).

The application deployed uses an example of pre-Deployment Evaluation based on prometheus metrics.

-# TL;DR
+## TL;DR
+
* You can install the whole demo including Keptn-lifecycle-toolkit using: `make install`
* Deploy the PodTatoHead Demo Application: `make deploy-podtatohead`
* Afterward, see it in action as defined here: [OpenTelemetry in Action](#seeing-the-opentelemetry-collector-in-action)

-## Prerequisites:
-This tutorial assumes, that you already installed the Keptn Lifecycle Controller (see https://github.com/keptn/lifecycle-toolkit). The installation instructions can be found [here](https://github.com/keptn/lifecycle-toolkit#deploy-the-latest-release).
-As well, you have both Jaeger and the Prometheus Operator installed in your Cluster. 
-
Also, please ensure that the Prometheus Operator has the required permissions to watch resources of the `keptn-lifecycle-toolkit-system` namespace (see https://prometheus-operator.dev/docs/kube/monitoring-other-namespaces/ as a reference).
+## Prerequisites
+
+This tutorial assumes that you already installed the Keptn Lifecycle Controller
+(see <https://github.com/keptn/lifecycle-toolkit>). The installation instructions can be
+found [here](https://github.com/keptn/lifecycle-toolkit#deploy-the-latest-release).
+You also need both Jaeger and the Prometheus Operator installed in your Cluster.
+Also, please ensure that the Prometheus Operator has the required permissions to watch resources of
+the `keptn-lifecycle-toolkit-system` namespace
+(see <https://prometheus-operator.dev/docs/kube/monitoring-other-namespaces/> as a reference).

For setting up both Jaeger and Prometheus, please refer to their docs:
-- [Jaeger Setup](https://github.com/jaegertracing/jaeger-operator)
-- [Prometheus Operator Setup](https://github.com/prometheus-operator/kube-prometheus/blob/main/docs/customizing.md)
+* [Jaeger Setup](https://github.com/jaegertracing/jaeger-operator)
+* [Prometheus Operator Setup](https://github.com/prometheus-operator/kube-prometheus/blob/main/docs/customizing.md)

-If you don't have an already existing installation of Jaeger [manifest](https://github.com/jaegertracing/jaeger-operator/releases/download/v1.38.0/jaeger-operator.yaml) or Prometheus, you can run these commands to
+If you don't have an existing installation of
+Jaeger ([manifest](https://github.com/jaegertracing/jaeger-operator/releases/download/v1.38.0/jaeger-operator.yaml)) or
+Prometheus, you can run these commands to
have a basic installation up and running.

```shell
-
# Install Jaeger into the observability namespace and the Jaeger resource into the lifecycle-toolkit namespace
kubectl create namespace observability
kubectl apply -f https://github.com/jaegertracing/jaeger-operator/releases/download/v1.38.0/jaeger-operator.yaml -n observability
@@ -34,23 +43,25 @@ kubectl apply --server-side -f config/prometheus/setup
kubectl apply -f config/prometheus/
```

-With these commands, the Jaeger and Prometheus Operator will be installed in the `observability` and `monitoring` namespaces, respectively.
+With these commands, the Jaeger and Prometheus Operator will be installed in the `observability` and `monitoring`
+namespaces, respectively.

## Configuring the OpenTelemetry Collector and Prometheus ServiceMonitor
+
-Once Jaeger and Prometheus are installed, you can deploy and configure the OpenTelemetry collector using the manifests in the `config` directory:
+Once Jaeger and Prometheus are installed, you can deploy and configure the OpenTelemetry collector using the manifests
+in the `config` directory:

-```sh
+```shell
kubectl apply -f config/otel-collector.yaml -n keptn-lifecycle-toolkit-system
```

-Also, please ensure that the `OTEL_COLLECTOR_URL` env vars of both the `klc-controller-manager`,
-as well as the `keptn-scheduler` deployments are set appropriately.
+Also, please ensure that the `OTEL_COLLECTOR_URL` env vars of both the `klc-controller-manager`
+and the `keptn-scheduler` deployments are set appropriately.
By default, they are set to `otel-collector:4317`, which should be the correct value for this tutorial. 
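
One way to verify (or adjust) that setting is with `kubectl`; a sketch assuming the default deployment names used in
this tutorial:

```shell
# Show the collector endpoint currently configured on the operator
kubectl -n keptn-lifecycle-toolkit-system get deployment klc-controller-manager \
  -o jsonpath='{.spec.template.spec.containers[*].env[?(@.name=="OTEL_COLLECTOR_URL")].value}'

# Point the operator and the scheduler at a different collector if needed
kubectl -n keptn-lifecycle-toolkit-system set env deployment/klc-controller-manager OTEL_COLLECTOR_URL=otel-collector:4317
kubectl -n keptn-lifecycle-toolkit-system set env deployment/keptn-scheduler OTEL_COLLECTOR_URL=otel-collector:4317
```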
Eventually, there should be a pod for the `otel-collector` deployment up and running: -```sh +```shell $ kubectl get pods -lapp=opentelemetry -n keptn-lifecycle-toolkit-system NAME READY STATUS RESTARTS AGE @@ -73,47 +84,60 @@ kubectl rollout restart deployment -n keptn-lifecycle-toolkit-system keptn-sched ## Seeing the OpenTelemetry Collector in action -After everything has been set up, use the lifecycle operator to deploy a workload (e.g. using the `single-service` or `podtato-head` example in the `examples` folder). -To showcase pre-Evaluation checks we created a new version of podtato-head app in assets/podtetohead-deployment-evaluation. -You can run ``make deploy-podtatohead`` to check pre-Evaluations of prometheus metrics both at app and workload instance level. -Once an example has been deployed, you can view the generated traces in Jaeger. To do so, please create a port-forward for the `jaeger-query` service: +After everything has been set up, use the lifecycle operator to deploy a workload (e.g. using the `single-service` +or `podtato-head` example in the `examples` folder). +To showcase pre-Evaluation checks we created a new version of podtato-head app in +assets/podtetohead-deployment-evaluation. +You can run ``make deploy-podtatohead`` to check pre-Evaluations of prometheus metrics both at app and workload instance +level. +Once an example has been deployed, you can view the generated traces in Jaeger. To do so, please create a port-forward +for the `jaeger-query` service: -```sh +```shell kubectl port-forward -n keptn-lifecycle-toolkit-system svc/jaeger-query 16686 ``` -Afterwards, you can view the Jaeger UI in the browser at [localhost:16686](http://localhost:16686). There you should see the traces generated by the lifecycle controller, which should look like this: +Afterwards, you can view the Jaeger UI in the browser at [localhost:16686](http://localhost:16686). There you should see +the traces generated by the lifecycle controller, which should look like this: -**Traces overview** +### Traces overview -![](./assets/traces_overview.png) +![Screenshot of the traces overview in Jaeger](./assets/traces_overview.png) -**Trace details** +### Trace details -![](./assets/trace_detail.png) +![Screenshot of a trace in Jaeger](./assets/trace_detail.png) -In Prometheus, do a port forward to the prometheus service inside your cluster (the exact name and namespace of the prometheus service will depend on your Prometheus setup - we are using the defaults that come with the example of the Prometheus Operator tutorial). +In Prometheus, do a port forward to the prometheus service inside your cluster (the exact name and namespace of the +prometheus service will depend on your Prometheus setup - we are using the defaults that come with the example of the +Prometheus Operator tutorial). -```sh +```shell kubectl -n monitoring port-forward svc/prometheus-k8s 9090 ``` -Afterwards, you can view the Prometheus UI in the browser at [localhost:9090](http://localhost:9090). There, in the [Targets](http://localhost:9090/targets?search=) section, you should see an entry for the otel-collector: +Afterwards, you can view the Prometheus UI in the browser at [localhost:9090](http://localhost:9090). 
There, in
+the [Targets](http://localhost:9090/targets?search=) section, you should see an entry for the otel-collector:

-![](./assets/prometheus_targets.png)
+![Screenshot of a target in Prometheus](./assets/prometheus_targets.png)

-Also, in the [Graph](http://localhost:9090/graph?g0.expr=&g0.tab=1&g0.stacked=0&g0.show_exemplars=0&g0.range_input=1h) section, you can retrieve metrics reported by the Keptn Lifecycle Controller (all of the available metrics start with the `keptn` prefix):
+Also, in the [Graph](http://localhost:9090/graph?g0.expr=&g0.tab=1&g0.stacked=0&g0.show_exemplars=0&g0.range_input=1h)
+section, you can retrieve metrics reported by the Keptn Lifecycle Controller (all of the available metrics start with
+the `keptn` prefix):

-![](./assets/metrics.png)
+![Screenshot of the auto-complete menu in a Prometheus query](./assets/metrics.png)

-To view the exported metrics in Grafana, we have provided dashboards which have been automatically installed with this example. To display them, please first create a port-forward for the `grafana` service in the `monitoring` namespace:
+To view the exported metrics in Grafana, we have provided dashboards which have been automatically installed with this
+example. To display them, please first create a port-forward for the `grafana` service in the `monitoring` namespace:

-```sh
+```shell
make port-forward-grafana
```

-Now, you should be able to see it in the [Grafana UI](http://localhost:3000/d/wlo2MpIVk/keptn-lifecycle-toolkit-metrics) under `Dashboards > General`.
+Now, you should be able to see them in the [Grafana UI](http://localhost:3000/d/wlo2MpIVk/keptn-lifecycle-toolkit-metrics)
+under `Dashboards > General`.

-![](./assets/grafana_dashboard.png)
+![Screenshot of a dashboard in Grafana](./assets/grafana_dashboard.png)
+
diff --git a/examples/support/observability/config/prometheus/README.md b/examples/support/observability/config/prometheus/README.md
index ad49f1f540..954e830a7d 100644
--- a/examples/support/observability/config/prometheus/README.md
+++ b/examples/support/observability/config/prometheus/README.md
@@ -1,8 +1,7 @@
-## Autogenerated Files - Do not change
+# Autogenerated Files - Do not change

-# Grafana Dashboards - ConfigMaps
+## Grafana Dashboards - ConfigMaps

-This files can be used to autoprovision Grafana dashboards in Kubernetes.
-
-More information: https://grafana.com/docs/grafana/latest/administration/provisioning/#dashboards
+These files can be used to autoprovision Grafana dashboards in Kubernetes.
+More information: <https://grafana.com/docs/grafana/latest/administration/provisioning/#dashboards>
diff --git a/functions-runtime/README.md b/functions-runtime/README.md
index d3d5eadf6a..55f0194419 100644
--- a/functions-runtime/README.md
+++ b/functions-runtime/README.md
@@ -1,30 +1,51 @@
# Keptn Lifecycle Controller - Function Runtime

## Build
-```
+
+```shell
docker build -t keptnsandbox/klc-runtime:${VERSION} . 
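# assumes VERSION was exported beforehand, e.g. VERSION=v0.5.0 (a hypothetical tag; use whatever you plan to publish)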
``` ## Usage ### Docker with function on webserver (function in this repo) -``` -docker run -e SCRIPT=https://raw.githubusercontent.com/keptn/lifecycle-toolkit/main/functions-runtime/samples/ts/hello-world.ts -it keptnsandbox/klc-runtime:${VERSION} + +```shell +docker run \ + -e SCRIPT=https://raw.githubusercontent.com/keptn/lifecycle-toolkit/main/functions-runtime/samples/ts/hello-world.ts \ + -it \ + keptnsandbox/klc-runtime:${VERSION} ``` ### Docker with function and external data - scheduler -``` -docker run -e SCRIPT=https://raw.githubusercontent.com/keptn/lifecycle-toolkit/main/functions-runtime/samples/ts/scheduler.ts -e DATA='{ "targetDate":"2025-04-16T06:55:31.820Z" }' -it keptnsandbox/klc-runtime:${VERSION} + +```shell +docker run \ + -e SCRIPT=https://raw.githubusercontent.com/keptn/lifecycle-toolkit/main/functions-runtime/samples/ts/scheduler.ts \ + -e DATA='{ "targetDate":"2025-04-16T06:55:31.820Z" }' \ + -it \ + keptnsandbox/klc-runtime:${VERSION} ``` ### Docker with function and external secure data - slack -``` -docker run -e SCRIPT=https://raw.githubusercontent.com/keptn/lifecycle-toolkit/main/functions-runtime/samples/ts/slack.ts -e SECURE_DATA='{ "slack_hook":"hook/parts","text":"this is my test message" }' -it keptnsandbox/klc-runtime:${VERSION} + +```shell +docker run \ + -e SCRIPT=https://raw.githubusercontent.com/keptn/lifecycle-toolkit/main/functions-runtime/samples/ts/slack.ts \ + -e SECURE_DATA='{ "slack_hook":"hook/parts","text":"this is my test message" }' \ + -it \ + keptnsandbox/klc-runtime:${VERSION} ``` ### Docker with function and external data - prometheus -``` -docker run -e SCRIPT=https://raw.githubusercontent.com/keptn/lifecycle-toolkit/main/functions-runtime/samples/ts/prometheus.ts -e DATA='{ "url":"http://localhost:9090", "metrics": "up{service=\"kubernetes\"}", "expected_value": "1" }' -it ghcr.keptn.sh/keptn/functions-runtime:${VERSION} + +```shell +docker run \ + -e SCRIPT=https://raw.githubusercontent.com/keptn/lifecycle-toolkit/main/functions-runtime/samples/ts/prometheus.ts \ + -e DATA='{ "url":"http://localhost:9090", "metrics": "up{service=\"kubernetes\"}", "expected_value": "1" }' \ + -it \ + ghcr.keptn.sh/keptn/functions-runtime:${VERSION} ``` - \ No newline at end of file + + diff --git a/helm/chart/README.md b/helm/chart/README.md index 35608df70d..fe5c23dabf 100644 --- a/helm/chart/README.md +++ b/helm/chart/README.md @@ -1,6 +1,9 @@ -## Keptn Lifecycle Toolkit -KLT introduces a more cloud-native approach for pre- and post-deployment, as well as the concept of application health checks +# Keptn Lifecycle Toolkit +KLT introduces a more cloud-native approach for pre- and post-deployment, as well as the concept of application health +checks + + ## Parameters ### OpenTelemetry @@ -10,3 +13,5 @@ KLT introduces a more cloud-native approach for pre- and post-deployment, as wel | `otelCollector.url` | Sets the URL for the open telemetry collector | `otel-collector:4317` | | `deployment.imagePullPolicy` | Sets the image pull policy for kubernetes deployment | `Always` | + + diff --git a/klt-cert-manager/README.md b/klt-cert-manager/README.md index ca432cfa2c..7989b73675 100644 --- a/klt-cert-manager/README.md +++ b/klt-cert-manager/README.md @@ -1,35 +1,44 @@ # klt-cert-manager -The Keptn certificate manager ensures that the webhooks in the Lifecycle Toolkit operator can obtain a valid certificate to access the Kubernetes API server. 
+
+The Keptn certificate manager ensures that the webhooks in the Lifecycle Toolkit operator can obtain a valid certificate
+to access the Kubernetes API server.

## Description
+
-This `klt-cert-manager` operator should only be installed when paired with the Lifecycle Toolkit operator versions 0.6.0 or above.
-The TLS certificate is mounted as a volume in the LT operator pod and is renewed every 12 hours or every time the LT operator deployment changes.
+This `klt-cert-manager` operator should only be installed when paired with the Lifecycle Toolkit operator versions 0.6.0
+or above.
+The TLS certificate is mounted as a volume in the LT operator pod and is renewed every 12 hours or every time the LT
+operator deployment changes.

## Getting Started
+
-You’ll need a Kubernetes cluster to run against. You can use [KIND](https://sigs.k8s.io/kind) to get a local cluster for testing, or run against a remote cluster.
-**Note:** Your controller will automatically use the current context in your kubeconfig file (i.e. whatever cluster `kubectl cluster-info` shows).
+You’ll need a Kubernetes cluster to run against. You can use [KIND](https://sigs.k8s.io/kind) to get a local cluster for
+testing, or run against a remote cluster.
+**Note:** Your controller will automatically use the current context in your kubeconfig file (i.e. whatever
+cluster `kubectl cluster-info` shows).

### Running on the cluster
+
1. Install Instances of Custom Resources:

```sh
kubectl apply -f config/samples/
```

-2. Build and push your image to the location specified by `IMG`:
-
+1. Build and push your image to the location specified by `IMG`:
+
```sh
make docker-build docker-push IMG=<some-registry>/cert-manager:tag
```
-
-3. Deploy the controller to the cluster with the image specified by `IMG`:
+
+1. Deploy the controller to the cluster with the image specified by `IMG`:

```sh
make deploy IMG=<some-registry>/cert-manager:tag
```

### Uninstall CRDs
+
To delete the CRDs from the cluster:

```sh
make uninstall
```

### Undeploy controller
+
Undeploy the controller from the cluster:

```sh
make undeploy
```

## Contributing

### How it works
-This project aims to follow the Kubernetes [Operator pattern](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/)
-It uses [Controllers](https://kubernetes.io/docs/concepts/architecture/controller/)
-which provides a reconcile function responsible for synchronizing resources untile the desired state is reached on the cluster
+This project aims to follow the
+Kubernetes [Operator pattern](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/)
+
+It uses [Controllers](https://kubernetes.io/docs/concepts/architecture/controller/),
+which provide a reconcile function responsible for synchronizing resources until the desired state is reached on the
+cluster.

### Test It Out
+
1. Install the CRDs into the cluster:

```sh
make install
```

-2. Run your controller (this will run in the foreground, so switch to a new terminal if you want to leave it running):
+1. 
Run your controller (this will run in the foreground, so switch to a new terminal if you want to leave it running): ```sh make run @@ -67,6 +81,7 @@ make run **NOTE:** You can also run this in one step by running: `make install run` ### Modifying the API definitions + If you are editing the API definitions, generate the manifests such as CRs or CRDs using: ```sh @@ -76,20 +91,3 @@ make manifests **NOTE:** Run `make --help` for more information on all potential `make` targets More information can be found via the [Kubebuilder Documentation](https://book.kubebuilder.io/introduction.html) - -## License - -Copyright 2022. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. - diff --git a/operator/README.md b/operator/README.md index 0ed5f49513..c59c55595c 100644 --- a/operator/README.md +++ b/operator/README.md @@ -1,33 +1,40 @@ # operator + // TODO(user): Add simple overview of use/purpose ## Description + // TODO(user): An in-depth paragraph about your project and overview of use ## Getting Started -You’ll need a Kubernetes cluster to run against. You can use [KIND](https://sigs.k8s.io/kind) to get a local cluster for testing, or run against a remote cluster. -**Note:** Your controller will automatically use the current context in your kubeconfig file (i.e. whatever cluster `kubectl cluster-info` shows). + +You’ll need a Kubernetes cluster to run against. You can use [KIND](https://sigs.k8s.io/kind) to get a local cluster for +testing, or run against a remote cluster. +**Note:** Your controller will automatically use the current context in your kubeconfig file (i.e. whatever +cluster `kubectl cluster-info` shows). ### Running on the cluster + 1. Install Instances of Custom Resources: ```sh kubectl apply -f config/samples/ ``` -2. Build and push your image to the location specified by `IMG`: - +1. Build and push your image to the location specified by `IMG`: + ```sh make docker-build docker-push IMG=/operator:tag ``` - -3. Deploy the controller to the cluster with the image specified by `IMG`: + +1. 
Deploy the controller to the cluster with the image specified by `IMG`:

```sh
make deploy IMG=<some-registry>/operator:tag
```

### Uninstall CRDs
+
To delete the CRDs from the cluster:

```sh
make uninstall
```

### Undeploy controller
+
Undeploy the controller from the cluster:

```sh
make undeploy
```

## Contributing
+
// TODO(user): Add detailed information on how you would like others to contribute to this project

### How it works
-This project aims to follow the Kubernetes [Operator pattern](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/)
-It uses [Controllers](https://kubernetes.io/docs/concepts/architecture/controller/)
-which provides a reconcile function responsible for synchronizing resources untile the desired state is reached on the cluster
+This project aims to follow the
+Kubernetes [Operator pattern](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/)
+
+It uses [Controllers](https://kubernetes.io/docs/concepts/architecture/controller/),
+which provide a reconcile function responsible for synchronizing resources until the desired state is reached on the
+cluster.

### Test It Out
+
1. Install the CRDs into the cluster:

```sh
make install
```

-2. Run your controller (this will run in the foreground, so switch to a new terminal if you want to leave it running):
+1. Run your controller (this will run in the foreground, so switch to a new terminal if you want to leave it running):

```sh
make run
```

**NOTE:** You can also run this in one step by running: `make install run`

### Modifying the API definitions
+
If you are editing the API definitions, generate the manifests such as CRs or CRDs using:

```sh
make manifests
```

**NOTE:** Run `make --help` for more information on all potential `make` targets

More information can be found via the [Kubebuilder Documentation](https://book.kubebuilder.io/introduction.html)

-## License
-
-Copyright 2022.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-
- \ No newline at end of file
+
+
diff --git a/operator/test/component/DEVELOPER.md b/operator/test/component/DEVELOPER.md
index e51935f2f5..0a1ec158af 100644
--- a/operator/test/component/DEVELOPER.md
+++ b/operator/test/component/DEVELOPER.md
@@ -1,19 +1,22 @@
# Component tests
+
This test suite can run tests verifying multiple Controllers

-### Running on envtest cluster
+## Running on envtest cluster

cd to operator folder, run ```make test```

Make test is the one-stop shop for downloading the binaries, setting up the test environment, and running the tests.
If you would like to run the generated bin for apiserver etcd etc. from your IDE, copy them to the default path
"/usr/local/kubebuilder/bin".
This way the default test setup will pick them up without specifying any ENVVAR. 
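
A sketch of that copy step, assuming `make test` downloaded the envtest binaries via `setup-envtest` (the source path
below is an assumption — check where the assets landed on your machine):

```shell
# Copy the envtest control-plane binaries (etcd, kube-apiserver, kubectl) to the default lookup path
sudo mkdir -p /usr/local/kubebuilder/bin
sudo cp bin/k8s/*/* /usr/local/kubebuilder/bin/
```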
-For more info on kubebuilder envtest or to set up a real cluster behind the test have a look [here](https://book.kubebuilder.io/reference/envtest.html) +For more info on kubebuilder envtest or to set up a real cluster behind the test have a +look [here](https://book.kubebuilder.io/reference/envtest.html) After run a ```report.component-operator``` file will be generated with the results of each test: -``` +```text suite_test.go | passed [Feature:Performance] Load KeptnAppController should create the app version CR | passed KeptnAppController should update the status of the CR | passed @@ -24,67 +27,87 @@ KeptnAppController should update the spans | failed In each test you can add one or more new controllers to the suite_test similarly as follows: - controllers := []keptncontroller.Controller{&keptnapp.KeptnAppReconciler{ - Client: k8sManager.GetClient(), - Scheme: k8sManager.GetScheme(), - Recorder: k8sManager.GetEventRecorderFor("test-app-controller"), - Log: GinkgoLogr, - Tracer: tracer.Tracer("test-app-tracer"), - }} - setupManager(controllers) - +```go +controllers := []keptncontroller.Controller{&keptnapp.KeptnAppReconciler{ + +Client: k8sManager.GetClient(), +Scheme: k8sManager.GetScheme(), +Recorder: k8sManager.GetEventRecorderFor("test-app-controller"), +Log: GinkgoLogr, +Tracer: tracer.Tracer("test-app-tracer"), +}} +setupManager(controllers) +``` + After that the k8s API from kubebuilder will handle its CRD Each Ginkgo test should be structured following the [spec bestpractices](https://onsi.github.io/ginkgo/#writing-specs) As a minimum example, a test could be: -``` + + + +```go +package component + var _ = Describe("KeptnAppController", func() { - var ( //setup needed var - name string - ) - BeforeEach(func() { // init them - name = "test-app" - }) - AfterEach(ResetSpanRecords) //you must clean up spans each time - - Describe("Creation of AppVersion from a new App", func() { - var ( - instance *klcv1alpha2.KeptnApp // declare CRD - ) - Context("with one App", func() { - BeforeEach(func() { - //create it using the client eg. Expect(k8sClient.Create(ctx, instance)).Should(Succeed()) - instance = createInstanceInCluster(name, namespace, version) - }) - AfterEach(func() { - // Remember to clean up the cluster after each test - deleteAppInCluster(instance) - }) - It("should update the status of the CR", func() { - assertResourceUpdated(instance) - }) - }) - }) + var ( //setup needed var + name string + ) + BeforeEach(func() { // init them + name = "test-app" + }) + AfterEach(ResetSpanRecords) //you must clean up spans each time + + Describe("Creation of AppVersion from a new App", func() { + var ( + instance *klcv1alpha2.KeptnApp // declare CRD + ) + Context("with one App", func() { + BeforeEach(func() { + //create it using the client eg. Expect(k8sClient.Create(ctx, instance)).Should(Succeed()) + instance = createInstanceInCluster(name, namespace, version) + }) + AfterEach(func() { + // Remember to clean up the cluster after each test + deleteAppInCluster(instance) + }) + It("should update the status of the CR", func() { + assertResourceUpdated(instance) + }) + }) + }) }) ``` -## Load Tests + -You can append ```[Feature:Performance]``` to any spec you would like to execute during performance test with ```make performance-test``` the file -"load_test.go" contains examples of such tests, including a simple reporter. The report "MetricForLoadTestSuite" is generated for every run of the load test. 
+## Load Tests
+
+You can append ```[Feature:Performance]``` to any spec you would like to execute during a performance test
+with ```make performance-test```. The file
+```load_test.go``` contains examples of such tests, including a simple reporter. The report "MetricForLoadTestSuite" is
+generated for every run of the load test.

## Contributing Tips

-1. Keep in mind to clean up after each test since the environment is shared. E.g. if you plan assertions on events or spans, make sure your specs are either ordered or assigned to their own controller
-2. Namespaces do not get cleaned up by EnvTest, so do not make assertion based on the idea that the namespace has been deleted, and make sure to use `ignoreAlreadyExists(err error)` when creating a new one
-3. EnvTest is a lightweight control plane only meant for testing purposes. This means it does not contain inbuilt Kubernetes controllers like deployment controllers, ReplicaSet controllers, etc. You cannot assert/verify for pods being created or not for created deployment.
-4. You should generally try to use Gomega’s Eventually to make asynchronous assertions, especially in the case of Get and Update calls to API Server.
+1. Keep in mind to clean up after each test since the environment is shared. E.g. if you plan assertions on events or
+   spans, make sure your specs are either ordered or assigned to their own controller.
+2. Namespaces do not get cleaned up by EnvTest, so do not make assertions based on the idea that the namespace has been
+   deleted, and make sure to use `ignoreAlreadyExists(err error)` when creating a new one.
+3. EnvTest is a lightweight control plane only meant for testing purposes. This means it does not contain inbuilt
+   Kubernetes controllers like deployment controllers, ReplicaSet controllers, etc. You cannot assert or verify whether
+   pods are created for a given Deployment.
+4. You should generally try to use Gomega’s Eventually to make asynchronous assertions, especially in the case of Get
+   and Update calls to the API server.
5. Use `ginkgo --until-it-fails` to identify flaky tests.
-6. Avoid general utility packages. Packages called "util" are suspect. Instead, derive a name that describes your desired function. For example, the utility functions dealing with waiting for operations are in the wait package and include functionality like Poll. The full name is wait.Poll.
-7. All filenames should be lowercase.
+6. Avoid general utility packages. Packages called "util" are suspect. Instead, derive a name that describes your
+   desired function. For example, the utility functions dealing with waiting for operations are in the wait package and
+   include functionality like Poll. The full name is wait.Poll.
+7. All filenames should be lowercase.
8. Go source files and directories use underscores, not dashes.
-9. Package directories should generally avoid using separators as much as possible. When package names are multiple words, they usually should be in nested subdirectories.
-10. Document directories and filenames should use dashes rather than underscores.
-11. Examples should also illustrate best practices for configuration and using the [system](https://kubernetes.io/docs/concepts/configuration/overview/).
+9. Package directories should generally avoid using separators as much as possible. When package names are multiple
+   words, they usually should be in nested subdirectories.
+10. Document directories and filenames should use dashes rather than underscores.
+11. Examples should also illustrate best practices for configuration and using
+    the [system](https://kubernetes.io/docs/concepts/configuration/overview/).
diff --git a/operator/test/e2e/DEVELOPER.md b/operator/test/e2e/DEVELOPER.md
index 0bf8c8328e..7a7d160d32 100644
--- a/operator/test/e2e/DEVELOPER.md
+++ b/operator/test/e2e/DEVELOPER.md
@@ -1,9 +1,10 @@
# Integration/E2E tests
+
This test suite can run tests verifying the operator.

-### Running on kind cluster
+## Running on kind cluster

-```
+```shell
kind create cluster
cd lifecycle-toolkit
make build-deploy-operator RELEASE_REGISTRY=yourregistry
@@ -14,11 +15,12 @@ wait for everything to be up and running, then cd to operator folder and run

```make e2e-test```

-If you would like more info on kubebuilder envtest or to set up a real cluster behind the test have a look [here](https://book.kubebuilder.io/reference/envtest.html)
+If you would like more info on kubebuilder envtest, or to set up a real cluster behind the test, have a
+look [here](https://book.kubebuilder.io/reference/envtest.html)

After the run, a ```report.E2E-operator``` file will be generated with the results of each test:

-```
+```text
2022-11-04 12:46:05.2373262 +0000 UTC
If annotated for keptn, a new Pod should stay pending | passed
If annotated for keptn, a new Pod should be assigned to keptn scheduler | passed
@@ -26,21 +28,27 @@ If annotated for keptn, a new Pod should be assigned to keptn scheduler | passed

## Contributing

+## Load Tests

-
-## Load Tests
-
-You can append ```[Feature:Performance]``` to any spec you would like to execute during performance test with ```make performance-test``` the file
-"load_test.go" contains examples of such tests, including a simple reporter. The report "MetricForLoadTestSuite" is generated for every run of the load test.
+You can append ```[Feature:Performance]``` to any spec you would like to execute during a performance test
+with ```make performance-test```. The file
+```load_test.go``` contains examples of such tests, including a simple reporter. The report "MetricForLoadTestSuite" is
+generated for every run of the load test.

## Contributing Tips

-1. Keep in mind to clean up after each test since the environment is shared. E.g. if you plan assertions on events or spans, make sure your specs are either ordered or assigned to their own controller
-2. You should generally try to use Gomega’s Eventually to make asynchronous assertions, especially in the case of Get and Update calls to API Server.
+1. Keep in mind to clean up after each test since the environment is shared. E.g. if you plan assertions on events or
+   spans, make sure your specs are either ordered or assigned to their own controller.
+2. You should generally try to use Gomega’s Eventually to make asynchronous assertions, especially in the case of Get
+   and Update calls to the API server.
3. Use `ginkgo --until-it-fails` to identify flaky tests.
-4. Avoid general utility packages. Packages called "util" are suspect. Instead, derive a name that describes your desired function. For example, the utility functions dealing with waiting for operations are in the wait package and include functionality like Poll. The full name is wait.Poll.
-5. All filenames should be lowercase.
+4. Avoid general utility packages. Packages called "util" are suspect. Instead, derive a name that describes your
+   desired function. For example, the utility functions dealing with waiting for operations are in the wait package and
+   include functionality like Poll. The full name is wait.Poll.
+5. All filenames should be lowercase.
6. Go source files and directories use underscores, not dashes.
-7. Package directories should generally avoid using separators as much as possible. When package names are multiple words, they usually should be in nested subdirectories.
-8. Document directories and filenames should use dashes rather than underscores.
-9. Examples should also illustrate best practices for configuration and using the [system](https://kubernetes.io/docs/concepts/configuration/overview/).
+7. Package directories should generally avoid using separators as much as possible. When package names are multiple
+   words, they usually should be in nested subdirectories.
+8. Document directories and filenames should use dashes rather than underscores.
+9. Examples should also illustrate best practices for configuration and using
+   the [system](https://kubernetes.io/docs/concepts/configuration/overview/).
diff --git a/scheduler/README.md b/scheduler/README.md
index d5a43526e4..5b4c2254b6 100644
--- a/scheduler/README.md
+++ b/scheduler/README.md
@@ -1,35 +1,42 @@
# scheduler
+
// TODO(user): Add simple overview of use/purpose

## Description
+
// TODO(user): An in-depth paragraph about your project and overview of use

## Getting Started
-You’ll need a Kubernetes cluster v0.24.0 or higher to run against. You can use [KIND](https://sigs.k8s.io/kind) to get a local cluster for testing, or run against a remote cluster.
-**Note:** Your controller will automatically use the current context in your kubeconfig file (i.e. whatever cluster `kubectl cluster-info` shows).
+
+You’ll need a Kubernetes cluster v1.24.0 or higher to run against. You can use [KIND](https://sigs.k8s.io/kind) to get a
+local cluster for testing, or run against a remote cluster.
+**Note:** Your controller will automatically use the current context in your kubeconfig file (i.e. whatever
+cluster `kubectl cluster-info` shows).

### Running on the cluster
+
1. Build and push your image to the location specified by `RELEASE_REGISTRY`:
-
+
```sh
make build-and-push-local RELEASE_REGISTRY=
```

**NOTE:** Run `make --help` for more information on all potential `make` targets

-2. Generate your release manifest
+1. Generate your release manifest

```sh
make release-manifests RELEASE_REGISTRY=
```

-3. Deploy the scheduler using kubectl:
+1. Deploy the scheduler using kubectl:

```sh
kubectl apply -f ./config/rendered/release.yaml # install the scheduler
```

-### Uninstall
+### Uninstall
+
To delete the scheduler:

```sh
@@ -37,26 +44,11 @@ kubectl delete -f ./config/rendered/release.yaml # uninstall the scheduler
```

## Contributing
+
// TODO(user): Add detailed information on how you would like others to contribute to this project

### How it works
-This project uses the Kubernetes [Scheduler Framework](https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/)
-and is based on the [Scheduler Plugins Repository](https://github.com/kubernetes-sigs/scheduler-plugins/tree/master).
-
-## License
-
-Copyright 2022.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-    http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-
-
\ No newline at end of file
+This project uses the
+Kubernetes [Scheduler Framework](https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/)
+and is based on the [Scheduler Plugins Repository](https://github.com/kubernetes-sigs/scheduler-plugins/tree/master).
diff --git a/scheduler/test/e2e/DEVELOPER.md b/scheduler/test/e2e/DEVELOPER.md
index 957f9b6266..c79e66c262 100644
--- a/scheduler/test/e2e/DEVELOPER.md
+++ b/scheduler/test/e2e/DEVELOPER.md
@@ -1,35 +1,41 @@
# E2E tests
-This test suite can run tests to verify the scheduler. The tests rely on a real cluster with an already installed keptn-scheduler
-### Running on kind cluster
+
+This test suite can run tests to verify the scheduler. The tests rely on a real cluster with an already installed
+keptn-scheduler.

-```
+## Running on kind cluster
+
+```shell
kind create cluster
cd lifecycle-toolkit
make build-deploy-scheduler RELEASE_REGISTRY=yourregistry
```

-wait for everything to be up and running, then cd to scheduler folder and run
+Wait for everything to be up and running, then cd to the scheduler folder and run
```make e2e-test```

-For more info on kubebuilder envtest or to set up a real cluster behind the test have a look [here](https://book.kubebuilder.io/reference/envtest.html)
+For more info on kubebuilder envtest, or to set up a real cluster behind the test, have a
+look [here](https://book.kubebuilder.io/reference/envtest.html)

After the run, a ```report.E2E-scheduler``` file will be generated with the results of each test.

-
## Contributing

-
-
## Contributing Tips

-1. Keep in mind to clean up after each test since the environment is shared. E.g. if you plan assertions on events or spans, make sure your specs are either ordered or assigned to their own controller
-2. You should generally try to use Gomega’s Eventually to make asynchronous assertions, especially in the case of Get and Update calls to API Server.
+1. Keep in mind to clean up after each test since the environment is shared. E.g. if you plan assertions on events or
+   spans, make sure your specs are either ordered or assigned to their own controller.
+2. You should generally try to use Gomega’s Eventually to make asynchronous assertions, especially in the case of Get
+   and Update calls to the API server (see the sketch after this list).
3. Use `ginkgo --until-it-fails` to identify flaky tests.
-4. Avoid general utility packages. Packages called "util" are suspect. Instead, derive a name that describes your desired function. For example, the utility functions dealing with waiting for operations are in the wait package and include functionality like Poll. The full name is wait.Poll.
-5. All filenames should be lowercase.
+4. Avoid general utility packages. Packages called "util" are suspect. Instead, derive a name that describes your
+   desired function. For example, the utility functions dealing with waiting for operations are in the wait package and
+   include functionality like Poll. The full name is wait.Poll.
+5. All filenames should be lowercase.
6. Go source files and directories use underscores, not dashes.
-7. Package directories should generally avoid using separators as much as possible. When package names are multiple words, they usually should be in nested subdirectories.
-8. Document directories and filenames should use dashes rather than underscores.
-9. Examples should also illustrate best practices for configuration and using the [system](https://kubernetes.io/docs/concepts/configuration/overview/).
+7. Package directories should generally avoid using separators as much as possible. When package names are multiple
+   words, they usually should be in nested subdirectories.
+8. Document directories and filenames should use dashes rather than underscores.
+9. Examples should also illustrate best practices for configuration and using
+   the [system](https://kubernetes.io/docs/concepts/configuration/overview/).
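+
+As an illustration of tip 2, a minimal sketch of an asynchronous assertion. The pod name, the namespace, and the
+`k8sClient`/`ctx` variables are assumed to come from the suite setup; they are not taken verbatim from these tests:
+
+```go
+// Eventually polls the API server until the assertion holds or the timeout expires,
+// avoiding races between the scheduler acting and the test observing the result.
+Eventually(func(g Gomega) {
+	pod := &corev1.Pod{}
+	g.Expect(k8sClient.Get(ctx, types.NamespacedName{Name: "example-pod", Namespace: "default"}, pod)).To(Succeed())
+	g.Expect(pod.Spec.SchedulerName).To(Equal("keptn-scheduler"))
+}).WithTimeout(30 * time.Second).WithPolling(time.Second).Should(Succeed())
+```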