diff --git a/.gitignore b/.gitignore
index 2261d8a0e1..98695efe28 100644
--- a/.gitignore
+++ b/.gitignore
@@ -11,15 +11,6 @@
 # Output of the go coverage tool, specifically when used with LiteIDE
 *.out

-# Kubernetes demo
-k8s/demo/.vagrant/
-k8s/demo/*.log
-k8s/demo/*test*
-
-# Kubernetes lib
-k8s/lib/vagrant/*.box
-k8s/lib/vagrant/*.log
-
 # Local build for docs
 documentation/build/

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index a25ba212f2..663c10ef3f 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -8,10 +8,14 @@ However, for those individuals who want a bit more guidance on the best way to c

 That said, OpenEBS is an innovation in Open Source. You are welcome to contribute in any way you can and all the help provided is very much appreciated.

-- [Raise issues to request new functionality, fix documentation or for reporting bugs.](#raising-issues)
-- [Submit changes to improve documentation.](#submit-change-to-improve-documentation)
-- [Submit proposals for new features/enhancements.](#submit-proposals-for-new-features)
-- [Solve existing issues related to documentation or code.](#contributing-to-source-code-and-bug-fixes)
+- [Contributing to OpenEBS](#contributing-to-openebs)
+  - [Raising Issues](#raising-issues)
+  - [Submit Change to Improve Documentation](#submit-change-to-improve-documentation)
+  - [Submit Proposals for New Features](#submit-proposals-for-new-features)
+  - [Contributing to Source Code and Bug Fixes](#contributing-to-source-code-and-bug-fixes)
+  - [Solve Existing Issues](#solve-existing-issues)
+  - [Sign your work](#sign-your-work)
+  - [Join our community](#join-our-community)

 There are a few simple guidelines that you need to follow before providing your hacks.

@@ -39,7 +43,7 @@ There is always something more that is required, to make it easier to suit your
   Provide PRs with appropriate tags for bug fixes or enhancements to the source code. For a list of tags that could be used, see [this](./contribute/labels-of-issues.md).
   - For contributing to K8s demo, please refer to this [document](./contribute/CONTRIBUTING-TO-K8S-DEMO.md).
-  - For checking out how OpenEBS works with K8s, refer to this [document](./k8s/README.md)
+  - For checking out how OpenEBS works with K8s, refer to our [documentation](https://openebs.io/docs)

 * For contributing to Kubernetes OpenEBS Provisioner, please refer to this [document](./contribute/CONTRIBUTING-TO-KUBERNETES-OPENEBS-PROVISIONER.md).

diff --git a/contribute/CONTRIBUTING-TO-K8S-DEMO.md b/contribute/CONTRIBUTING-TO-K8S-DEMO.md
index c08b45c9d8..0b5d85874c 100644
--- a/contribute/CONTRIBUTING-TO-K8S-DEMO.md
+++ b/contribute/CONTRIBUTING-TO-K8S-DEMO.md
@@ -2,11 +2,9 @@

 This document describes the process for adding or improving the existing examples of applications using OpenEBS Volumes.

-Kubernetes YAML files for running application using OpenEBS Volumes are located under the folder [openebs/k8s/demo](https://github.com/openebs/openebs/tree/master/k8s/demo)
-
 Each application example should comprise the following:

-- K8s YAML file(s) for starting the application and its associated components. The volumes should point to the OpenEBS Storage Class. If the existing storage-classes does not suit the need, create a new storage class at [openebs-storageclasses.yaml](../k8s/openebs-storageclasses.yaml).
+- K8s YAML file(s) for starting the application and its associated components. The volumes should point to the OpenEBS Storage Class. If the existing storage-classes do not suit the need, you may create a new storage class.
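+
+  A minimal sketch of such a custom StorageClass (the name, CAS engine, and replica count below are illustrative, not defaults shipped with OpenEBS) could look like:
+
+  ```yaml
+  apiVersion: storage.k8s.io/v1
+  kind: StorageClass
+  metadata:
+    name: openebs-jiva-2-replica      # hypothetical name for this example
+    annotations:
+      openebs.io/cas-type: jiva       # assumed CAS engine for the sketch
+      cas.openebs.io/config: |
+        - name: ReplicaCount
+          value: "2"
+  provisioner: openebs.io/provisioner-iscsi
+  ```
+
+  A PVC can then request it with `storageClassName: openebs-jiva-2-replica`.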
Refer to our [documentation](https://openebs.io/docs) for examples.

 - K8s YAML file(s) for starting a client that accesses the application. This is optional, in cases where the application itself provides such a mechanism, as in Jupyter, WordPress, etc. When demonstrating a database-like application such as Apache Cassandra, Redis, and so on, it is recommended to have such a mechanism to test that the application has been launched.

diff --git a/contribute/design/1.x/jiva/2019152019-jiva-autosnap-deletion.md b/contribute/design/1.x/jiva/2019152019-jiva-autosnap-deletion.md
index 9f7abf4bb1..48ba62e20e 100644
--- a/contribute/design/1.x/jiva/2019152019-jiva-autosnap-deletion.md
+++ b/contribute/design/1.x/jiva/2019152019-jiva-autosnap-deletion.md
@@ -24,14 +24,15 @@ superseded-by:

 ## Table of Contents

-  * [Table of Contents](#table-of-contents)
-* [Summary](#summary)
-* [Motivation](#motivation)
-  * [Goals](#goals)
-* [Proposal](#proposal)
-  * [Implementation Details](#implementation-details)
-* [Performance impact](#performance-impact)
-* [Alternatives](#alternatives)
+- [Jiva automatic snapshot deletion](#jiva-automatic-snapshot-deletion)
+  - [Table of Contents](#table-of-contents)
+  - [Summary](#summary)
+  - [Motivation](#motivation)
+    - [Goals](#goals)
+  - [Proposal](#proposal)
+    - [Implementation Details](#implementation-details)
+  - [Performance impact](#performance-impact)
+  - [Alternatives](#alternatives)

 ## Summary

@@ -98,9 +99,6 @@ superseded-by:
    snapshot deletion is in progress by validating the chain and restart is
    required to rebuild the replica to be on the safer side.

-   NOTE: A script is written to delete the given number of snapshots automatically. You
-   can find the script [here](https://github.com/openebs/openebs/blob/master/k8s/jiva/snapshot-cleanup.sh)
-
   b) **Cleanup in background by picking the snapshots with smallest size**:

   - Run a goroutine which will pick up snapshots based on their size and start
Mitigations](#risks-and-mitigations)
  - [Graduation Criteria](#graduation-criteria)
  - [Implementation History](#implementation-history)
  - [Drawbacks](#drawbacks)
  - [Alternatives](#alternatives)
  - [Infrastructure Needed](#infrastructure-needed)

## Summary

@@ -208,9 +213,7 @@ This design proposes the following key changes:
   UpgradeJob is executed on an already upgraded resource, it will return
   success.

-  Note: This replaces the script based upgrades from OpenEBS 1.0.
-  Sample Kubernetes YAMLs for upgrading various resources can be
-  found [here](../../../../k8s/upgrades/1.0.0-1.1.0/).
+  Note: This replaces the script-based upgrades from OpenEBS 1.0.

  Status: Available in 1.1 and supports upgrading of Jiva Volumes,
  cStor Volumes and cStor pools from 1.0 to 1.1

@@ -353,7 +356,6 @@ and the reasoning behind selecting a certain approach.

- How does this design compare to the `kubectl`-based upgrade
-  introduced for upgrading from [0.8.2 to 0.9](https://github.com/openebs/openebs/tree/master/k8s/upgrades/0.8.2-0.9.0).

  The current design proposed in this document builds on top of the
  0.8.2 to 0.9 design, by improving on usability and agility of the

diff --git a/k8s/README.md b/k8s/README.md
deleted file mode 100644
index 01bb6b81ab..0000000000
--- a/k8s/README.md
+++ /dev/null
@@ -1,90 +0,0 @@
-# Using OpenEBS with K8s
-
-**_The artifacts in this repository contain unreleased changes._**
-
-If you are looking at deploying from a stable release, please follow the instructions at [Quick Start Guide](https://docs.openebs.io/docs/next/quickstartguide.html)
-
-If this is your first time working with Kubernetes, please go through these introductory tutorials:
-- https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615
-- https://kubernetes.io/docs/tutorials/kubernetes-basics/
-
-## Usage
-
-### Installing Pre-released OpenEBS **_(at your own risk)_**
-```
-kubectl apply -f openebs-operator.yaml
-```
-
-### (Optional) Enable monitoring using prometheus and grafana
-
-Use this step if you don't already have monitoring services installed
-on your k8s cluster.
-
-```
-kubectl apply -f openebs-monitoring-pg.yaml
-```
-
-You can also obtain Kubernetes resource metrics via the following step:
-
-```
-kubectl apply -f openebs-kube-state-metrics.yaml
-```
-
-This folder also contains a set of dashboards that can be imported into your Grafana:
-- [OpenEBS Persistent Volume Dashboard](https://github.com/openebs/openebs/blob/master/k8s/openebs-pg-dashboard.json)
-- [OpenEBS Storage Pool Dashboard](https://github.com/openebs/openebs/blob/master/k8s/openebs-pool-exporter.json)
-- [Node Exporter Dashboard](https://github.com/openebs/openebs/blob/master/k8s/openebs-node-exporter.json)
-- [Kubernetes cAdvisor Dashboard](https://github.com/openebs/openebs/blob/master/k8s/openebs-kubelet-cAdvisor.json) (metrics segregated by node)
-- [Kubernetes App Metrics Dashboard](https://github.com/openebs/openebs/blob/master/k8s/openebs-kube-state-metrics.json) (metrics segregated by namespace)
-
-### (Optional) Enable monitoring using Prometheus Operator
-
-It is assumed that you already have the Prometheus Operator deployed and working. If not, follow the instructions [here](https://github.com/helm/charts/tree/master/stable/prometheus-operator#installing-the-chart) to install it.
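-
-For example, a Helm 2 style install of that chart (the release name and namespace below are illustrative) could look like:
-
-```
-helm install stable/prometheus-operator --name prometheus-operator --namespace monitoring
-```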
-
-While deploying the above chart, make sure the following fields are set to these values:
-
-```yaml
-  serviceMonitorSelectorNilUsesHelmValues: false
-  serviceMonitorSelector: {}
-  serviceMonitorNamespaceSelector: {}
-```
-
-This ensures that all the [`ServiceMonitor` objects](https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md#related-resources) that we create later, in any namespace, will be selected.
-
-Now, to monitor OpenEBS-related resources, create the following `ServiceMonitor` object in the `openebs` namespace. See the [openebs-servicemonitor.yaml](openebs-servicemonitor.yaml) file.
-
-```bash
-kubectl -n openebs apply -f openebs-servicemonitor.yaml
-```
-
-Find [docs here](https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#servicemonitor) to read about the other fields of the `ServiceMonitor` object.
-
-Once this config is picked up by the Prometheus Operator, it will start scraping the metrics, and you can see them in the Prometheus dashboard. The metrics relevant to OpenEBS are conveniently prefixed with `openebs*` or `latest_openebs*`.
-
-### (Optional) Enable alerting using prometheus alert manager
-
-If you would like to receive alerts for specific Node, Kubernetes & OpenEBS conditions, set up the alert-manager:
-
-```
-kubectl apply -f openebs-alertmanager.yaml
-```
-
-NOTE: The alert rules are currently placed into the prometheus configmap in openebs-monitoring-pg.yaml.
-
-### (Optional) Setup Log collection using grafana loki
-
-Use the following step (requires setup of helm client & tiller server on the server) to set up the Grafana Loki stack on the cluster. On the Grafana console, select `loki` as
-the datasource and provide the appropriate URL (typically http://loki:3100) to visualize logs.
-
-```
-helm repo add loki https://grafana.github.io/loki/charts
-```
-```
-helm repo update
-```
-```
-helm upgrade --install loki --namespace=openebs loki/loki-stack
-```
-
-NOTE: A sample template specification of the components in the loki stack can be found [here](sample-loki-templates.md), obtained as part of a `helm --debug --dry-run` command.
-

diff --git a/k8s/charts/openebs/Chart.yaml b/k8s/charts/openebs/Chart.yaml
deleted file mode 100644
index 46a291a575..0000000000
--- a/k8s/charts/openebs/Chart.yaml
+++ /dev/null
@@ -1,19 +0,0 @@
-apiVersion: v1
-version: 1.6.0
-name: openebs
-appVersion: 1.6.0
-description: Containerized Storage for Containers
-icon: https://raw.githubusercontent.com/cncf/artwork/master/projects/openebs/icon/color/openebs-icon-color.png
-home: http://www.openebs.io/
-keywords:
-  - cloud-native-storage
-  - block-storage
-  - iSCSI
-  - storage
-sources:
-  - https://github.com/openebs/openebs
-maintainers:
-  - name: kmova
-    email: kiran.mova@openebs.io
-  - name: prateekpandey14
-    email: prateek.pandey@openebs.io

diff --git a/k8s/charts/openebs/README.md b/k8s/charts/openebs/README.md
deleted file mode 100644
index 4587a928b9..0000000000
--- a/k8s/charts/openebs/README.md
+++ /dev/null
@@ -1,145 +0,0 @@
-
--------------------------------------------------------------------------------
-IMPORTANT!!
-
-DEPRECATION NOTICE:
-
-The support for this chart will be discontinued soon.
Please plan to migrate
-and use stable/openebs chart located at:
- [https://github.com/helm/charts/tree/master/stable/openebs](https://github.com/helm/charts/tree/master/stable/openebs)
-
--------------------------------------------------------------------------------
-
-## Prerequisites
-
-- Kubernetes 1.9.7+ with RBAC enabled
-- iSCSI PV support in the underlying infrastructure
-- Helm is installed and the Tiller has admin privileges. To assign admin
-  to tiller, log in as admin and use the following instructions:
-
-  ```shell
-  kubectl -n kube-system create sa tiller
-  kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
-  kubectl -n kube-system patch deploy/tiller-deploy -p '{"spec": {"template": {"spec": {"serviceAccountName": "tiller"}}}}'
-  kubectl -n kube-system patch deployment tiller-deploy -p '{"spec": {"template": {"spec": {"automountServiceAccountToken": true}}}}'
-  ```
-
-- A namespace called "openebs" is created in the Cluster for running the
-  below instructions: `kubectl create namespace openebs`
-
-## Installing OpenEBS Charts Repository
-
-```shell
-helm repo add openebs-charts https://openebs.github.io/charts/
-helm repo update
-helm install openebs-charts/openebs --name openebs --namespace openebs
-```
-
-## Installing OpenEBS from this codebase
-
-```shell
-git clone https://github.com/openebs/openebs.git
-cd openebs/k8s/charts/openebs/
-helm install --name openebs --namespace openebs .
-```
-
-## Verify that OpenEBS Volumes can be created
-
-```shell
-#Check the OpenEBS Management Pods are running.
-kubectl get pods -n openebs
-#Create a test PVC
-kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/demo/pvc.yaml
-#Check the OpenEBS Volume Pods are created.
-kubectl get pods
-#Delete the test volume and associated Volume Pods.
-kubectl delete -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/demo/pvc.yaml
-
-```
-
-## Uninstalling OpenEBS from Chart codebase
-
-```shell
-helm ls --all
-# Note the openebs-chart-name from above command
-helm del --purge <openebs-chart-name>
-```
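-
-For instance, with the release name `openebs` used in the install examples above, the deletion would be:
-
-```shell
-helm del --purge openebs
-```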
-
-## Configuration
-
-The following table lists the configurable parameters of the OpenEBS chart and their default values.
-
| Parameter                                | Description                                    | Default                                    |
| ----------------------------------------| ---------------------------------------------- | ------------------------------------------ |
| `rbac.create`                            | Enable RBAC Resources                          | `true`                                     |
| `rbac.pspEnabled`                        | Create pod security policy resources           | `false`                                    |
| `image.pullPolicy`                       | Container pull policy                          | `IfNotPresent`                             |
| `apiserver.enabled`                      | Enable API Server                              | `true`                                     |
| `apiserver.image`                        | Image for API Server                           | `quay.io/openebs/m-apiserver`              |
| `apiserver.imageTag`                     | Image Tag for API Server                       | `1.6.0`                                    |
| `apiserver.replicas`                     | Number of API Server Replicas                  | `1`                                        |
| `apiserver.sparse.enabled`               | Create Sparse Pool based on Sparsefile         | `false`                                    |
| `provisioner.enabled`                    | Enable Provisioner                             | `true`                                     |
| `provisioner.image`                      | Image for Provisioner                          | `quay.io/openebs/openebs-k8s-provisioner`  |
| `provisioner.imageTag`                   | Image Tag for Provisioner                      | `1.6.0`                                    |
| `provisioner.replicas`                   | Number of Provisioner Replicas                 | `1`                                        |
| `localprovisioner.enabled`               | Enable localProvisioner                        | `true`                                     |
| `localprovisioner.image`                 | Image for localProvisioner                     | `quay.io/openebs/provisioner-localpv`      |
| `localprovisioner.imageTag`              | Image Tag for localProvisioner                 | `1.6.0`                                    |
| `localprovisioner.replicas`              | Number of localProvisioner Replicas            | `1`                                        |
| `localprovisioner.basePath`              | BasePath for hostPath volumes on Nodes         | `/var/openebs/local`                       |
| `webhook.enabled`                        | Enable admission server                        | `true`                                     |
| `webhook.image`                          | Image for admission server                     | `quay.io/openebs/admission-server`         |
| `webhook.imageTag`                       | Image Tag for admission server                 | `1.6.0`                                    |
| `webhook.replicas`                       | Number of admission server Replicas            | `1`                                        |
| `snapshotOperator.enabled`               | Enable Snapshot Provisioner                    | `true`                                     |
| `snapshotOperator.provisioner.image`     | Image for Snapshot Provisioner                 | `quay.io/openebs/snapshot-provisioner`     |
| `snapshotOperator.provisioner.imageTag`  | Image Tag for Snapshot Provisioner             | `1.6.0`                                    |
| `snapshotOperator.controller.image`      | Image for Snapshot Controller                  | `quay.io/openebs/snapshot-controller`      |
| `snapshotOperator.controller.imageTag`   | Image Tag for Snapshot Controller              | `1.6.0`                                    |
| `snapshotOperator.replicas`              | Number of Snapshot Operator Replicas           | `1`                                        |
| `ndm.enabled`                            | Enable Node Disk Manager                       | `true`                                     |
| `ndm.image`                              | Image for Node Disk Manager                    | `quay.io/openebs/node-disk-manager-amd64`  |
| `ndm.imageTag`                           | Image Tag for Node Disk Manager                | `v0.4.6`                                   |
| `ndm.sparse.path`                        | Directory where Sparse files are created       | `/var/openebs/sparse`                      |
| `ndm.sparse.size`                        | Size of the sparse file in bytes               | `10737418240`                              |
| `ndm.sparse.count`                       | Number of sparse files to be created           | `0`                                        |
| `ndm.filters.excludeVendors`             | Exclude devices with specified vendor          | `CLOUDBYT,OpenEBS`                         |
| `ndm.filters.excludePaths`               | Exclude devices with specified path patterns   | `loop,fd0,sr0,/dev/ram,/dev/dm-,/dev/md`   |
| `ndm.filters.includePaths`               | Include devices with specified path patterns   | `""`                                       |
| `ndm.probes.enableSeachest`              | Enable Seachest probe for NDM                  | `false`                                    |
| `ndmOperator.enabled`                    | Enable NDM Operator                            | `true`                                     |
| `ndmOperator.image`                      | Image for NDM Operator                         | `quay.io/openebs/node-disk-operator-amd64` |
| `ndmOperator.imageTag`                   | Image Tag for NDM Operator                     | `v0.4.6`                                   |
| `jiva.image`                             | Image for Jiva                                 | `quay.io/openebs/jiva`                     |
| `jiva.imageTag`                          | Image Tag for Jiva                             | `1.6.0`                                    |
| `jiva.replicas`                          | Number of Jiva Replicas                        | `3`                                        |
| `jiva.defaultStoragePath`                | hostpath used by default Jiva StorageClass     | `/var/openebs`                             |
| `cstor.pool.image`                       | Image for cStor Pool                           | `quay.io/openebs/cstor-pool`               |
| `cstor.pool.imageTag`                    | Image Tag for cStor Pool                       | `1.6.0`                                    |
| `cstor.poolMgmt.image`                   | Image for cStor Pool Management                | `quay.io/openebs/cstor-pool-mgmt`          |
| `cstor.poolMgmt.imageTag`                | Image Tag for cStor Pool Management            | `1.6.0`                                    |
| `cstor.target.image`                     | Image for cStor Target                         | `quay.io/openebs/cstor-istgt`              |
| `cstor.target.imageTag`                  | Image Tag for cStor Target                     | `1.6.0`                                    |
| `cstor.volumeMgmt.image`                 | Image for cStor Volume Management              | `quay.io/openebs/cstor-volume-mgmt`        |
| `cstor.volumeMgmt.imageTag`              | Image Tag for cStor Volume Management          | `1.6.0`                                    |
| `helper.image`                           | Image for helper                               | `quay.io/openebs/linux-utils`              |
| `helper.imageTag`                        | Image Tag for helper                           | `1.6.0`                                    |
| `policies.monitoring.image`              | Image for Prometheus Exporter                  | `quay.io/openebs/m-exporter`               |
| `policies.monitoring.imageTag`           | Image Tag for Prometheus Exporter              | `1.6.0`                                    |
| `analytics.enabled`                      | Enable sending stats to Google Analytics       | `true`                                     |
| `analytics.pingInterval`                 | Duration (hours) between sending ping stat     | `24h`                                      |
| `defaultStorageConfig.enabled`           | Enable default storage class installation      | `true`                                     |
| `healthCheck.initialDelaySeconds`        | Delay before liveness probe is initiated       | `30`                                       |
| `healthCheck.periodSeconds`              | How often to perform the liveness probe        | `60`                                       |

Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

```shell
helm install --name openebs -f values.yaml openebs-charts/openebs
```

> **Tip**: You can use the default [values.yaml](values.yaml)

diff --git a/k8s/charts/openebs/templates/NOTES.txt b/k8s/charts/openebs/templates/NOTES.txt
deleted file mode 100644
index 8f14ecedb7..0000000000
--- a/k8s/charts/openebs/templates/NOTES.txt
+++ /dev/null
@@ -1,28 +0,0 @@
-OpenEBS has been installed. Check its status by running:
-$ kubectl get pods -n {{ .Release.Namespace }}
-
-For dynamically creating OpenEBS Volumes, you can either create a new StorageClass or
-use one of the default storage classes provided by OpenEBS.
-
-Use `kubectl get sc` to see the list of installed OpenEBS StorageClasses. A sample
-PVC spec using `openebs-jiva-default` StorageClass is given below:
-
----
-kind: PersistentVolumeClaim
-apiVersion: v1
-metadata:
-  name: demo-vol-claim
-spec:
-  storageClassName: openebs-jiva-default
-  accessModes:
-    - ReadWriteOnce
-  resources:
-    requests:
-      storage: 5G
----
-
-For more information, please visit http://docs.openebs.io/.
-
-Please note that OpenEBS uses iSCSI for connecting applications with the
-OpenEBS Volumes and your nodes should have the iSCSI initiator installed.
-

diff --git a/k8s/charts/openebs/templates/_helpers.tpl b/k8s/charts/openebs/templates/_helpers.tpl
deleted file mode 100644
index 09c63c5a4a..0000000000
--- a/k8s/charts/openebs/templates/_helpers.tpl
+++ /dev/null
@@ -1,43 +0,0 @@
-{{/* vim: set filetype=mustache: */}}
-{{/*
-Expand the name of the chart.
-*/}}
-{{- define "openebs.name" -}}
-{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
-{{- end -}}
-
-{{/*
-Create a default fully qualified app name.
-We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
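-(For example, an assumed 70-character release name would be cut to its first 63
-characters, and any trailing "-" left by the cut would be trimmed.)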
-If release name contains chart name it will be used as a full name. -*/}} -{{- define "openebs.fullname" -}} -{{- if .Values.fullnameOverride -}} -{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} -{{- else -}} -{{- $name := default .Chart.Name .Values.nameOverride -}} -{{- if contains $name .Release.Name -}} -{{- .Release.Name | trunc 63 | trimSuffix "-" -}} -{{- else -}} -{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} -{{- end -}} -{{- end -}} -{{- end -}} - -{{/* -Create chart name and version as used by the chart label. -*/}} -{{- define "openebs.chart" -}} -{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} -{{- end -}} - -{{/* -Create the name of the service account to use -*/}} -{{- define "openebs.serviceAccountName" -}} -{{- if .Values.serviceAccount.create -}} - {{ default (include "openebs.fullname" .) .Values.serviceAccount.name }} -{{- else -}} - {{ default "default" .Values.serviceAccount.name }} -{{- end -}} -{{- end -}} diff --git a/k8s/charts/openebs/templates/clusterrole.yaml b/k8s/charts/openebs/templates/clusterrole.yaml deleted file mode 100644 index c91c0f6773..0000000000 --- a/k8s/charts/openebs/templates/clusterrole.yaml +++ /dev/null @@ -1,65 +0,0 @@ -{{- if .Values.rbac.create }} -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: {{ template "openebs.fullname" . }} - labels: - app: {{ template "openebs.name" . }} - chart: {{ template "openebs.chart" . }} - release: {{ .Release.Name }} - heritage: {{ .Release.Service }} -rules: -- apiGroups: ["*"] - resources: ["nodes", "nodes/proxy"] - verbs: ["*"] -- apiGroups: ["*"] - resources: ["namespaces", "services", "pods", "pods/exec", "deployments", "deployments/finalizers", "replicationcontrollers", "replicasets", "events", "endpoints", "configmaps", "secrets", "jobs", "cronjobs" ] - verbs: ["*"] -- apiGroups: ["*"] - resources: ["statefulsets", "daemonsets"] - verbs: ["*"] -- apiGroups: ["*"] - resources: ["resourcequotas", "limitranges"] - verbs: ["list", "watch"] -- apiGroups: ["*"] - resources: ["ingresses", "horizontalpodautoscalers", "verticalpodautoscalers", "poddisruptionbudgets", "certificatesigningrequests"] - verbs: ["list", "watch"] -- apiGroups: ["*"] - resources: ["storageclasses", "persistentvolumeclaims", "persistentvolumes"] - verbs: ["*"] -- apiGroups: ["volumesnapshot.external-storage.k8s.io"] - resources: ["volumesnapshots", "volumesnapshotdatas"] - verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] -- apiGroups: ["apiextensions.k8s.io"] - resources: ["customresourcedefinitions"] - verbs: [ "get", "list", "create", "update", "delete", "patch"] -- apiGroups: ["*"] - resources: [ "disks", "blockdevices", "blockdeviceclaims"] - verbs: ["*" ] -- apiGroups: ["*"] - resources: [ "cstorpoolclusters", "storagepoolclaims", "storagepoolclaims/finalizers", "cstorpoolclusters/finalizers", "storagepools"] - verbs: ["*" ] -- apiGroups: ["*"] - resources: [ "castemplates", "runtasks"] - verbs: ["*" ] -- apiGroups: ["*"] - resources: [ "cstorpools", "cstorpools/finalizers", "cstorvolumereplicas", "cstorvolumes", "cstorvolumeclaims", "cstorvolumepolicies"] - verbs: ["*" ] -- apiGroups: ["*"] - resources: [ "cstorpoolinstances", "cstorpoolinstances/finalizers"] - verbs: ["*" ] -- apiGroups: ["*"] - resources: [ "cstorbackups", "cstorrestores", "cstorcompletedbackups"] - verbs: ["*" ] -- apiGroups: ["*"] - resources: [ "upgradetasks"] - verbs: ["*" ] -- apiGroups: ["coordination.k8s.io"] - 
resources: ["leases"] - verbs: ["get", "watch", "list", "delete", "update", "create"] -- apiGroups: ["admissionregistration.k8s.io"] - resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"] - verbs: ["get", "create", "list", "delete", "update", "patch"] -- nonResourceURLs: ["/metrics"] - verbs: ["get"] -{{- end }} diff --git a/k8s/charts/openebs/templates/clusterrolebinding.yaml b/k8s/charts/openebs/templates/clusterrolebinding.yaml deleted file mode 100644 index 0ada25cd68..0000000000 --- a/k8s/charts/openebs/templates/clusterrolebinding.yaml +++ /dev/null @@ -1,19 +0,0 @@ -{{- if .Values.rbac.create }} -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: {{ template "openebs.fullname" . }} - labels: - app: {{ template "openebs.name" . }} - chart: {{ template "openebs.chart" . }} - release: {{ .Release.Name }} - heritage: {{ .Release.Service }} -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: {{ template "openebs.fullname" . }} -subjects: -- kind: ServiceAccount - name: {{ template "openebs.serviceAccountName" . }} - namespace: {{ .Release.Namespace }} -{{- end }} diff --git a/k8s/charts/openebs/templates/cm-node-disk-manager.yaml b/k8s/charts/openebs/templates/cm-node-disk-manager.yaml deleted file mode 100644 index 165eabb508..0000000000 --- a/k8s/charts/openebs/templates/cm-node-disk-manager.yaml +++ /dev/null @@ -1,46 +0,0 @@ -{{- if .Values.ndm.enabled }} -# This is the node-disk-manager related config. -# It can be used to customize the disks probes and filters -apiVersion: v1 -kind: ConfigMap -metadata: - name: {{ template "openebs.fullname" . }}-ndm-config - labels: - app: {{ template "openebs.name" . }} - chart: {{ template "openebs.chart" . }} - release: {{ .Release.Name }} - heritage: {{ .Release.Service }} - component: ndm-config - openebs.io/component-name: ndm-config -data: - # udev-probe is default or primary probe which should be enabled to run ndm - # filterconfigs contains configs of filters - in the form of include - # and exclude comma separated strings - node-disk-manager.config: | - probeconfigs: - - key: udev-probe - name: udev probe - state: true - - key: seachest-probe - name: seachest probe - state: {{ .Values.ndm.probes.enableSeachest }} - - key: smart-probe - name: smart probe - state: true - filterconfigs: - - key: os-disk-exclude-filter - name: os disk exclude filter - state: {{ .Values.ndm.filters.enableOsDiskExcludeFilter }} - exclude: "/,/etc/hosts,/boot" - - key: vendor-filter - name: vendor filter - state: {{ .Values.ndm.filters.enableVendorFilter }} - include: "" - exclude: "{{ .Values.ndm.filters.excludeVendors }}" - - key: path-filter - name: path filter - state: {{ .Values.ndm.filters.enablePathFilter }} - include: "{{ .Values.ndm.filters.includePaths }}" - exclude: "{{ .Values.ndm.filters.excludePaths }}" ---- -{{- end }} diff --git a/k8s/charts/openebs/templates/daemonset-ndm.yaml b/k8s/charts/openebs/templates/daemonset-ndm.yaml deleted file mode 100644 index d60d640af6..0000000000 --- a/k8s/charts/openebs/templates/daemonset-ndm.yaml +++ /dev/null @@ -1,137 +0,0 @@ -{{- if .Values.ndm.enabled }} -apiVersion: apps/v1 -kind: DaemonSet -metadata: - name: {{ template "openebs.fullname" . }}-ndm - labels: - app: {{ template "openebs.name" . }} - chart: {{ template "openebs.chart" . 
}} - release: {{ .Release.Name }} - heritage: {{ .Release.Service }} - component: ndm - openebs.io/component-name: ndm - openebs.io/version: {{ .Values.release.version }} -spec: - updateStrategy: - type: "RollingUpdate" - selector: - matchLabels: - app: {{ template "openebs.name" . }} - release: {{ .Release.Name }} - component: ndm - template: - metadata: - labels: - app: {{ template "openebs.name" . }} - release: {{ .Release.Name }} - component: ndm - openebs.io/component-name: ndm - name: openebs-ndm - openebs.io/version: {{ .Values.release.version }} - spec: - serviceAccountName: {{ template "openebs.serviceAccountName" . }} - hostNetwork: true - containers: - - name: {{ template "openebs.name" . }}-ndm - image: "{{ .Values.ndm.image }}:{{ .Values.ndm.imageTag }}" - imagePullPolicy: {{ .Values.image.pullPolicy }} - securityContext: - privileged: true - env: - # namespace in which NDM is installed will be passed to NDM Daemonset - # as environment variable - - name: NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - # pass hostname as env variable using downward API to the NDM container - - name: NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName -{{- if .Values.ndm.sparse }} -{{- if .Values.ndm.sparse.path }} - # specify the directory where the sparse files need to be created. - # if not specified, then sparse files will not be created. - - name: SPARSE_FILE_DIR - value: "{{ .Values.ndm.sparse.path }}" -{{- end }} -{{- if .Values.ndm.sparse.size }} - # Size(bytes) of the sparse file to be created. - - name: SPARSE_FILE_SIZE - value: "{{ .Values.ndm.sparse.size }}" -{{- end }} -{{- if .Values.ndm.sparse.count }} - # Specify the number of sparse files to be created - - name: SPARSE_FILE_COUNT - value: "{{ .Values.ndm.sparse.count }}" -{{- end }} -{{- end }} - livenessProbe: - exec: - command: - - pgrep - - ".*ndm" - initialDelaySeconds: {{ .Values.ndm.healthCheck.initialDelaySeconds }} - periodSeconds: {{ .Values.ndm.healthCheck.periodSeconds }} - volumeMounts: - - name: config - mountPath: /host/node-disk-manager.config - subPath: node-disk-manager.config - readOnly: true - - name: udev - mountPath: /run/udev - - name: procmount - mountPath: /host/proc - readOnly: true - - name: basepath - mountPath: /var/openebs/ndm -{{- if .Values.ndm.sparse }} -{{- if .Values.ndm.sparse.path }} - - name: sparsepath - mountPath: {{ .Values.ndm.sparse.path }} -{{- end }} -{{- end }} - volumes: - - name: config - configMap: - name: {{ template "openebs.fullname" . }}-ndm-config - - name: udev - hostPath: - path: /run/udev - type: Directory - # mount /proc (to access mount file of process 1 of host) inside container - # to read mount-point of disks and partitions - - name: procmount - hostPath: - path: /proc - type: Directory - - name: basepath - hostPath: - path: "{{ .Values.persistentStoragePath.baseDir }}/ndm" - type: DirectoryOrCreate -{{- if .Values.ndm.sparse }} -{{- if .Values.ndm.sparse.path }} - - name: sparsepath - hostPath: - path: {{ .Values.ndm.sparse.path }} -{{- end }} -{{- end }} - # By default the node-disk-manager will be run on all kubernetes nodes - # If you would like to limit this to only some nodes, say the nodes - # that have storage attached, you could label those node and use - # nodeSelector. - # - # e.g. 
label the storage nodes with - "openebs.io/nodegroup"="storage-node" - # kubectl label node "openebs.io/nodegroup"="storage-node" - #nodeSelector: - # "openebs.io/nodegroup": "storage-node" -{{- if .Values.ndm.nodeSelector }} - nodeSelector: -{{ toYaml .Values.ndm.nodeSelector | indent 8 }} -{{- end }} -{{- if .Values.ndm.tolerations }} - tolerations: -{{ toYaml .Values.ndm.tolerations | indent 8 }} -{{- end }} -{{- end }} diff --git a/k8s/charts/openebs/templates/deployment-admission-server.yaml b/k8s/charts/openebs/templates/deployment-admission-server.yaml deleted file mode 100644 index cfe1978f06..0000000000 --- a/k8s/charts/openebs/templates/deployment-admission-server.yaml +++ /dev/null @@ -1,57 +0,0 @@ -{{- if .Values.webhook.enabled }} -apiVersion: apps/v1 -kind: Deployment -metadata: - name: {{ template "openebs.fullname" . }}-admission-server - labels: - app: admission-webhook - chart: {{ template "openebs.chart" . }} - release: {{ .Release.Name }} - heritage: {{ .Release.Service }} - component: admission-webhook - openebs.io/component-name: admission-webhook - openebs.io/version: {{ .Values.release.version }} -spec: - replicas: {{ .Values.webhook.replicas }} - strategy: - type: "Recreate" - rollingUpdate: null - selector: - matchLabels: - app: admission-webhook - template: - metadata: - labels: - app: admission-webhook - name: admission-webhook - release: {{ .Release.Name }} - openebs.io/version: {{ .Values.release.version }} - openebs.io/component-name: admission-webhook - spec: -{{- if .Values.webhook.nodeSelector }} - nodeSelector: -{{ toYaml .Values.webhook.nodeSelector | indent 8 }} -{{- end }} -{{- if .Values.webhook.tolerations }} - tolerations: -{{ toYaml .Values.webhook.tolerations | indent 8 }} -{{- end }} -{{- if .Values.webhook.affinity }} - affinity: -{{ toYaml .Values.webhook.affinity | indent 8 }} -{{- end }} - serviceAccountName: {{ template "openebs.serviceAccountName" . }} - containers: - - name: admission-webhook - image: "{{ .Values.webhook.image }}:{{ .Values.webhook.imageTag }}" - imagePullPolicy: {{ .Values.image.pullPolicy }} - args: - - -alsologtostderr - - -v=2 - - 2>&1 - env: - - name: OPENEBS_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace -{{- end }} diff --git a/k8s/charts/openebs/templates/deployment-local-provisioner.yaml b/k8s/charts/openebs/templates/deployment-local-provisioner.yaml deleted file mode 100644 index b22a269b7b..0000000000 --- a/k8s/charts/openebs/templates/deployment-local-provisioner.yaml +++ /dev/null @@ -1,92 +0,0 @@ -{{- if .Values.localprovisioner.enabled }} -apiVersion: apps/v1 -kind: Deployment -metadata: - name: {{ template "openebs.fullname" . }}-localpv-provisioner - labels: - app: {{ template "openebs.name" . }} - chart: {{ template "openebs.chart" . }} - release: {{ .Release.Name }} - heritage: {{ .Release.Service }} - component: localpv-provisioner - openebs.io/component-name: openebs-localpv-provisioner - openebs.io/version: {{ .Values.release.version }} -spec: - replicas: {{ .Values.localprovisioner.replicas }} - strategy: - type: "Recreate" - rollingUpdate: null - selector: - matchLabels: - app: {{ template "openebs.name" . }} - release: {{ .Release.Name }} - template: - metadata: - labels: - app: {{ template "openebs.name" . 
}} - release: {{ .Release.Name }} - component: localpv-provisioner - name: openebs-localpv-provisioner - openebs.io/component-name: openebs-localpv-provisioner - openebs.io/version: {{ .Values.release.version }} - spec: - serviceAccountName: {{ template "openebs.serviceAccountName" . }} - containers: - - name: {{ template "openebs.name" . }}-localpv-provisioner - image: "{{ .Values.localprovisioner.image }}:{{ .Values.localprovisioner.imageTag }}" - imagePullPolicy: {{ .Values.image.pullPolicy }} - env: - # OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s - # based on this address. This is ignored if empty. - # This is supported for openebs provisioner version 0.5.2 onwards - #- name: OPENEBS_IO_K8S_MASTER - # value: "http://10.128.0.12:8080" - # OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s - # based on this config. This is ignored if empty. - # This is supported for openebs provisioner version 0.5.2 onwards - #- name: OPENEBS_IO_KUBE_CONFIG - # value: "/home/ubuntu/.kube/config" - # OPENEBS_NAMESPACE is the namespace that this provisioner will - # lookup to find maya api service - - name: OPENEBS_NAMESPACE - value: "{{ .Release.Namespace }}" - - name: NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - # OPENEBS_SERVICE_ACCOUNT provides the service account of this pod as - # environment variable - - name: OPENEBS_SERVICE_ACCOUNT - valueFrom: - fieldRef: - fieldPath: spec.serviceAccountName - # OPENEBS_IO_BASE_PATH is the environment variable that provides the - # default base path on the node where host-path PVs will be provisioned. - - name: OPENEBS_IO_ENABLE_ANALYTICS - value: "{{ .Values.analytics.enabled }}" - - name: OPENEBS_IO_BASE_PATH - value: "{{ .Values.localprovisioner.basePath }}" - - name: OPENEBS_IO_HELPER_IMAGE - value: "{{ .Values.helper.image }}:{{ .Values.helper.imageTag }}" - - name: OPENEBS_IO_INSTALLER_TYPE - value: "charts-helm" - livenessProbe: - exec: - command: - - pgrep - - ".*localpv" - initialDelaySeconds: {{ .Values.localprovisioner.healthCheck.initialDelaySeconds }} - periodSeconds: {{ .Values.localprovisioner.healthCheck.periodSeconds }} -{{- if .Values.localprovisioner.nodeSelector }} - nodeSelector: -{{ toYaml .Values.localprovisioner.nodeSelector | indent 8 }} -{{- end }} -{{- if .Values.localprovisioner.tolerations }} - tolerations: -{{ toYaml .Values.localprovisioner.tolerations | indent 8 }} -{{- end }} -{{- if .Values.localprovisioner.affinity }} - affinity: -{{ toYaml .Values.localprovisioner.affinity | indent 8 }} -{{- end }} -{{- end }} diff --git a/k8s/charts/openebs/templates/deployment-maya-apiserver.yaml b/k8s/charts/openebs/templates/deployment-maya-apiserver.yaml deleted file mode 100644 index 6204b5f6f2..0000000000 --- a/k8s/charts/openebs/templates/deployment-maya-apiserver.yaml +++ /dev/null @@ -1,161 +0,0 @@ -{{- if .Values.apiserver.enabled }} -apiVersion: apps/v1 -kind: Deployment -metadata: - name: {{ template "openebs.fullname" . }}-apiserver - labels: - app: {{ template "openebs.name" . }} - chart: {{ template "openebs.chart" . }} - release: {{ .Release.Name }} - heritage: {{ .Release.Service }} - component: apiserver - name: maya-apiserver - openebs.io/component-name: maya-apiserver - openebs.io/version: {{ .Values.release.version }} -spec: - replicas: {{ .Values.apiserver.replicas }} - strategy: - type: "Recreate" - rollingUpdate: null - selector: - matchLabels: - app: {{ template "openebs.name" . 
}} - release: {{ .Release.Name }} - template: - metadata: - labels: - app: {{ template "openebs.name" . }} - release: {{ .Release.Name }} - component: apiserver - name: maya-apiserver - openebs.io/component-name: maya-apiserver - openebs.io/version: {{ .Values.release.version }} - spec: - serviceAccountName: {{ template "openebs.serviceAccountName" . }} - containers: - - name: {{ template "openebs.name" . }}-apiserver - image: "{{ .Values.apiserver.image }}:{{ .Values.apiserver.imageTag }}" - imagePullPolicy: {{ .Values.image.pullPolicy }} - ports: - - containerPort: {{ .Values.apiserver.ports.internalPort }} - env: - # OPENEBS_IO_KUBE_CONFIG enables maya api service to connect to K8s - # based on this config. This is ignored if empty. - # This is supported for maya api server version 0.5.2 onwards - #- name: OPENEBS_IO_KUBE_CONFIG - # value: "/home/ubuntu/.kube/config" - # OPENEBS_IO_K8S_MASTER enables maya api service to connect to K8s - # based on this address. This is ignored if empty. - # This is supported for maya api server version 0.5.2 onwards - #- name: OPENEBS_IO_K8S_MASTER - # value: "http://172.28.128.3:8080" - # OPENEBS_NAMESPACE provides the namespace of this deployment as an - # environment variable - - name: OPENEBS_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - # OPENEBS_SERVICE_ACCOUNT provides the service account of this pod as - # environment variable - - name: OPENEBS_SERVICE_ACCOUNT - valueFrom: - fieldRef: - fieldPath: spec.serviceAccountName - # OPENEBS_MAYA_POD_NAME provides the name of this pod as - # environment variable - - name: OPENEBS_MAYA_POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - # If OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG is false then OpenEBS default - # storageclass and storagepool will not be created. - - name: OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG - value: "{{ .Values.defaultStorageConfig.enabled }}" - # OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL decides whether default cstor sparse pool should be - # configured as a part of openebs installation. - # If "true" a default cstor sparse pool will be configured, if "false" it will not be configured. - # This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG - # is set to true - - name: OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL - value: "{{ .Values.apiserver.sparse.enabled }}" - # OPENEBS_IO_CSTOR_TARGET_DIR can be used to specify the hostpath - # to be used for saving the shared content between the side cars - # of cstor volume pod. - # The default path used is /var/openebs/sparse - - name: OPENEBS_IO_CSTOR_TARGET_DIR - value: "{{ .Values.ndm.sparse.path }}" - # OPENEBS_IO_CSTOR_POOL_SPARSE_DIR can be used to specify the hostpath - # to be used for saving the shared content between the side cars - # of cstor pool pod. This ENV is also used to indicate the location - # of the sparse devices. 
- # The default path used is /var/openebs/sparse - - name: OPENEBS_IO_CSTOR_POOL_SPARSE_DIR - value: "{{ .Values.ndm.sparse.path }}" - # OPENEBS_IO_JIVA_POOL_DIR can be used to specify the hostpath - # to be used for default Jiva StoragePool loaded by OpenEBS - # The default path used is /var/openebs - # This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG - # is set to true - - name: OPENEBS_IO_JIVA_POOL_DIR - value: "{{ .Values.jiva.defaultStoragePath }}" - # OPENEBS_IO_LOCALPV_HOSTPATH_DIR can be used to specify the hostpath - # to be used for default openebs-hostpath storageclass loaded by OpenEBS - # The default path used is /var/openebs/local - # This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG - # is set to true - - name: OPENEBS_IO_LOCALPV_HOSTPATH_DIR - value: "{{ .Values.localprovisioner.basePath }}" - # OPENEBS_IO_BASE_DIR used to specify base directory to store OpenEBS - # related files - - name: OPENEBS_IO_BASE_DIR - value: "{{ .Values.persistentStoragePath.baseDir }}" - - name: OPENEBS_IO_JIVA_CONTROLLER_IMAGE - value: "{{ .Values.jiva.image }}:{{ .Values.jiva.imageTag }}" - - name: OPENEBS_IO_JIVA_REPLICA_IMAGE - value: "{{ .Values.jiva.image }}:{{ .Values.jiva.imageTag }}" - - name: OPENEBS_IO_JIVA_REPLICA_COUNT - value: "{{ .Values.jiva.replicas }}" - - name: OPENEBS_IO_CSTOR_TARGET_IMAGE - value: "{{ .Values.cstor.target.image }}:{{ .Values.cstor.target.imageTag }}" - - name: OPENEBS_IO_CSTOR_POOL_IMAGE - value: "{{ .Values.cstor.pool.image }}:{{ .Values.cstor.pool.imageTag }}" - - name: OPENEBS_IO_CSTOR_POOL_MGMT_IMAGE - value: "{{ .Values.cstor.poolMgmt.image }}:{{ .Values.cstor.poolMgmt.imageTag }}" - - name: OPENEBS_IO_CSTOR_VOLUME_MGMT_IMAGE - value: "{{ .Values.cstor.volumeMgmt.image }}:{{ .Values.cstor.volumeMgmt.imageTag }}" - - name: OPENEBS_IO_VOLUME_MONITOR_IMAGE - value: "{{ .Values.policies.monitoring.image }}:{{ .Values.policies.monitoring.imageTag }}" - - name: OPENEBS_IO_CSTOR_POOL_EXPORTER_IMAGE - value: "{{ .Values.policies.monitoring.image }}:{{ .Values.policies.monitoring.imageTag }}" - - name: OPENEBS_IO_HELPER_IMAGE - value: "{{ .Values.helper.image }}:{{ .Values.helper.imageTag }}" - # OPENEBS_IO_ENABLE_ANALYTICS if set to true sends anonymous usage - # events to Google Analytics - - name: OPENEBS_IO_ENABLE_ANALYTICS - value: "{{ .Values.analytics.enabled }}" - # OPENEBS_IO_ANALYTICS_PING_INTERVAL can be used to specify the duration (in hours) - # for periodic ping events sent to Google Analytics. Default is 24 hours. 
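-          # e.g. with the chart default analytics.pingInterval of "24h",
-          # this line renders as value: "24h"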
- - name: OPENEBS_IO_ANALYTICS_PING_INTERVAL - value: "{{ .Values.analytics.pingInterval }}" - - name: OPENEBS_IO_INSTALLER_TYPE - value: "charts-helm" - livenessProbe: - exec: - command: - - /usr/local/bin/mayactl - - version - initialDelaySeconds: {{ .Values.apiserver.healthCheck.initialDelaySeconds }} - periodSeconds: {{ .Values.apiserver.healthCheck.periodSeconds }} -{{- if .Values.apiserver.nodeSelector }} - nodeSelector: -{{ toYaml .Values.apiserver.nodeSelector | indent 8 }} -{{- end }} -{{- if .Values.apiserver.tolerations }} - tolerations: -{{ toYaml .Values.apiserver.tolerations | indent 8 }} -{{- end }} -{{- if .Values.apiserver.affinity }} - affinity: -{{ toYaml .Values.apiserver.affinity | indent 8 }} -{{- end }} -{{- end }} diff --git a/k8s/charts/openebs/templates/deployment-maya-provisioner.yaml b/k8s/charts/openebs/templates/deployment-maya-provisioner.yaml deleted file mode 100644 index b1173dd9f9..0000000000 --- a/k8s/charts/openebs/templates/deployment-maya-provisioner.yaml +++ /dev/null @@ -1,91 +0,0 @@ -{{- if .Values.provisioner.enabled }} -apiVersion: apps/v1 -kind: Deployment -metadata: - name: {{ template "openebs.fullname" . }}-provisioner - labels: - app: {{ template "openebs.name" . }} - chart: {{ template "openebs.chart" . }} - release: {{ .Release.Name }} - heritage: {{ .Release.Service }} - component: provisioner - name: openebs-provisioner - openebs.io/component-name: openebs-provisioner - openebs.io/version: {{ .Values.release.version }} -spec: - replicas: {{ .Values.provisioner.replicas }} - strategy: - type: "Recreate" - rollingUpdate: null - selector: - matchLabels: - app: {{ template "openebs.name" . }} - release: {{ .Release.Name }} - template: - metadata: - labels: - app: {{ template "openebs.name" . }} - release: {{ .Release.Name }} - component: provisioner - name: openebs-provisioner - openebs.io/component-name: openebs-provisioner - openebs.io/version: {{ .Values.release.version }} - spec: - serviceAccountName: {{ template "openebs.serviceAccountName" . }} - containers: - - name: {{ template "openebs.name" . }}-provisioner - image: "{{ .Values.provisioner.image }}:{{ .Values.provisioner.imageTag }}" - imagePullPolicy: {{ .Values.image.pullPolicy }} - env: - # OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s - # based on this address. This is ignored if empty. - # This is supported for openebs provisioner version 0.5.2 onwards - #- name: OPENEBS_IO_K8S_MASTER - # value: "http://10.128.0.12:8080" - # OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s - # based on this config. This is ignored if empty. - # This is supported for openebs provisioner version 0.5.2 onwards - #- name: OPENEBS_IO_KUBE_CONFIG - # value: "/home/ubuntu/.kube/config" - # OPENEBS_NAMESPACE is the namespace that this provisioner will - # lookup to find maya api service - - name: OPENEBS_NAMESPACE - value: "{{ .Release.Namespace }}" - - name: NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - # OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name, - # that provisioner should forward the volume create/delete requests. - # If not present, "maya-apiserver-service" will be used for lookup. - # This is supported for openebs provisioner version 0.5.3-RC1 onwards - - name: OPENEBS_MAYA_SERVICE_NAME - value: "{{ template "openebs.fullname" . }}-apiservice" - # The following values will be set as annotations to the PV object. 
- # Refer : https://github.com/openebs/external-storage/pull/15 - #- name: OPENEBS_MONITOR_URL - # value: "{{ .Values.provisioner.monitorUrl }}" - #- name: OPENEBS_MONITOR_VOLKEY - # value: "{{ .Values.provisioner.monitorVolumeKey }}" - #- name: MAYA_PORTAL_URL - # value: "{{ .Values.provisioner.mayaPortalUrl }}" - livenessProbe: - exec: - command: - - pgrep - - ".*openebs" - initialDelaySeconds: {{ .Values.provisioner.healthCheck.initialDelaySeconds }} - periodSeconds: {{ .Values.provisioner.healthCheck.periodSeconds }} -{{- if .Values.provisioner.nodeSelector }} - nodeSelector: -{{ toYaml .Values.provisioner.nodeSelector | indent 8 }} -{{- end }} -{{- if .Values.provisioner.tolerations }} - tolerations: -{{ toYaml .Values.provisioner.tolerations | indent 8 }} -{{- end }} -{{- if .Values.provisioner.affinity }} - affinity: -{{ toYaml .Values.provisioner.affinity | indent 8 }} -{{- end }} -{{- end }} diff --git a/k8s/charts/openebs/templates/deployment-maya-snapshot-operator.yaml b/k8s/charts/openebs/templates/deployment-maya-snapshot-operator.yaml deleted file mode 100644 index 98c3327a1c..0000000000 --- a/k8s/charts/openebs/templates/deployment-maya-snapshot-operator.yaml +++ /dev/null @@ -1,117 +0,0 @@ -{{- if .Values.snapshotOperator.enabled }} -apiVersion: apps/v1 -kind: Deployment -metadata: - name: {{ template "openebs.fullname" . }}-snapshot-operator - labels: - app: {{ template "openebs.name" . }} - chart: {{ template "openebs.chart" . }} - release: {{ .Release.Name }} - heritage: {{ .Release.Service }} - component: snapshot-operator - openebs.io/component-name: openebs-snapshot-operator - openebs.io/version: {{ .Values.release.version }} -spec: - replicas: {{ .Values.snapshotOperator.replicas }} - selector: - matchLabels: - app: {{ template "openebs.name" . }} - release: {{ .Release.Name }} - strategy: - type: "Recreate" - rollingUpdate: null - template: - metadata: - labels: - app: {{ template "openebs.name" . }} - release: {{ .Release.Name }} - component: snapshot-operator - name: openebs-snapshot-operator - openebs.io/version: {{ .Values.release.version }} - openebs.io/component-name: openebs-snapshot-operator - spec: - serviceAccountName: {{ template "openebs.serviceAccountName" . }} - containers: - - name: {{ template "openebs.name" . }}-snapshot-controller - image: "{{ .Values.snapshotOperator.controller.image }}:{{ .Values.snapshotOperator.controller.imageTag }}" - imagePullPolicy: {{ .Values.image.pullPolicy }} - env: - # OPENEBS_IO_K8S_MASTER enables openebs snapshot controller to connect to K8s - # based on this address. This is ignored if empty. - # This is supported for openebs snapshot controller version 0.6-RC1 onwards - #- name: OPENEBS_IO_K8S_MASTER - # value: "http://10.128.0.12:8080" - # OPENEBS_IO_KUBE_CONFIG enables openebs snapshot controller to connect to K8s - # based on this config. This is ignored if empty. - # This is supported for openebs snapshot controller version 0.6-RC1 onwards - #- name: OPENEBS_IO_KUBE_CONFIG - # value: "/home/ubuntu/.kube/config" - # OPENEBS_NAMESPACE is the namespace that this snapshot controller will - # lookup to find maya api service - - name: OPENEBS_NAMESPACE - value: "{{ .Release.Namespace }}" - - name: NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - # OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name, - # that snapshot controller should forward the volume snapshot requests. - # If not present, "maya-apiserver-service" will be used for lookup. 
- # This is supported for openebs snapshot controller version 0.6-RC1 onwards - - name: OPENEBS_MAYA_SERVICE_NAME - value: "{{ template "openebs.fullname" . }}-apiservice" - livenessProbe: - exec: - command: - - pgrep - - ".*controller" - initialDelaySeconds: {{ .Values.snapshotOperator.healthCheck.initialDelaySeconds }} - periodSeconds: {{ .Values.snapshotOperator.healthCheck.periodSeconds }} - - name: {{ template "openebs.name" . }}-snapshot-provisioner - image: "{{ .Values.snapshotOperator.provisioner.image }}:{{ .Values.snapshotOperator.provisioner.imageTag }}" - imagePullPolicy: {{ .Values.image.pullPolicy }} - env: - # OPENEBS_IO_K8S_MASTER enables openebs snapshot provisioner to connect to K8s - # based on this address. This is ignored if empty. - # This is supported for openebs snapshot provisioner version 0.6-RC1 onwards - #- name: OPENEBS_IO_K8S_MASTER - # value: "http://10.128.0.12:8080" - # OPENEBS_IO_KUBE_CONFIG enables openebs snapshot provisioner to connect to K8s - # based on this config. This is ignored if empty. - # This is supported for openebs snapshot provisioner version 0.6-RC1 onwards - #- name: OPENEBS_IO_KUBE_CONFIG - # value: "/home/ubuntu/.kube/config" - # OPENEBS_NAMESPACE is the namespace that this snapshot provisioner will - # lookup to find maya api service - - name: OPENEBS_NAMESPACE - value: "{{ .Release.Namespace }}" - - name: NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - # OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name, - # that snapshot provisioner should forward the volume snapshot PV requests. - # If not present, "maya-apiserver-service" will be used for lookup. - # This is supported for openebs snapshot provisioner version 0.6-RC1 onwards - - name: OPENEBS_MAYA_SERVICE_NAME - value: "{{ template "openebs.fullname" . }}-apiservice" - livenessProbe: - exec: - command: - - pgrep - - ".*provisioner" - initialDelaySeconds: {{ .Values.snapshotOperator.healthCheck.initialDelaySeconds }} - periodSeconds: {{ .Values.snapshotOperator.healthCheck.periodSeconds }} -{{- if .Values.snapshotOperator.nodeSelector }} - nodeSelector: -{{ toYaml .Values.snapshotOperator.nodeSelector | indent 8 }} -{{- end }} -{{- if .Values.snapshotOperator.tolerations }} - tolerations: -{{ toYaml .Values.snapshotOperator.tolerations | indent 8 }} -{{- end }} -{{- if .Values.snapshotOperator.affinity }} - affinity: -{{ toYaml .Values.snapshotOperator.affinity | indent 8 }} -{{- end }} -{{- end }} diff --git a/k8s/charts/openebs/templates/deployment-ndm-operator.yaml b/k8s/charts/openebs/templates/deployment-ndm-operator.yaml deleted file mode 100644 index 3432e0f52d..0000000000 --- a/k8s/charts/openebs/templates/deployment-ndm-operator.yaml +++ /dev/null @@ -1,69 +0,0 @@ -{{- if .Values.ndmOperator.enabled }} ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: {{ template "openebs.fullname" . }}-ndm-operator - labels: - app: {{ template "openebs.name" . }} - chart: {{ template "openebs.chart" . }} - release: {{ .Release.Name }} - heritage: {{ .Release.Service }} - component: ndm-operator - openebs.io/component-name: ndm-operator - openebs.io/version: {{ .Values.release.version }} - name: ndm-operator -spec: - replicas: {{ .Values.ndmOperator.replicas }} - strategy: - type: "Recreate" - rollingUpdate: null - selector: - matchLabels: - app: {{ template "openebs.name" . }} - release: {{ .Release.Name }} - template: - metadata: - labels: - app: {{ template "openebs.name" . 
}} - release: {{ .Release.Name }} - component: ndm-operator - name: ndm-operator - openebs.io/component-name: ndm-operator - openebs.io/version: {{ .Values.release.version }} - spec: - serviceAccountName: {{ template "openebs.serviceAccountName" . }} - containers: - - name: {{ template "openebs.fullname" . }}-ndm-operator - image: "{{ .Values.ndmOperator.image }}:{{ .Values.ndmOperator.imageTag }}" - imagePullPolicy: {{ .Values.image.pullPolicy }} - readinessProbe: - exec: - command: - - stat - - /tmp/operator-sdk-ready - initialDelaySeconds: {{ .Values.ndmOperator.readinessCheck.initialDelaySeconds }} - periodSeconds: {{ .Values.ndmOperator.readinessCheck.periodSeconds }} - failureThreshold: {{ .Values.ndmOperator.readinessCheck.failureThreshold }} - env: - - name: WATCH_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: OPERATOR_NAME - value: "node-disk-operator" - - name: CLEANUP_JOB_IMAGE - value: "{{ .Values.helper.image }}:{{ .Values.helper.imageTag }}" -{{- if .Values.ndmOperator.nodeSelector }} - nodeSelector: -{{ toYaml .Values.ndmOperator.nodeSelector | indent 8 }} -{{- end }} -{{- if .Values.ndmOperator.tolerations }} - tolerations: -{{ toYaml .Values.ndmOperator.tolerations | indent 8 }} -{{- end }} -{{- end }} diff --git a/k8s/charts/openebs/templates/psp-clusterrole.yaml b/k8s/charts/openebs/templates/psp-clusterrole.yaml deleted file mode 100644 index a6c4807dd6..0000000000 --- a/k8s/charts/openebs/templates/psp-clusterrole.yaml +++ /dev/null @@ -1,14 +0,0 @@ -{{- if and .Values.rbac.create .Values.rbac.pspEnabled }} -kind: ClusterRole -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: {{ template "openebs.fullname" . }}-psp - labels: - app: {{ template "openebs.name" . }} -rules: -- apiGroups: ['extensions'] - resources: ['podsecuritypolicies'] - verbs: ['use'] - resourceNames: - - {{ template "openebs.fullname" . }}-psp -{{- end }} diff --git a/k8s/charts/openebs/templates/psp-clusterrolebinding.yaml b/k8s/charts/openebs/templates/psp-clusterrolebinding.yaml deleted file mode 100644 index 5a4205877b..0000000000 --- a/k8s/charts/openebs/templates/psp-clusterrolebinding.yaml +++ /dev/null @@ -1,17 +0,0 @@ -{{- if and .Values.rbac.create .Values.rbac.pspEnabled }} -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: {{ template "openebs.fullname" . }}-psp - labels: - app: {{ template "openebs.name" . }} -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: {{ template "openebs.fullname" . }}-psp -subjects: - - kind: ServiceAccount - name: {{ template "openebs.serviceAccountName" . }} - namespace: {{ $.Release.Namespace }} -{{- end }} - diff --git a/k8s/charts/openebs/templates/psp.yaml b/k8s/charts/openebs/templates/psp.yaml deleted file mode 100644 index 0442f0e5d7..0000000000 --- a/k8s/charts/openebs/templates/psp.yaml +++ /dev/null @@ -1,28 +0,0 @@ -{{- if and .Values.rbac.create .Values.rbac.pspEnabled }} -apiVersion: policy/v1beta1 -kind: PodSecurityPolicy -metadata: - name: {{ template "openebs.fullname" . }}-psp - namespace: {{ $.Release.Namespace }} - labels: - app: {{ template "openebs.name" . 
}} -spec: - privileged: true - allowPrivilegeEscalation: true - allowedCapabilities: ['*'] - volumes: ['*'] - hostNetwork: true - hostPorts: - - min: 0 - max: 65535 - hostIPC: true - hostPID: true - runAsUser: - rule: 'RunAsAny' - seLinux: - rule: 'RunAsAny' - supplementalGroups: - rule: 'RunAsAny' - fsGroup: - rule: 'RunAsAny' -{{- end }} diff --git a/k8s/charts/openebs/templates/service-maya-apiserver.yaml b/k8s/charts/openebs/templates/service-maya-apiserver.yaml deleted file mode 100644 index d44bcb0f83..0000000000 --- a/k8s/charts/openebs/templates/service-maya-apiserver.yaml +++ /dev/null @@ -1,23 +0,0 @@ -{{- if .Values.apiserver.enabled }} -apiVersion: v1 -kind: Service -metadata: - name: {{ template "openebs.fullname" . }}-apiservice - labels: - app: {{ template "openebs.name" . }} - chart: {{ template "openebs.chart" . }} - release: {{ .Release.Name }} - heritage: {{ .Release.Service }} - openebs.io/component-name: maya-apiserver-svc -spec: - ports: - - name: api - port: {{ .Values.apiserver.ports.externalPort }} - targetPort: {{ .Values.apiserver.ports.internalPort }} - protocol: TCP - selector: - app: {{ template "openebs.name" . }} - release: {{ .Release.Name }} - component: apiserver - sessionAffinity: None -{{- end }} diff --git a/k8s/charts/openebs/templates/serviceaccount.yaml b/k8s/charts/openebs/templates/serviceaccount.yaml deleted file mode 100644 index 31a500455c..0000000000 --- a/k8s/charts/openebs/templates/serviceaccount.yaml +++ /dev/null @@ -1,11 +0,0 @@ -{{- if .Values.serviceAccount.create }} -apiVersion: v1 -kind: ServiceAccount -metadata: - name: {{ template "openebs.serviceAccountName" . }} - labels: - app: {{ template "openebs.name" . }} - chart: {{ template "openebs.chart" . }} - release: {{ .Release.Name }} - heritage: {{ .Release.Service }} -{{- end }} diff --git a/k8s/charts/openebs/values.yaml b/k8s/charts/openebs/values.yaml deleted file mode 100644 index d1734e0a35..0000000000 --- a/k8s/charts/openebs/values.yaml +++ /dev/null @@ -1,167 +0,0 @@ -# Default values for openebs. -# This is a YAML-formatted file. -# Declare variables to be passed into your templates. 
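-# -# Any value below can be overridden at install time. A minimal sketch, assuming -# Helm v2 syntax and that the chart is installed from this directory: -# helm install --name openebs --namespace openebs . --set rbac.pspEnabled=true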
- -rbac: - # Specifies whether RBAC resources should be created - create: true - pspEnabled: false - -serviceAccount: - create: true - name: - -release: - # "openebs.io/version" label for control plane components - version: "1.6.0" - -image: - pullPolicy: IfNotPresent - -apiserver: - enabled: true - image: "quay.io/openebs/m-apiserver" - imageTag: "1.6.0" - replicas: 1 - ports: - externalPort: 5656 - internalPort: 5656 - sparse: - enabled: "false" - nodeSelector: {} - tolerations: [] - affinity: {} - healthCheck: - initialDelaySeconds: 30 - periodSeconds: 60 - -defaultStorageConfig: - enabled: "true" - -persistentStoragePath: - # baseDir is the value used to store openebs related files in this base - # directory - baseDir: "/var/openebs" - -provisioner: - enabled: true - image: "quay.io/openebs/openebs-k8s-provisioner" - imageTag: "1.6.0" - replicas: 1 - nodeSelector: {} - tolerations: [] - affinity: {} - healthCheck: - initialDelaySeconds: 30 - periodSeconds: 60 - -localprovisioner: - enabled: true - image: "quay.io/openebs/provisioner-localpv" - imageTag: "1.6.0" - replicas: 1 - basePath: "/var/openebs/local" - nodeSelector: {} - tolerations: [] - affinity: {} - healthCheck: - initialDelaySeconds: 30 - periodSeconds: 60 - -snapshotOperator: - enabled: true - controller: - image: "quay.io/openebs/snapshot-controller" - imageTag: "1.6.0" - provisioner: - image: "quay.io/openebs/snapshot-provisioner" - imageTag: "1.6.0" - replicas: 1 - upgradeStrategy: "Recreate" - nodeSelector: {} - tolerations: [] - affinity: {} - healthCheck: - initialDelaySeconds: 30 - periodSeconds: 60 - -ndm: - enabled: true - image: "quay.io/openebs/node-disk-manager-amd64" - imageTag: "v0.4.6" - sparse: - path: "/var/openebs/sparse" - size: "10737418240" - count: "0" - filters: - enableOsDiskExcludeFilter: true - enableVendorFilter: true - excludeVendors: "CLOUDBYT,OpenEBS" - enablePathFilter: true - includePaths: "" - excludePaths: "loop,fd0,sr0,/dev/ram,/dev/dm-,/dev/md" - probes: - enableSeachest: false - nodeSelector: {} - tolerations: [] - healthCheck: - initialDelaySeconds: 30 - periodSeconds: 60 - -ndmOperator: - enabled: true - image: "quay.io/openebs/node-disk-operator-amd64" - imageTag: "v0.4.6" - replicas: 1 - upgradeStrategy: Recreate - nodeSelector: {} - tolerations: [] - readinessCheck: - initialDelaySeconds: 4 - periodSeconds: 10 - failureThreshold: 1 - -webhook: - enabled: true - image: "quay.io/openebs/admission-server" - imageTag: "1.6.0" - failurePolicy: Ignore - replicas: 1 - nodeSelector: {} - tolerations: [] - affinity: {} - -jiva: - image: "quay.io/openebs/jiva" - imageTag: "1.6.0" - replicas: 3 - defaultStoragePath: "/var/openebs" - -cstor: - pool: - image: "quay.io/openebs/cstor-pool" - imageTag: "1.6.0" - poolMgmt: - image: "quay.io/openebs/cstor-pool-mgmt" - imageTag: "1.6.0" - target: - image: "quay.io/openebs/cstor-istgt" - imageTag: "1.6.0" - volumeMgmt: - image: "quay.io/openebs/cstor-volume-mgmt" - imageTag: "1.6.0" - -helper: - image: "quay.io/openebs/linux-utils" - imageTag: "1.6.0" - -policies: - monitoring: - enabled: true - image: "quay.io/openebs/m-exporter" - imageTag: "1.6.0" - -analytics: - enabled: true - # Specify in hours the duration after which a ping event needs to be sent. 
- pingInterval: "24h" diff --git a/k8s/ci/maya/snapshot/cstor/busybox.yaml b/k8s/ci/maya/snapshot/cstor/busybox.yaml deleted file mode 100644 index fcaf1d78e2..0000000000 --- a/k8s/ci/maya/snapshot/cstor/busybox.yaml +++ /dev/null @@ -1,22 +0,0 @@ -apiVersion: v1 -kind: Pod -metadata: - name: busybox-cstor - namespace: default -spec: - containers: - - command: - - sh - - -c - - 'date > /mnt/store1/date.txt; hostname >> /mnt/store1/hostname.txt; sync; sleep 5; sync; tail -f /dev/null;' - image: busybox - imagePullPolicy: Always - name: busybox - volumeMounts: - - mountPath: /mnt/store1 - name: demo-vol1 - volumes: - - name: demo-vol1 - persistentVolumeClaim: - claimName: cstor-vol1-1r-claim ---- diff --git a/k8s/ci/maya/snapshot/cstor/busybox_clone.yaml b/k8s/ci/maya/snapshot/cstor/busybox_clone.yaml deleted file mode 100644 index 72c7f53e2a..0000000000 --- a/k8s/ci/maya/snapshot/cstor/busybox_clone.yaml +++ /dev/null @@ -1,21 +0,0 @@ -apiVersion: v1 -kind: Pod -metadata: - name: busybox-clone-cstor - namespace: default -spec: - containers: - - command: - - sh - - -c - - 'tail -f /dev/null' - image: busybox - imagePullPolicy: Always - name: busybox - volumeMounts: - - mountPath: /mnt/store2 - name: demo-snap-vol - volumes: - - name: demo-snap-vol - persistentVolumeClaim: - claimName: demo-snap-vol-claim-cstor diff --git a/k8s/ci/maya/snapshot/cstor/busybox_clone_ns.yaml b/k8s/ci/maya/snapshot/cstor/busybox_clone_ns.yaml deleted file mode 100644 index 931a03ba10..0000000000 --- a/k8s/ci/maya/snapshot/cstor/busybox_clone_ns.yaml +++ /dev/null @@ -1,21 +0,0 @@ -apiVersion: v1 -kind: Pod -metadata: - name: busybox-clone-cstor-ns - namespace: default -spec: - containers: - - command: - - sh - - -c - - 'tail -f /dev/null' - image: busybox - imagePullPolicy: Always - name: busybox - volumeMounts: - - mountPath: /mnt/store2 - name: demo-snap-vol - volumes: - - name: demo-snap-vol - persistentVolumeClaim: - claimName: demo-snap-vol-claim-cstor-ns diff --git a/k8s/ci/maya/snapshot/cstor/busybox_ns.yaml b/k8s/ci/maya/snapshot/cstor/busybox_ns.yaml deleted file mode 100644 index 879575578e..0000000000 --- a/k8s/ci/maya/snapshot/cstor/busybox_ns.yaml +++ /dev/null @@ -1,22 +0,0 @@ -apiVersion: v1 -kind: Pod -metadata: - name: busybox-cstor-ns - namespace: default -spec: - containers: - - command: - - sh - - -c - - 'date > /mnt/store1/date.txt; hostname >> /mnt/store1/hostname.txt; sync; sleep 5; sync; tail -f /dev/null;' - image: busybox - imagePullPolicy: Always - name: busybox - volumeMounts: - - mountPath: /mnt/store1 - name: demo-vol1 - volumes: - - name: demo-vol1 - persistentVolumeClaim: - claimName: openebs-pvc-in-custom-ns ---- diff --git a/k8s/ci/maya/snapshot/cstor/snapshot.yaml b/k8s/ci/maya/snapshot/cstor/snapshot.yaml deleted file mode 100644 index f9a9be4f7e..0000000000 --- a/k8s/ci/maya/snapshot/cstor/snapshot.yaml +++ /dev/null @@ -1,7 +0,0 @@ -apiVersion: volumesnapshot.external-storage.k8s.io/v1 -kind: VolumeSnapshot -metadata: - name: snapshot-demo-cstor - namespace: default -spec: - persistentVolumeClaimName: cstor-vol1-1r-claim \ No newline at end of file diff --git a/k8s/ci/maya/snapshot/cstor/snapshot_claim.yaml b/k8s/ci/maya/snapshot/cstor/snapshot_claim.yaml deleted file mode 100644 index f94dd5dbad..0000000000 --- a/k8s/ci/maya/snapshot/cstor/snapshot_claim.yaml +++ /dev/null @@ -1,13 +0,0 @@ -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: demo-snap-vol-claim-cstor - namespace: default - annotations: - snapshot.alpha.kubernetes.io/snapshot: snapshot-demo-cstor 
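-# The annotation above names the VolumeSnapshot that the openebs-snapshot-promoter -# storage class below will promote (clone) into this new volume.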
-spec: - storageClassName: openebs-snapshot-promoter - accessModes: [ "ReadWriteOnce" ] - resources: - requests: - storage: 4G diff --git a/k8s/ci/maya/snapshot/cstor/snapshot_claim_ns.yaml b/k8s/ci/maya/snapshot/cstor/snapshot_claim_ns.yaml deleted file mode 100644 index 89de795717..0000000000 --- a/k8s/ci/maya/snapshot/cstor/snapshot_claim_ns.yaml +++ /dev/null @@ -1,13 +0,0 @@ -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: demo-snap-vol-claim-cstor-ns - namespace: default - annotations: - snapshot.alpha.kubernetes.io/snapshot: snapshot-demo-cstor-ns -spec: - storageClassName: openebs-snapshot-promoter - accessModes: [ "ReadWriteOnce" ] - resources: - requests: - storage: 1G diff --git a/k8s/ci/maya/snapshot/cstor/snapshot_ns.yaml b/k8s/ci/maya/snapshot/cstor/snapshot_ns.yaml deleted file mode 100644 index 77db49e498..0000000000 --- a/k8s/ci/maya/snapshot/cstor/snapshot_ns.yaml +++ /dev/null @@ -1,7 +0,0 @@ -apiVersion: volumesnapshot.external-storage.k8s.io/v1 -kind: VolumeSnapshot -metadata: - name: snapshot-demo-cstor-ns - namespace: default -spec: - persistentVolumeClaimName: openebs-pvc-in-custom-ns \ No newline at end of file diff --git a/k8s/ci/maya/snapshot/jiva/busybox.yaml b/k8s/ci/maya/snapshot/jiva/busybox.yaml deleted file mode 100644 index 8a995455f2..0000000000 --- a/k8s/ci/maya/snapshot/jiva/busybox.yaml +++ /dev/null @@ -1,22 +0,0 @@ -apiVersion: v1 -kind: Pod -metadata: - name: busybox-jiva - namespace: default -spec: - containers: - - command: - - sh - - -c - - 'date > /mnt/store1/date.txt; hostname >> /mnt/store1/hostname.txt; sync; sleep 5; sync; tail -f /dev/null;' - image: busybox - imagePullPolicy: Always - name: busybox - volumeMounts: - - mountPath: /mnt/store1 - name: demo-vol1 - volumes: - - name: demo-vol1 - persistentVolumeClaim: - claimName: demo-vol1-claim ---- diff --git a/k8s/ci/maya/snapshot/jiva/busybox_clone.yaml b/k8s/ci/maya/snapshot/jiva/busybox_clone.yaml deleted file mode 100644 index 17d7bf0386..0000000000 --- a/k8s/ci/maya/snapshot/jiva/busybox_clone.yaml +++ /dev/null @@ -1,21 +0,0 @@ -apiVersion: v1 -kind: Pod -metadata: - name: busybox-clone-jiva - namespace: default -spec: - containers: - - command: - - sh - - -c - - 'tail -f /dev/null' - image: busybox - imagePullPolicy: Always - name: busybox - volumeMounts: - - mountPath: /mnt/store2 - name: demo-snap-vol - volumes: - - name: demo-snap-vol - persistentVolumeClaim: - claimName: demo-snap-vol-claim-jiva diff --git a/k8s/ci/maya/snapshot/jiva/snapshot.yaml b/k8s/ci/maya/snapshot/jiva/snapshot.yaml deleted file mode 100644 index 09f4afaa03..0000000000 --- a/k8s/ci/maya/snapshot/jiva/snapshot.yaml +++ /dev/null @@ -1,7 +0,0 @@ -apiVersion: volumesnapshot.external-storage.k8s.io/v1 -kind: VolumeSnapshot -metadata: - name: snapshot-demo-jiva - namespace: default -spec: - persistentVolumeClaimName: demo-vol1-claim diff --git a/k8s/ci/maya/snapshot/jiva/snapshot_claim.yaml b/k8s/ci/maya/snapshot/jiva/snapshot_claim.yaml deleted file mode 100644 index b0123b7904..0000000000 --- a/k8s/ci/maya/snapshot/jiva/snapshot_claim.yaml +++ /dev/null @@ -1,13 +0,0 @@ -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: demo-snap-vol-claim-jiva - namespace: default - annotations: - snapshot.alpha.kubernetes.io/snapshot: snapshot-demo-jiva -spec: - storageClassName: openebs-snapshot-promoter - accessModes: [ "ReadWriteOnce" ] - resources: - requests: - storage: 5Gi diff --git a/k8s/ci/maya/volume/cstor/pvc_app_ns.yaml b/k8s/ci/maya/volume/cstor/pvc_app_ns.yaml deleted file mode 
100644 index 2497a4e61c..0000000000 --- a/k8s/ci/maya/volume/cstor/pvc_app_ns.yaml +++ /dev/null @@ -1,10 +0,0 @@ -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: openebs-pvc-in-custom-ns -spec: - storageClassName: openebs-cstor-override-ns - accessModes: [ "ReadWriteOnce" ] - resources: - requests: - storage: 1G \ No newline at end of file diff --git a/k8s/ci/maya/volume/cstor/sc_app_ns.yaml b/k8s/ci/maya/volume/cstor/sc_app_ns.yaml deleted file mode 100644 index bb1e6f64ac..0000000000 --- a/k8s/ci/maya/volume/cstor/sc_app_ns.yaml +++ /dev/null @@ -1,14 +0,0 @@ -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - annotations: - cas.openebs.io/config: | - - name: StoragePoolClaim - value: "sparse-claim-auto" - - name: ReplicaCount - value: "1" - - name: PVCServiceAccountName - value: "user-service-account" - openebs.io/cas-type: cstor - name: openebs-cstor-override-ns -provisioner: openebs.io/provisioner-iscsi \ No newline at end of file diff --git a/k8s/ci/maya/volume/cstor/service-account.yaml b/k8s/ci/maya/volume/cstor/service-account.yaml deleted file mode 100644 index 03a527e460..0000000000 --- a/k8s/ci/maya/volume/cstor/service-account.yaml +++ /dev/null @@ -1,28 +0,0 @@ -apiVersion: v1 -kind: ServiceAccount -metadata: - name: user-service-account - namespace: default ---- -kind: ClusterRole -apiVersion: rbac.authorization.k8s.io/v1beta1 -metadata: - name: user-privilege-role -rules: -- apiGroups: ["*"] - resources: ["cstorvolumes", "events"] - verbs: ["*"] ---- -kind: ClusterRoleBinding -apiVersion: rbac.authorization.k8s.io/v1beta1 -metadata: - name: user-service-account - namespace: default -subjects: -- kind: ServiceAccount - name: user-service-account - namespace: default -roleRef: - kind: ClusterRole - name: user-privilege-role - apiGroup: rbac.authorization.k8s.io diff --git a/k8s/ci/overprovisioning/cstor-sc-overprovisioning-disabled.yaml b/k8s/ci/overprovisioning/cstor-sc-overprovisioning-disabled.yaml deleted file mode 100644 index 53be088bb5..0000000000 --- a/k8s/ci/overprovisioning/cstor-sc-overprovisioning-disabled.yaml +++ /dev/null @@ -1,13 +0,0 @@ ---- -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: cstor-sc-overprovisioning-disabled - annotations: - openebs.io/cas-type: cstor - cas.openebs.io/config: | - - name: ReplicaCount - value: "1" - - name: StoragePoolClaim - value: "overprovisioning-disabled-sparse-pool" -provisioner: openebs.io/provisioner-iscsi diff --git a/k8s/ci/overprovisioning/overprovisioning-disabled-sparse-pool.yaml b/k8s/ci/overprovisioning/overprovisioning-disabled-sparse-pool.yaml deleted file mode 100644 index b9848c3c06..0000000000 --- a/k8s/ci/overprovisioning/overprovisioning-disabled-sparse-pool.yaml +++ /dev/null @@ -1,14 +0,0 @@ ---- -apiVersion: openebs.io/v1alpha1 -kind: StoragePoolClaim -metadata: - name: overprovisioning-disabled-sparse-pool -spec: - name: overprovisioning-disabled-sparse-pool - type: sparse - maxPools: 1 - minPools: 1 - poolSpec: - poolType: striped - cacheFile: /var/openebs/pool1.cache - thickProvisioning: true diff --git a/k8s/ci/overprovisioning/patch.yaml b/k8s/ci/overprovisioning/patch.yaml deleted file mode 100644 index 37a18befed..0000000000 --- a/k8s/ci/overprovisioning/patch.yaml +++ /dev/null @@ -1,16 +0,0 @@ -# The openebs-operator YAML in the ci branch has SPARSE_FILE_COUNT set to 1 by default. - -# In order to run the overprovisioning test for SPC, an SPC needs to be provisioned -# and hence requires a free block device. 
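-# -# This patch can be applied with the same command the CI test script below uses: -# kubectl patch ds openebs-ndm -n openebs --patch "$(cat patch.yaml)"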
- -# This patch file sets SPARSE_FILE_COUNT to 2 from 1 so that the SPC provisioning -# can happen successfully -spec: - template: - spec: - containers: - - name: node-disk-manager - env: - - name: SPARSE_FILE_COUNT - value: "2" - diff --git a/k8s/ci/overprovisioning/pvc10g.yaml b/k8s/ci/overprovisioning/pvc10g.yaml deleted file mode 100644 index 466a529247..0000000000 --- a/k8s/ci/overprovisioning/pvc10g.yaml +++ /dev/null @@ -1,14 +0,0 @@ ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: test-pvc-10gigs - labels: - app: test-app-10gigs -spec: - storageClassName: cstor-sc-overprovisioning-disabled - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 10G diff --git a/k8s/ci/overprovisioning/pvc1g.yaml b/k8s/ci/overprovisioning/pvc1g.yaml deleted file mode 100644 index 703ba9bdda..0000000000 --- a/k8s/ci/overprovisioning/pvc1g.yaml +++ /dev/null @@ -1,14 +0,0 @@ ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: test-pvc-1gig - labels: - app: test-app-1gig -spec: - storageClassName: cstor-sc-overprovisioning-disabled - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 1G diff --git a/k8s/ci/test-script.sh b/k8s/ci/test-script.sh deleted file mode 100644 index 1072f68b54..0000000000 --- a/k8s/ci/test-script.sh +++ /dev/null @@ -1,701 +0,0 @@ -#!/usr/bin/env bash -# set -x - -wget https://raw.githubusercontent.com/openebs/openebs/${CI_BRANCH}/k8s/openebs-operator.yaml -IMAGE_ORG=${IMAGE_ORG:-openebs} -sed -i "s/quay.io\/openebs/${IMAGE_ORG}/g" openebs-operator.yaml -kubectl apply -f openebs-operator.yaml - -function waitForDeployment() { - DEPLOY=$1 - NS=$2 - - for i in $(seq 1 50) ; do - kubectl get deployment -n ${NS} ${DEPLOY} - replicas=$(kubectl get deployment -n ${NS} ${DEPLOY} -o json | jq ".status.readyReplicas") - if [ "$replicas" == "1" ]; then - break - else - echo "Waiting for ${DEPLOY} to be ready" - if [ ${DEPLOY} != "maya-apiserver" ] && [ ${DEPLOY} != "openebs-provisioner" ]; then - dumpMayaAPIServerLogs 10 - fi - sleep 10 - fi - done -} - -function checkApi() { - printf "\n" - echo $1 - printf "\n" - for i in `seq 1 100`; do - sleep 2 - responseCode=$($1) - echo "Response Code from ApiServer: $responseCode" - if [ $responseCode -ne 200 ]; then - echo "Retrying.... 
$i" - printf "Logs of api-server: \n\n" - kubectl logs --tail=20 $MAPIPOD -n openebs - printf "\n\n" - else - break - fi - done -} - -function dumpMayaAPIServerLogs() { - LC=$1 - MAPIPOD=$(kubectl get pods -o jsonpath='{.items[?(@.spec.containers[0].name=="maya-apiserver")].metadata.name}' -n openebs) - kubectl logs --tail=${LC} $MAPIPOD -n openebs - printf "\n\n" -} - -waitForDeployment maya-apiserver openebs -waitForDeployment openebs-provisioner openebs -waitForDeployment openebs-ndm-operator openebs -dumpMayaAPIServerLogs 200 - -kubectl get pods --all-namespaces - - -#Print the default cstor pools Created -kubectl get csp - -#Print the default StoragePoolClaim Created -kubectl get spc - -#Print the default StorageClasses Created -kubectl get sc - -sleep 10 -#echo "------------------ Deploy Pre-release features ---------------------------" -#kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/openebs-pre-release-features.yaml - -echo "------------------------ Create block device sparse storagepoolclaim --------------- " -# delete the storagepoolclaim created earlier and create new spc with min/max pool -# count 1 -kubectl delete spc --all -kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/sample-pv-yamls/spc-sparse-single.yaml -sleep 10 - -echo "--------------- Maya apiserver later logs -----------------------------" -dumpMayaAPIServerLogs 200 - -echo "---------------Run overprovisioning test case for SPC volumes -----------------------------" -# runVolumeOverProvisioningTest function deploys overprovisioning artifacts for test -# and verify the test case for success/failure -runVolumeOverProvisioningTest(){ -deployVolumeOverProvisioningArtifacts -checkForPVC1GStatus -checkForPVC10GStatus -} - -# deployVolumeOverProvisioningArtifacts deploys overprovisioning artifacts -deployVolumeOverProvisioningArtifacts(){ -echo "------------------------ Create block device sparse storagepoolclaim(overprovisioning-disabled-sparse-pool) with overprovisioning restriction on --------------- " -kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/ci/overprovisioning/overprovisioning-disabled-sparse-pool.yaml -echo "------------------------ Create storage class referring to spc overprovisioning-disabled-sparse-pool------------------------------------------------------------ " -kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/ci/overprovisioning/cstor-sc-overprovisioning-disabled.yaml - -wget https://raw.githubusercontent.com/openebs/openebs/master/k8s/ci/overprovisioning/patch.yaml - -echo "------------------------ Patch ndm daemonset to set SPARSE_FILE_COUNT to 2 --------------- " -kubectl patch ds openebs-ndm -n openebs --patch "$(cat patch.yaml)" - -sleep 10 - -echo "Create PVC with 1G capacity request " -kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/ci/overprovisioning/pvc1g.yaml -echo "Create PVC with 10G capacity request " -kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/ci/overprovisioning/pvc10g.yaml -} - -checkForPVC1GStatus(){ -PVC_NAME=$1 -PVC1G_MAX_RETRY=15 -for i in $(seq 1 $PVC1G_MAX_RETRY) ; do - PVC1GStatus=$(kubectl get pvc test-pvc-1gig --output="jsonpath={.status.phase}") - if [ "$PVC1GStatus" == "Bound" ]; then - echo "PVC test-pvc-1gig bound successfully" - break - else - echo "Waiting for PVC test-pvc-1gig to be bound" - kubectl get pvc test-pvc-1gig - if [ "$i" == "$PVC1G_MAX_RETRY" ] && [ "$PVC1GStatus" != "Bound" 
]; then - echo "PVC test-pvc-1gig NOT bound" - exit 1 - fi - fi - sleep 5 - done -} -checkForPVC10GStatus(){ -PVC10G_MAX_RETRY=5 -for i in $(seq 1 $PVC10G_MAX_RETRY) ; do - PVC10GStatus=$(kubectl get pvc test-pvc-10gigs --output="jsonpath={.status.phase}") - if [ "$PVC10GStatus" == "Bound" ]; then - echo "PVC test-pvc-10gigs should NOT bound successfully due to overprovisioning restriction but got bound" - kubectl get pvc test-pvc-10gigs - exit 1 - else - echo "Waiting for few iterations to check that PVC test-pvc-10gigs does not get bound after sometime" - kubectl get pvc test-pvc-10gigs - if [ "$i" == "$PVC10G_MAX_RETRY" ] && [ "$PVC1GStatus" != "Bound" ]; then - echo "PVC test-pvc-10gigs NOT bound and hence test case passed" - ### Deleteting the 10GB PVC - kubectl delete -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/ci/overprovisioning/pvc10g.yaml - fi - fi - sleep 5 - done -} - -echo "--------------- Create Cstor and Jiva PersistentVolume ------------------" -#kubectl create -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/sample-pv-yamls/pvc-jiva-sc-1r.yaml -kubectl create -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/demo/pvc-single-replica-jiva.yaml -sleep 10 -kubectl create -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/sample-pv-yamls/pvc-sparse-claim-cstor.yaml - -sleep 30 -echo "--------------------- List SC,PVC,PV and pods ---------------------------" -kubectl get sc,pvc,pv -kubectl get pods --all-namespaces - -kubectl get deploy -l openebs.io/controller=jiva-controller -JIVACTRL=$(kubectl get deploy -l openebs.io/controller=jiva-controller --no-headers | awk {'print $1'}) -for ctrl in `echo "${JIVACTRL[@]}" | tr "\n" " "`; -do -waitForDeployment ${ctrl} default -done - -kubectl get deploy -l openebs.io/replica=jiva-replica -JIVAREP=$(kubectl get deploy -l openebs.io/replica=jiva-replica --no-headers | awk {'print $1'}) -for rep in `echo "${JIVAREP[@]}" | tr "\n" " "`; -do -waitForDeployment ${rep} default -done - -kubectl get deploy -n openebs -l openebs.io/target=cstor-target -CSTORTARGET=$(kubectl get deploy -n openebs -l openebs.io/target=cstor-target --no-headers | awk {'print $1'}) -for target in `echo "${CSTORTARGET[@]}" | tr "\n" " "`; -do -waitForDeployment ${target} openebs -done - -echo "-------------------- Checking RO threshold limit for CSP ----------------" -cspList=( $(kubectl get csp -o jsonpath='{.items[?(@.metadata.labels.openebs\.io/storage-pool-claim=="sparse-claim-auto")].metadata.name}') ) -csp=${cspList[0]} -cspROThreshold=$(kubectl get csp -o jsonpath='{.items[?(@.metadata.labels.openebs\.io/storage-pool-claim=="sparse-claim-auto")].spec.poolSpec.roThresholdLimit}') -spcROThreshold=$(kubectl get spc sparse-claim-auto --output="jsonpath={.spec.poolSpec.roThresholdLimit}") -if [ $cspROThreshold != $spcROThreshold ]; then - echo "mismatch between SPC($spcROThreshold) and CSP($cspROThreshold) read-only threshold limit" - exit 1 -fi - - -echo "-------------- Verifying the existence of udev inside the cstor pool container--------" -cstor_pool_pods=$(kubectl get pods -n openebs -l app=cstor-pool -o jsonpath="{range .items[*]}{@.metadata.name}:{end}") -rc=$? 
-if [ $rc != 0 ]; then - echo "Error occurred while getting the cstor pool pod names; exit code: $rc" - exit $rc -fi - -for pool_pod in $(echo "$cstor_pool_pods" | tr ":" " "); do - - echo "=======================================" - echo "Running lsblk command inside the cstor pool pod: $pool_pod to get device names" - device_list=$(kubectl exec -it -n openebs "$pool_pod" -c cstor-pool -- lsblk --noheadings --list) - echo "Device list $device_list" - - ############### lsblk --noheadings --list ####################### - ## sdb 8:16 0 10G 0 disk ## - ## sdb9 8:25 0 8M 0 part ## - ## sdb1 8:17 0 10G 0 part ## - ## sda 8:0 0 100G 0 disk ## - ## sda14 8:14 0 4M 0 part ## - ## sda15 8:15 0 106M 0 part ## - ## sda1 8:1 0 99.9G 0 part /var/openebs/sparse ## - ################################################################# - - ## Fetching the device name from the above output (first row, first column) - device_name=$(echo "$device_list" | grep disk | awk 'NR==1{print $1}') - - echo "Verifying whether '$device_name' is initialized by udev or not" - output=$(kubectl exec -it -n openebs "$pool_pod" -c cstor-pool -- ./var/openebs/sparse/udev_checks/udev_check "$device_name") - rc=$? - echo "$output" - - ## If the exit code was not 0 then exit the process - if [ $rc != 0 ]; then - echo "Printing pool pod yaml output" - kubectl get pod "$pool_pod" -n openebs -o yaml - exit 1 - fi - echo "=======================================" - break -done - -echo "-------------------- Checking Finalizer Existence On CSP -------------------------" -## A retry count of 5 is good enough since the cstor-pool-mgmt container is already in the Running state -retry_cnt=5 -cspList=$(kubectl get csp -o jsonpath='{.items[?(@.metadata.labels.openebs\.io/storage-pool-claim=="sparse-claim-auto")].metadata.name}') -csp=${cspList[0]} -finalizer_found=0 -for i in $(seq 1 $retry_cnt) ; do - ## Below command will give [openebs.io/pool-protection,openebs.io/storage-pool-claim]. - finalizers=$(kubectl get csp $csp -o jsonpath='{.metadata.finalizers}') - ## The below expression removes the square brackets around the output so it is converted into - ## openebs.io/pool-protection,openebs.io/storage-pool-claim - finalizerList=$(echo "${finalizers:1:${#finalizers}-2}") - ## Iterate over all the finalizers and verify the existence of the pool-protection - ## finalizer. 
- for finalizer in $(echo "$finalizerList" | tr "," " "); do - if [ "$finalizer" == "openebs.io/pool-protection" ]; then - finalizer_found=1 - break - fi - done - if [ $finalizer_found -eq 1 ]; then - break - fi - sleep 1 -done - -if [ $finalizer_found -eq 0 ]; then - echo "Error: Finalizer: openebs.io/pool-protection not found on CSP: ${csp} finalizerList: ${finalizerList}" - exit 1 -fi -echo "---------------- Finalizer Exists On CSP -----------------------" - -echo "---------------Testing deployment in pvc namespace---------------" - -kubectl create -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/ci/maya/volume/cstor/service-account.yaml - -kubectl create -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/ci/maya/volume/cstor/sc_app_ns.yaml - -echo "---------------Creating the PVC---------------" -kubectl create -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/ci/maya/volume/cstor/pvc_app_ns.yaml - -sleep 10 - -kubectl get deploy -n openebs -l openebs.io/target=cstor-target -kubectl get cstorvolume -kubectl get service - -## To fix intermittent travis failure -sleep 20 -CSTORTARGET=$(kubectl get deploy -l openebs.io/persistent-volume-claim=openebs-pvc-in-custom-ns --no-headers | awk {'print $1'}) -echo $CSTORTARGET -waitForDeployment ${CSTORTARGET} default - -MAPI_SVC_ADDR=`kubectl get service -n openebs maya-apiserver-service -o json | grep clusterIP | awk -F\" '{print $4}'` -export MAPI_ADDR="http://${MAPI_SVC_ADDR}:5656" -export KUBERNETES_SERVICE_HOST="127.0.0.1" -export KUBECONFIG=$HOME/.kube/config - - -export MAPIPOD=$(kubectl get pods -o jsonpath='{.items[?(@.spec.containers[0].name=="maya-apiserver")].metadata.name}' -n openebs) -export CSTORVOL=$(kubectl get pv -o jsonpath='{.items[?(@.spec.claimRef.name=="cstor-vol1-1r-claim")].metadata.name}') -export CSTORVOLNS=$(kubectl get pv -o jsonpath='{.items[?(@.spec.claimRef.name=="openebs-pvc-in-custom-ns")].metadata.name}') -export JIVAVOL=$(kubectl get pv -o jsonpath='{.items[?(@.metadata.annotations.openebs\.io/cas-type=="jiva")].metadata.name}') -export POOLNAME=$(kubectl get csp -o jsonpath='{.items[?(@.metadata.labels.openebs\.io/storage-pool-claim=="sparse-claim-auto")].metadata.name}') - -echo "------------------Extracted pod and volume names---------------------" -echo MAPIPOD: $MAPIPOD -echo CSTORVOL: $CSTORVOL -echo CSTORVOLNS: $CSTORVOLNS -echo JIVAVOL: $JIVAVOL - -echo "++++++++++++++++ Waiting for MAYA APIs to get ready ++++++++++++++++++++++" - - -printf "\n\n" -echo "---------------- Checking Volume list API -------------------" - -checkApi "curl -X GET --write-out %{http_code} --silent --output /dev/null $MAPI_ADDR/latest/volumes/" - -printf "\n\n" - -echo "---------------- Checking Volume API for jiva volume -------------------" - -checkApi "curl -X GET --write-out %{http_code} --silent --output /dev/null $MAPI_ADDR/latest/volumes/$JIVAVOL -H namespace:default" - -printf "\n\n" - -echo "---------------- Checking Volume API for cstor volume -------------------" - -checkApi "curl -X GET --write-out %{http_code} --silent --output /dev/null $MAPI_ADDR/latest/volumes/$CSTORVOL -H namespace:openebs" - -printf "\n\n" - -echo "------------ Checking Volume STATS API for cstor volume -----------------" - -checkApi "curl -X GET --write-out %{http_code} --silent --output /dev/null $MAPI_ADDR/latest/volumes/stats/$CSTORVOL -H namespace:openebs" - -printf "\n\n" - -echo "------------ Checking Volume STATS API for jiva volume -----------------" - -checkApi "curl -X GET --write-out %{http_code} --silent --output /dev/null $MAPI_ADDR/latest/volumes/stats/$JIVAVOL -H namespace:default" - -printf "\n\n" - -echo "+++++++++++++++++++++ MAYA APIs are ready ++++++++++++++++++++++++++++++++" - -printf "\n\n" - - -echo "************** Snapshot and Clone related tests***************************" -# Create jiva volume for snapshot clone test (cstor volume already exists) -#kubectl create -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/demo/pvc-single-replica-jiva.yaml - -kubectl get pods --all-namespaces -kubectl get sc - -sleep 30 - -echo "******************* Describe disks **************************" -kubectl describe disks - -echo "******************* Describe spc,sp,csp **************************" -kubectl describe spc,sp,csp - -echo "******************* List all pods **************************" -kubectl get po --all-namespaces - -echo "******************* List PVC,PV and pods **************************" -kubectl get pvc,pv - -# Create the application -echo "Creating busybox-jiva and busybox-cstor application pods" -kubectl create -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/ci/maya/snapshot/jiva/busybox.yaml -kubectl create -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/ci/maya/snapshot/cstor/busybox.yaml -kubectl create -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/ci/maya/snapshot/cstor/busybox_ns.yaml - -for i in $(seq 1 100) ; do - phaseJiva=$(kubectl get pods busybox-jiva --output="jsonpath={.status.phase}") - phaseCstor=$(kubectl get pods busybox-cstor --output="jsonpath={.status.phase}") - phaseCstorNs=$(kubectl get pods busybox-cstor-ns --output="jsonpath={.status.phase}") - if [ "$phaseJiva" == "Running" ] && [ "$phaseCstor" == "Running" ] && [ "$phaseCstorNs" == "Running" ]; then - break - else - echo "busybox-jiva pod is in:" $phaseJiva - echo "busybox-cstor pod is in:" $phaseCstor - echo "busybox-cstor-ns pod is in:" $phaseCstorNs - - if [ "$phaseJiva" != "Running" ]; then - kubectl describe pods busybox-jiva - fi - if [ "$phaseCstor" != "Running" ]; then - kubectl describe pods busybox-cstor - fi - if [ "$phaseCstorNs" != "Running" ]; then - kubectl describe pods busybox-cstor-ns - fi - sleep 10 - fi -done - -dumpMayaAPIServerLogs 100 - -echo "********************Creating volume snapshot*****************************" -kubectl create -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/ci/maya/snapshot/jiva/snapshot.yaml -kubectl create -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/ci/maya/snapshot/cstor/snapshot.yaml -kubectl create -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/ci/maya/snapshot/cstor/snapshot_ns.yaml -kubectl logs --tail=20 -n openebs deployment/openebs-snapshot-operator -c snapshot-controller - -# It might take some time for the cstor snapshot to get created. Wait for the snapshot to get created -for i in $(seq 1 100) ; do - kubectl get volumesnapshotdata - count=$(kubectl get volumesnapshotdata | wc -l) - # count should be 4: three volumesnapshotdata entries plus one header line - if [ "$count" == "4" ]; then - break - else - echo "snapshot(s) not created yet" - kubectl get volumesnapshot,volumesnapshotdata - sleep 10 - fi -done - -kubectl logs --tail=20 -n openebs deployment/openebs-snapshot-operator -c snapshot-controller - -# Promote/restore snapshot as persistent volume -sleep 30 -echo "*****************Promoting snapshot as new PVC***************************" -kubectl create -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/ci/maya/snapshot/jiva/snapshot_claim.yaml -kubectl logs --tail=20 -n openebs deployment/openebs-snapshot-operator -c snapshot-provisioner -kubectl create -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/ci/maya/snapshot/cstor/snapshot_claim.yaml -kubectl logs --tail=20 -n openebs deployment/openebs-snapshot-operator -c snapshot-provisioner -kubectl create -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/ci/maya/snapshot/cstor/snapshot_claim_ns.yaml -kubectl logs --tail=20 -n openebs deployment/openebs-snapshot-operator -c snapshot-provisioner - -sleep 30 -# get the clone replica pod IP to make a curl request for the clone status -cloned_replica_ip=$(kubectl get pods -owide -l openebs.io/persistent-volume-claim=demo-snap-vol-claim-jiva --no-headers | grep -v ctrl | awk {'print $6'}) -echo "***************** checking clone status *********************************" -for i in $(seq 1 5) ; do - clonestatus=`curl http://$cloned_replica_ip:9502/v1/replicas/1 | jq '.clonestatus' | tr -d '"'` - if [ "$clonestatus" == "completed" ]; then - break - else - echo "Clone process is not completed: ${clonestatus}" - sleep 60 - fi -done - -# Clone is in Alpha state and kind of flaky sometimes; comment out this integration test below for the time being -# until it is stable in the backend storage engine -echo "***************Creating busybox-clone-jiva application pod********************" -kubectl create -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/ci/maya/snapshot/jiva/busybox_clone.yaml -kubectl create -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/ci/maya/snapshot/cstor/busybox_clone.yaml -kubectl create -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/ci/maya/snapshot/cstor/busybox_clone_ns.yaml - - -kubectl get pods --all-namespaces -kubectl get pvc --all-namespaces - -for i in $(seq 1 15) ; do - phaseJiva=$(kubectl get pods busybox-clone-jiva --output="jsonpath={.status.phase}") - phaseCstor=$(kubectl get pods busybox-clone-cstor --output="jsonpath={.status.phase}") - phaseCstorNs=$(kubectl get pods busybox-clone-cstor-ns --output="jsonpath={.status.phase}") - if [ "$phaseJiva" == "Running" ] && [ "$phaseCstor" == "Running" ] && [ "$phaseCstorNs" == "Running" ]; then - break - else - echo "busybox-clone-jiva pod is in:" $phaseJiva - echo "busybox-clone-cstor pod is in:" $phaseCstor - echo "busybox-clone-cstor-ns pod is in:" $phaseCstorNs - - if [ "$phaseJiva" != "Running" ]; then - kubectl describe pods busybox-clone-jiva - fi - if [ "$phaseCstor" != "Running" ]; then - kubectl describe pods busybox-clone-cstor - fi - if [ "$phaseCstorNs" != "Running" ]; then - kubectl describe pods busybox-clone-cstor-ns - fi - sleep 30 - fi -done - - -echo "********************** cvr status *************************" -kubectl get cvr -n openebs -o yaml - -dumpMayaAPIServerLogs 100 - 
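-# The md5sum comparison below verifies that the data written before the snapshot -# was taken is intact on each cloned volume.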
-kubectl get pods -kubectl get pvc - -echo "*************Verifying data validity and Md5Sum Check********************" -hashjiva1=$(kubectl exec busybox-jiva -- md5sum /mnt/store1/date.txt | awk '{print $1}') -hashjiva2=$(kubectl exec busybox-clone-jiva -- md5sum /mnt/store2/date.txt | awk '{print $1}') - -hashcstor1=$(kubectl exec busybox-cstor -- md5sum /mnt/store1/date.txt | awk '{print $1}') -hashcstor2=$(kubectl exec busybox-clone-cstor -- md5sum /mnt/store2/date.txt | awk '{print $1}') - -hashcstorns1=$(kubectl exec busybox-cstor-ns -- md5sum /mnt/store1/date.txt | awk '{print $1}') -hashcstorns2=$(kubectl exec busybox-clone-cstor-ns -- md5sum /mnt/store2/date.txt | awk '{print $1}') - -echo "busybox jiva hash: $hashjiva1" -echo "busybox-clone-jiva hash: $hashjiva2" -echo "busybox cstor hash: $hashcstor1" -echo "busybox-clone-cstor hash: $hashcstor2" -echo "busybox cstor ns hash: $hashcstorns1" -echo "busybox-clone-cstor-ns hash: $hashcstorns2" - -if [ "$hashjiva1" != "" ] && [ "$hashcstor1" != "" ] && [ "$hashjiva1" == "$hashjiva2" ] && [ "$hashcstor1" == "$hashcstor2" ] && [ "$hashcstorns1" == "$hashcstorns2" ]; then - echo "Md5Sum Check: PASSED" -else - echo "Md5Sum Check: FAILED"; exit 1 -fi - -testPoolReadOnly() { - for i in 1 2 3 ; do - kubectl exec -it busybox-cstor -- sh -c "dd if=/dev/urandom of=/mnt/store1/$RANDOM count=10000 bs=4k && sync" - done - kubectl get csp - - # update csp readonly threshold to 1% - kubectl patch csp ${csp} --type='json' -p='[{"op":"replace", "path":"/spec/poolSpec/roThresholdLimit", "value":1}]' - # default sync period for csp is 30 second - sleep 60 - - readOnly=$(kubectl get csp ${csp} -o jsonpath='{.status.readOnly}') - if [ $readOnly == "false" ]; then - echo "CSP should be readonly" - exit 2 - fi - - cspPod=`kubectl get pods -o jsonpath="{.items[?(@.metadata.labels.openebs\.io/cstor-pool=='$csp')].metadata.name}" -n openebs` - readOnly=$(kubectl exec -it ${cspPod} -n openebs -ccstor-pool -- zpool get io.openebs:readonly -Hp -ovalue) - if [ $readOnly == "off" ]; then - echo "Pool should be readonly" - exit 2 - fi - - # update csp readonly threshold to 90% - kubectl patch csp ${csp} --type='json' -p='[{"op":"replace", "path":"/spec/poolSpec/roThresholdLimit", "value":90}]' - # default sync period for csp is 30 second - sleep 60 - - readOnly=$(kubectl get csp ${csp} -o jsonpath='{.status.readOnly}') - if [ $readOnly == "true" ]; then - echo "CSP should not be readonly" - exit 2 - fi - - readOnly=$(kubectl exec -it ${cspPod} -n openebs -ccstor-pool -- zpool get io.openebs:readonly -Hp -ovalue) - if [ $readOnly == "on" ]; then - echo "Pool should not be readonly" - exit 2 - fi -} -# check pool read threshold limit -testPoolReadOnly - -## NOTE: Pass arguments to this function with "" -## verify_snapshot_list_on_cvr "" "" "" "" -function verify_snapshot_list_on_cvr() { - cvr_name=$1 - cvr_namespace=$2 - desired_snapshot_count=$3 - desired_snapshot_list=$4 - is_snapshot_count_matched=false - - ### Trying for 90 seconds which means max of 3 updates can happen because default RESYNC_INTERVAL is 30 seconds - retry_cnt=18 - for i in $(seq 1 $retry_cnt) ; do - ## Below Command is used to get the only snapshot names using jq - ## output will be istgt_snap1 istgt_snap2 istgt_snap3 - got_snapshot_list=$(kubectl get cvr -n ${cvr_namespace} ${cvr_name} -o json | jq -r '.status.snapshots | keys[] as $k| "\($k)"') - got_snapshot_count=$(echo ${got_snapshot_list} | wc -w) - if [ $got_snapshot_count -eq $desired_snapshot_count ]; then - 
is_snapshot_count_matched=true - break - fi - - echo "Waiting for snapshots to exist on CVR: ${cvr_name} expected snapshot count: ${desired_snapshot_count} got snapshot count: ${got_snapshot_count}" - sleep 5 - done - - ## Verify snapshot count - if [ "$is_snapshot_count_matched" == false ]; then - echo "Snapshot list was not updated on CVR: ${cvr_name} expected snapshot count: ${desired_snapshot_count} current snapshot count: ${got_snapshot_count}" - exit 1 - fi - - ## Verify Snapshot names - for snap_name in `echo ${got_snapshot_list}`; do - local is_snap_exist=false - for desired_snap_name in `echo ${desired_snapshot_list}`; do - if [ ${snap_name} == ${desired_snap_name} ]; then - is_snap_exist=true - break - fi - done - if [ "$is_snap_exist" == false ]; then - echo "Snapshot $snap_name exists in CVR ${cvr_name} but doesn't exist in desired snapshot list: ${desired_snapshot_list}" - exit 1 - fi - done -} - -## retry_command_execution executes the given command, retrying until it succeeds -function retry_command_execution() { - command=$1 - retry_count=5 - success=0 - - ## Retrying 5 times to execute the command is good enough - for i in $(seq 1 $retry_count) ; do - $command - if [ $? == 0 ]; then - success=1 - break - fi - sleep 5 - done - - if [ $success == 0 ]; then - echo "Failed to execute the command $command" - exit 1 - fi - echo "Command $command executed successfully" -} - - -echo "===========Testing Snapshots On CVR By Enabling Feature Gate On CStor Pools =============" -## Get the deployment name of CSP -pool_dep_list=( $(kubectl get deployment -l app=cstor-pool -o jsonpath='{.items[?(@.metadata.labels.openebs\.io/storage-pool-claim=="sparse-claim-auto")].metadata.name}' -n openebs)) -pool_dep=${pool_dep_list[0]} - -## Enable the feature gates by patching the deployment with the corresponding feature gates -## NOTE: If the deployment is already patched then the exit code will be 0 -kubectl patch deployment --namespace openebs ${pool_dep} --patch='{"spec": {"template": {"spec": {"containers": [{"name": "cstor-pool-mgmt","env": [{"name": "REBUILD_ESTIMATES", "value": "true"}]}]}}}}' -if [ $? != 0 ]; then - echo "Failed to patch ${pool_dep} deployment to enable the REBUILD_ESTIMATES feature gate" - exit 1 -fi - -## If the deployment patched successfully, check the rollout status -rollout_status=$(kubectl rollout status --namespace openebs deployment/$pool_dep) -rc=$?; if [[ ($rc -ne 0) || ! (${rollout_status} =~ "successfully rolled out") ]]; - then echo "ERROR: Failed to rollout status for $pool_dep error: $rc"; exit; fi - -## As part of the test we already created a snapshot for the volume; here we fetch the volumesnapshotdata name from the existing snapshot -volume_snapshot_data_name=$(kubectl get volumesnapshot snapshot-demo-cstor -ojsonpath='{.spec.snapshotDataName}') -if [ $? != 0 ]; then - echo "Failed to get volumesnapshotdata name for volumesnapshot: snapshot-demo-cstor" - exit 1 -fi - -## Get Snapshot name from volume snapshot data -k8s_snapshot_name=$(kubectl get volumesnapshotdata ${volume_snapshot_data_name} -ojsonpath='{.spec.openebsVolume.snapshotId}') -if [ $? != 0 ]; then - echo "Failed to get snapshot name for volumesnapshot data: ${volume_snapshot_data_name}" - exit 1 -fi - -pv_name=$(kubectl get pvc cstor-vol1-1r-claim -o jsonpath='{.spec.volumeName}') -if [ $? != 0 ]; then - echo "Failed to get PV name for PVC: cstor-vol1-1r-claim" - exit 1 -fi - -cvr_list=$(kubectl get cvr -n openebs -l openebs.io/persistent-volume=${pv_name} -o jsonpath='{.items[*].metadata.name}') -if [ $? 
!= 0 ]; then - echo "Failed to list CVRs of PV: ${pv_name}" - exit 1 -fi -cvr_name=${cvr_list[0]} - -verify_snapshot_list_on_cvr "${cvr_name}" "openebs" "1" "${k8s_snapshot_name}" - -cstor_target_pod_list=$(kubectl get pod -n openebs -l openebs.io/persistent-volume=${pv_name},openebs.io/target=cstor-target -o jsonpath='{.items[*].metadata.name}') -if [ $? != 0 ]; then - echo "Failed to list cStor target pods of PV: ${pv_name}" - exit 1 -fi -cstor_target_pod_name=${cstor_target_pod_list[0]} - -snapshot_command=$(echo "kubectl exec -n openebs ${cstor_target_pod_name} -c cstor-istgt -- istgtcontrol snapcreate ${pv_name} istgt_snap1") - -retry_command_execution "$snapshot_command" - -verify_snapshot_list_on_cvr "${cvr_name}" "openebs" "2" "${k8s_snapshot_name} istgt_snap1" - -snapshot_command=$(echo "kubectl exec -n openebs ${cstor_target_pod_name} -c cstor-istgt -- istgtcontrol snapdestroy ${pv_name} istgt_snap1") - -retry_command_execution "$snapshot_command" - -verify_snapshot_list_on_cvr "${cvr_name}" "openebs" "1" "${k8s_snapshot_name}" - -echo "===========Testing Snapshots On CVR By Enabling Feature Gate On CStor Pools Is Done Successfully =============" - -## Running OverProvisioning after all the tests -runVolumeOverProvisioningTest - diff --git a/k8s/demo/busybox/cstor-disk-pod.yaml b/k8s/demo/busybox/cstor-disk-pod.yaml deleted file mode 100644 index 010448eddd..0000000000 --- a/k8s/demo/busybox/cstor-disk-pod.yaml +++ /dev/null @@ -1,64 +0,0 @@ -# Copyright © 2019 The OpenEBS Authors -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# -# Example for launching a busybox pod using cstor pool. -# -# The cstor pool and storage class used in this example can -# be setup using YAML present in: -# https://github.com/openebs/openebs/blob/master/k8s/sample-pv-yamls/spc-cstor-disk-type.yaml -# -# Prior to running this YAML, verify that the storage class -# mentioned below and the corresponding pool is available. -# -# You can use the below commands to verify. -# -# `kubectl describe sc openebs-cstor-disk` -# -# Note the StoragePoolClaim in the above output. 
Say it is cstor-disk -# `kubectl describe spc cstor-disk` -# -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: pvc-cd -spec: - accessModes: - - ReadWriteOnce - storageClassName: openebs-cstor-disk - resources: - requests: - storage: 2Gi ---- -apiVersion: v1 -kind: Pod -metadata: - name: busybox-cd - namespace: default -spec: - containers: - - command: - - sh - - -c - - 'date > /mnt/store1/date.txt; hostname >> /mnt/store1/hostname.txt; sync; sleep 5; sync; tail -f /dev/null;' - image: busybox - imagePullPolicy: Always - name: busybox - volumeMounts: - - mountPath: /mnt/store1 - name: demo-vol1 - volumes: - - name: demo-vol1 - persistentVolumeClaim: - claimName: pvc-cd ---- diff --git a/k8s/demo/busybox/jiva-default-pod.yaml b/k8s/demo/busybox/jiva-default-pod.yaml deleted file mode 100644 index 69883b65d5..0000000000 --- a/k8s/demo/busybox/jiva-default-pod.yaml +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright © 2019 The OpenEBS Authors -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# -# Example for launching a busybox pod using jiva. -# -# This example uses the default jiva storage class installed -# by OpenEBS. -# -# Prior to running this YAML, verify that the storage class -# mentioned below is available. -# -# You can use the below commands to verify. -# -# `kubectl describe sc openebs-jiva-default` -# -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: pvc-jd -spec: - accessModes: - - ReadWriteOnce - storageClassName: openebs-jiva-default - resources: - requests: - storage: 2Gi ---- -apiVersion: v1 -kind: Pod -metadata: - name: busybox-jd - namespace: default -spec: - containers: - - command: - - sh - - -c - - 'date > /mnt/store1/date.txt; hostname >> /mnt/store1/hostname.txt; sync; sleep 5; sync; tail -f /dev/null;' - image: busybox - imagePullPolicy: Always - name: busybox - volumeMounts: - - mountPath: /mnt/store1 - name: demo-vol1 - volumes: - - name: demo-vol1 - persistentVolumeClaim: - claimName: pvc-jd ---- diff --git a/k8s/demo/busybox/localpv-device-pod.yaml b/k8s/demo/busybox/localpv-device-pod.yaml deleted file mode 100644 index 6c748d909f..0000000000 --- a/k8s/demo/busybox/localpv-device-pod.yaml +++ /dev/null @@ -1,61 +0,0 @@ -# Copyright © 2019 The OpenEBS Authors -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# -# Example for launching a busybox pod using localpv with device. -# -# This example uses the default localpv with hostdevice -# storage class installed by OpenEBS. 
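-# -# To see the block devices that NDM has discovered (one of which this claim -# will consume), you can also run: -# `kubectl get bd -n openebs`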
-# -# Prior to running this YAML, verify that: -# - the storage class mentioned below is available. -# - storage devices matching the claim are attached to the node. -# -# You can use the below commands to verify. -# -# `kubectl describe sc openebs-device` -# -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: pvc-hd -spec: - accessModes: - - ReadWriteOnce - storageClassName: openebs-device - resources: - requests: - storage: 2Gi ---- -apiVersion: v1 -kind: Pod -metadata: - name: busybox-hd - namespace: default -spec: - containers: - - command: - - sh - - -c - - 'date > /mnt/store1/date.txt; hostname >> /mnt/store1/hostname.txt; sync; sleep 5; sync; tail -f /dev/null;' - image: busybox - imagePullPolicy: Always - name: busybox - volumeMounts: - - mountPath: /mnt/store1 - name: demo-vol1 - volumes: - - name: demo-vol1 - persistentVolumeClaim: - claimName: pvc-hd ---- diff --git a/k8s/demo/busybox/localpv-hostpath-pod.yaml b/k8s/demo/busybox/localpv-hostpath-pod.yaml deleted file mode 100644 index c2e76e9106..0000000000 --- a/k8s/demo/busybox/localpv-hostpath-pod.yaml +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright © 2019 The OpenEBS Authors -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# -# Example for launching a busybox pod using localpv with hostpath. -# -# This example uses the default localpv with hostpath -# storage class installed by OpenEBS. -# -# Prior to running this YAML, verify that the storage class -# mentioned below is available. -# -# You can use the below commands to verify. -# -# `kubectl describe sc openebs-hostpath` -# -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: pvc-hp -spec: - accessModes: - - ReadWriteOnce - storageClassName: openebs-hostpath - resources: - requests: - storage: 2Gi ---- -apiVersion: v1 -kind: Pod -metadata: - name: busybox-hp - namespace: default -spec: - containers: - - command: - - sh - - -c - - 'date > /mnt/store1/date.txt; hostname >> /mnt/store1/hostname.txt; sync; sleep 5; sync; tail -f /dev/null;' - image: busybox - imagePullPolicy: Always - name: busybox - volumeMounts: - - mountPath: /mnt/store1 - name: demo-vol1 - volumes: - - name: demo-vol1 - persistentVolumeClaim: - claimName: pvc-hp ---- diff --git a/k8s/demo/cassandra/README.md b/k8s/demo/cassandra/README.md deleted file mode 100644 index ff023348bb..0000000000 --- a/k8s/demo/cassandra/README.md +++ /dev/null @@ -1,204 +0,0 @@ -# Running Cassandra with OpenEBS - -This tutorial provides detailed instructions to run a Kudo operator based Cassandra StatefulSet with OpenEBS storage and perform some simple database operations to verify the successful deployment and benchmark its performance. - -## Introduction - -Apache Cassandra is a free and open-source distributed NoSQL database management system designed to handle large amounts of data across nodes, providing high availability with no single point of failure. It uses asynchronous masterless replication allowing low latency operations for all clients. 
- -OpenEBS is the most popular Open Source Container Attached Storage solution available for Kubernetes, favored by many organizations for its simplicity, ease of management, and highly flexible deployment options that meet the storage needs of any given stateful application. - -Depending on the performance and high availability requirements of Cassandra, you can choose to run Cassandra with the following deployment options: - -For optimal performance, deploy Cassandra with OpenEBS Local PV. If you would like to use storage layer capabilities like high availability, snapshots, incremental backups and restore, and so forth, you can select OpenEBS cStor. - -Whether you use OpenEBS Local PV or cStor, you can set up the Kubernetes cluster with all its nodes in a single availability zone/data center or spread across multiple zones/data centers. - - -## Configuration workflow - -1. Install OpenEBS -2. Select OpenEBS storage engine -3. Configure OpenEBS LocalPV StorageClass -4. Install Kudo operator -5. Install Kudo based Cassandra -6. Verify Cassandra is up and running -7. Testing Cassandra performance on OpenEBS - -### Install OpenEBS - -If OpenEBS is not installed in your K8s cluster, this can be done from [here](https://docs.openebs.io/docs/next/overview.html). If OpenEBS is already installed, go to the next step. - -### Select OpenEBS storage engine - -A storage engine is the data plane component of the IO path of a Persistent Volume. In CAS architecture, users can choose different data planes for different application workloads based on a configuration policy. OpenEBS provides different types of storage engines; choose the engine that suits your application requirements and the storage available on your Kubernetes nodes. More information can be found [here](https://docs.openebs.io/docs/next/overview.html#openebs-storage-engines). - -### Configure OpenEBS LocalPV StorageClass - -In this tutorial, OpenEBS LocalPV device is used as the storage engine for deploying Kudo Cassandra. There are two ways to use OpenEBS LocalPV. - -- `openebs-hostpath` - This option creates Kubernetes Persistent Volumes that store the data in an OS host path directory at: /var/openebs//. Select this option if you don’t have any additional block devices attached to the Kubernetes nodes. If you would like to customize the directory where data will be saved, create a new OpenEBS LocalPV storage class using these [instructions](https://docs.openebs.io/docs/next/uglocalpv-hostpath.html#create-storageclass). - -- `openebs-device` - This option creates Kubernetes Local PVs using the block devices attached to the node. Select this option when you want to dedicate a complete block device on a node to a Cassandra node. You can customize which devices will be discovered and managed by OpenEBS using the instructions [here](https://docs.openebs.io/docs/next/ugndm.html). - -### Install Kudo operator - -- Set up the environment for installing the Kudo operator using the following steps. - - ``` - $ export GOROOT=/usr/local/go - $ export GOPATH=$HOME/gopath - $ export PATH=$GOPATH/bin:$GOROOT/bin:$PATH - ``` -- Choose the Kudo version. The latest version can be found [here](https://github.com/kudobuilder/kudo/releases). In the following command, the selected Kudo version is v0.14.0. 
- ``` - VERSION=0.14.0 - OS=$(uname | tr '[:upper:]' '[:lower:]') - ARCH=$(uname -m) - wget -O kubectl-kudo https://github.com/kudobuilder/kudo/releases/download/v${VERSION}/kubectl-kudo_${VERSION}_${OS}_${ARCH} - ``` -- Change the permission - ``` - $ chmod +x kubectl-kudo - $ sudo mv kubectl-kudo /usr/local/bin/kubectl-kudo - ``` -- Install Cert-manager - - Before installing the KUDO operator, the cert-manager must be already installed in your cluster. If not, install the cert-manager. The instruction can be found from [here](https://cert-manager.io/docs/installation/kubernetes/#installing-with-regular-manifests). Since our K8s version is v1.16.0, we have installed cert-manager using the following command. - ``` - $ kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.15.1/cert-manager.yaml - ``` -- Install Kudo operator using a specified version. In the following command, the selected version is v0.14.0. - ``` - $ kubectl-kudo init --version 0.14.0 - ``` - Verify Kudo controller pods status - ``` - $ kubectl get pod -n kudo-system - - NAME READY STATUS RESTARTS AGE - kudo-controller-manager-0 1/1 Running 0 2m40s - ``` - -### Install Kudo operator based Cassandra - -Install Kudo based Cassandra using OpenEBS storage engine. In this example, the storage class used is `openebs-device`. Before deploying Cassandra, ensure that there are enough block devices that can be used to consume Cassandra application, by running `kubectl get bd -n openebs`. - -``` -$ export instance_name=cassandra-openebs -$ export namespace_name=cassandra -$ kubectl create ns cassandra -$ kubectl kudo install cassandra --namespace=$namespace_name --instance $instance_name -p NODE_STORAGE_CLASS=openebs-device -``` - -### Verify Cassandra is up and running - -- Get the Cassandra Pods, StatefulSet, Service and PVC details. It should show that StatefulSet is deployed with 3 Cassandra pods in running state and a headless service is configured. - ``` - $kubectl get pod,service,sts,pvc -n cassandra - - NAME READY STATUS RESTARTS AGE - cassandra-openebs-node-0 2/2 Running 0 4m - cassandra-openebs-node-1 2/2 Running 0 3m2s - cassandra-openebs-node-2 2/2 Running 0 3m24s - - NAME READY AGE - statefulset.apps/cassandra 3/3 6m35s - - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - service/cassandra-openebs-svc ClusterIP None 7000/TCP,7001/TCP,7199/TCP,9042/TCP,9160/TCP 6m35s - - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - var-lib-cassandra-cassandra-openebs-node-0 Bound pvc-213f2cfb-231f-4f14-be93-69c3d1c6d5d7 20Gi RWO openebs-device 20m - var-lib-cassandra-cassandra-openebs-node-1 Bound pvc-059bf24b-3546-43f3-aa01-3a6bea640ffd 20Gi RWO openebs-device 19m - var-lib-cassandra-cassandra-openebs-node-2 Bound pvc-82367756-7a19-4f7f-9e35-65e7696f3b86 20Gi RWO openebs-device 18m - ``` -- Login to one of the Cassandra pod to verify the Cassandra cluster health status using the following command. - ``` - $ kubectl exec -it cassandra-openebs-node-0 bash -n cassandra - - cassandra@cassandra-openebs-node-0:/$ nodetool status - Datacenter: datacenter1 - ======================= - Status=Up/Down |/ State=Normal/Leaving/Joining/Moving -- Address Load Tokens Owns (effective) Host ID Rack - UN 192.168.30.24 94.21 KiB 256 63.0% 73c54856-f045-48db-b0db-e6a751d005f8 rack1 - UN 192.168.93.31 75.12 KiB 256 65.3% d48c61b7-551b-4805-b8cc-b915d039f298 rack1 - UN 192.168.56.80 75 KiB 256 71.7% 91fc4107-e447-4605-8cbf-3916f9fd8abf rack1 - ``` - -- Create a Test Keyspace with Tables. 
Log in to one of the Cassandra pods and run the following command from within the pod.
-  ```
-  cassandra@cassandra-openebs-node-0:/$ cqlsh <service-name>.<namespace>.svc.cluster.local
-  ```
-  Example command:
-  ```
-  cassandra@cassandra-openebs-node-0:/$ cqlsh cassandra-openebs-svc.cassandra.svc.cluster.local
-
-  Connected to cassandra-openebs at cassandra-openebs-svc.cassandra.svc.cluster.local:9042.
-  [cqlsh 5.0.1 | Cassandra 3.11.6 | CQL spec 3.4.4 | Native protocol v4]
-  Use HELP for help.
-  cqlsh>
-  ```
-
-- Create a keyspace. Let’s create a keyspace and add a table with some entries to it.
-  ```
-  cqlsh> create keyspace dev
-  ... with replication = {'class':'SimpleStrategy','replication_factor':1};
-  ```
-- Create data objects.
-  ```
-  cqlsh> use dev;
-  cqlsh:dev> create table emp (empid int primary key,
-  ... emp_first varchar, emp_last varchar, emp_dept varchar);
-  ```
-- Insert and query data.
-  ```
-  cqlsh:dev> insert into emp (empid, emp_first, emp_last, emp_dept)
-  ... values (1,'fred','smith','eng');
-
-  cqlsh:dev> select * from emp;
-   empid | emp_dept | emp_first | emp_last
-  -------+----------+-----------+----------
-       1 |      eng |      fred |    smith
-  (1 rows)
-  ```
-- Update data.
-  ```
-  cqlsh:dev> update emp set emp_dept = 'fin' where empid = 1;
-  cqlsh:dev> select * from emp;
-   empid | emp_dept | emp_first | emp_last
-  -------+----------+-----------+----------
-       1 |      fin |      fred |    smith
-  (1 rows)
-  cqlsh:dev> exit
-  ```
-
-### Testing Cassandra Performance on OpenEBS
-
-- Log in to one of the Cassandra pods to run sample load generation commands that write entries to and read them from the database.
-  ```
-  $ kubectl exec -it cassandra-openebs-node-0 bash -n cassandra
-  ```
-- Get the database health status.
-  ```
-  $ nodetool status
-  Datacenter: datacenter1
-  =======================
-  Status=Up/Down
-  |/ State=Normal/Leaving/Joining/Moving
-  --  Address Load Tokens Owns (effective) Host ID Rack
-  UN 192.168.52.94 135.39 MiB 256 32.6% 68206664-b1e7-4e73-9677-14119536e42d rack1
-  UN 192.168.7.79 189.98 MiB 256 36.3% 5f6176f5-c47f-4d12-bd16-c9427baf68a0 rack1
-  UN 192.168.70.87 127.46 MiB 256 31.2% da31ba66-42dd-4c85-a212-a0cb828bbefb rack1
-  ```
-- Go to the directory where the cassandra-stress binary is located.
-  ```
-  cassandra@cassandra-openebs-node-0:/$ cd /opt/cassandra/tools/bin
-  ```
-- Run a write load.
-  ```
-  cassandra@cassandra-openebs-node-0:/opt/cassandra/tools/bin$ ./cassandra-stress write n=1000000 -rate threads=50 -node 192.168.52.94
-  ```
-- Run a read load.
-  ```
-  cassandra@cassandra-openebs-node-0:/opt/cassandra/tools/bin$ ./cassandra-stress read n=200000 -rate threads=50 -node 192.168.52.94
-  ```
diff --git a/k8s/demo/cockroachDB/README.md b/k8s/demo/cockroachDB/README.md
deleted file mode 100644
index 70c759350c..0000000000
--- a/k8s/demo/cockroachDB/README.md
+++ /dev/null
@@ -1,219 +0,0 @@
-# CockroachDB
-
-This document demonstrates the deployment of CockroachDB as a StatefulSet in a Kubernetes cluster. The user can spawn a CockroachDB StatefulSet that uses OpenEBS as its persistent storage.
-
-## Deploy as a StatefulSet
-
-Deploying CockroachDB as a StatefulSet provides the following benefits:
-
-- Stable unique network identifiers.
-- Stable persistent storage.
-- Ordered graceful deployment and scaling.
-- Ordered graceful deletion and termination.
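-
-For example, stable network identity means each pod is reachable at a predictable DNS name published by the headless service. The following is a minimal sketch of how to check this, assuming the default namespace and the headless `cockroachdb` service created later in this guide:
-
-```bash
-# Each StatefulSet pod gets a stable DNS name of the form
-# <pod-name>.<service-name>.<namespace>.svc.cluster.local,
-# e.g. cockroachdb-0.cockroachdb.default.svc.cluster.local.
-# Resolve it from a temporary busybox pod:
-kubectl run dns-test -it --rm --restart=Never --image=busybox -- \
-  nslookup cockroachdb-0.cockroachdb.default.svc.cluster.local
-```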
-
-## Deploying CockroachDB with Persistent Storage
-
-Before starting, check the status of the cluster:
-
-```bash
-ubuntu@kubemaster:~$ kubectl get nodes
-NAME STATUS AGE VERSION
-kubemaster Ready 3d v1.8.2
-kubeminion-01 Ready 3d v1.8.2
-kubeminion-02 Ready 3d v1.8.2
-
-```
-
-Download and apply the CockroachDB YAMLs from the OpenEBS repository:
-
-```bash
-
-ubuntu@kubemaster:~$ wget https://raw.githubusercontent.com/openebs/openebs/master/k8s/demo/cockroachDB/cockroachdb-sc.yaml
-ubuntu@kubemaster:~$ wget https://raw.githubusercontent.com/openebs/openebs/master/k8s/demo/cockroachDB/cockroachdb-sts.yaml
-ubuntu@kubemaster:~$ wget https://raw.githubusercontent.com/openebs/openebs/master/k8s/demo/cockroachDB/cockroachdb-svc.yaml
-
-ubuntu@kubemaster:~$ kubectl apply -f cockroachdb-sc.yaml
-ubuntu@kubemaster:~$ kubectl apply -f cockroachdb-sts.yaml
-ubuntu@kubemaster:~$ kubectl apply -f cockroachdb-svc.yaml
-```
-
-Get the status of running pods:
-
-```bash
-ubuntu@kubemaster:~$ kubectl get pods
-NAME READY STATUS RESTARTS AGE
-cockroachdb-0 1/1 Running 0 22h
-cockroachdb-1 1/1 Running 0 21h
-cockroachdb-2 1/1 Running 0 21h
-maya-apiserver-5f744bdcbc-q5lbb 1/1 Running 0 22h
-openebs-provisioner-6fd9458d96-spvws 1/1 Running 0 22h
-pvc-42e9cafc-d4d7-11e7-8d7b-000c29119159-ctrl-6c8654f6f9-4m7jb 2/2 Running 0 21h
-pvc-42e9cafc-d4d7-11e7-8d7b-000c29119159-rep-7c89c65dd4-p4l6w 1/1 Running 0 21h
-pvc-42e9cafc-d4d7-11e7-8d7b-000c29119159-rep-7c89c65dd4-wmwl2 1/1 Running 0 21h
-pvc-7005a715-d4d7-11e7-8d7b-000c29119159-ctrl-7944b78f8f-r575t 2/2 Running 0 21h
-pvc-7005a715-d4d7-11e7-8d7b-000c29119159-rep-84746c8dbf-glrhq 1/1 Running 0 21h
-pvc-7005a715-d4d7-11e7-8d7b-000c29119159-rep-84746c8dbf-l6zlr 1/1 Running 0 21h
-pvc-ef78ba18-d4d6-11e7-8d7b-000c29119159-ctrl-78f6c95f87-w8tgq 2/2 Running 0 22h
-pvc-ef78ba18-d4d6-11e7-8d7b-000c29119159-rep-649d9fd578-rxthz 1/1 Running 0 22h
-pvc-ef78ba18-d4d6-11e7-8d7b-000c29119159-rep-649d9fd578-wp6xc 1/1 Running 0 22h
-
-```
-
-Get the status of the running StatefulSet:
-
-```bash
-ubuntu@kubemaster:~$ kubectl get statefulset
-NAME DESIRED CURRENT AGE
-cockroachdb 3 3 22h
-
-```
-
-Get the status of the underlying persistent volumes used by the CockroachDB StatefulSet:
-
-```bash
-ubuntu@kubemaster:~$ kubectl get pvc
-NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
-datadir-cockroachdb-0 Bound pvc-ef78ba18-d4d6-11e7-8d7b-000c29119159 10Gi RWO openebs-cockroachdb 22h
-datadir-cockroachdb-1 Bound pvc-42e9cafc-d4d7-11e7-8d7b-000c29119159 10Gi RWO openebs-cockroachdb 22h
-datadir-cockroachdb-2 Bound pvc-7005a715-d4d7-11e7-8d7b-000c29119159 10Gi RWO openebs-cockroachdb 22h
-
-```
-
-Get the status of the services:
-
-```bash
-ubuntu@kubemaster:~$ kubectl get svc
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-cockroachdb ClusterIP None <none> 26257/TCP,8080/TCP 22h
-cockroachdb-public ClusterIP 10.98.208.2 <none> 26257/TCP,8080/TCP 22h
-kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 20d
-maya-apiserver-service ClusterIP 10.98.148.4 <none> 5656/TCP 22h
-pvc-42e9cafc-d4d7-11e7-8d7b-000c29119159-ctrl-svc ClusterIP 10.96.109.197 <none> 3260/TCP,9501/TCP 22h
-pvc-7005a715-d4d7-11e7-8d7b-000c29119159-ctrl-svc ClusterIP 10.105.222.30 <none> 3260/TCP,9501/TCP 22h
-pvc-ef78ba18-d4d6-11e7-8d7b-000c29119159-ctrl-svc ClusterIP 10.110.107.240 <none> 3260/TCP,9501/TCP 22h
-
-```
-
-## Testing your Database
-
-### Using the built-in SQL Client
-
-1. 
Launch a temporary interactive pod and start the built-in SQL client inside it: - -```bash -ubuntu@kubemaster:~kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never -- sql --insecure --host=cockroachdb-public -``` - -2. Run some basic CockroachDB SQL statements: - -```sql -> CREATE DATABASE bank; - -> CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL); - -> INSERT INTO bank.accounts VALUES (1, 1000.50); - -> SELECT * FROM bank.accounts; - -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 1000.5 | -+----+---------+ -(1 row) - -``` - -3. Exit the SQL shell: - -```sql ->\q -``` - -### Using a Load Generator - -1. Download and apply the CockroachDB load generator from the OpenEBS repository: - -```bash - -ubuntu@kubemaster:~wget https://raw.githubusercontent.com/openebs/openebs/master/k8s/demo/cockroachDB/cockroachdb-lg.yaml - -ubuntu@kubemaster:~kubectl apply -f cockroachdb-lg.yaml -``` - -2. Get the status of the job. - -```bash -ubuntu@kubemaster:~kubectl get jobs -NAME DESIRED SUCCESSFUL AGE -cockroachdb-lg 1 0 2m -``` - -3. This is a Kubernetes Job YAML which creates a database called _test_ with a table called _kv_ containing random k:v pairs. - -4. The Kubernetes Job will run for a duration of 5 minutes, which is a configurable value in the YAML. - -5. Launch a temporary interactive pod and start the built-in SQL client inside it: - -```bash -ubuntu@kubemaster:~kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never -- sql --insecure --host=cockroachdb-public -``` - -6. Set the default database as _test_ and display the contents of the _kv_ table. - -```sql -> SHOW DATABASES; -+--------------------+ -| Database | -+--------------------+ -| crdb_internal | -| information_schema | -| pg_catalog | -| system | -| test | -+--------------------+ -(5 rows) - -Time: 7.084556ms - -> SET DATABASE=test; -SET - -Time: 6.169867ms - -test> SELECT * FROM test.kv LIMIT 10; -+----------------------+--------+ -| k | v | -+----------------------+--------+ -| -9223282596810038725 | "\x85" | -| -9223116438301212725 | "\xb4" | -| -9222613679950113217 | * | -| -9222209701222264670 | G | -| -9222188216226059435 | j | -| -9221992469291086418 | y | -| -9221747069894991943 | "\x82" | -| -9221352569080615127 | "\x1e" | -| -9221294188251221564 | "\xe3" | -| -9220587135773113226 | "\x94" | -+----------------------+--------+ -(10 rows) - -Time: 98.004199ms - -test> SELECT COUNT(*) FROM test.kv; -+----------+ -| count(*) | -+----------+ -| 59814 | -+----------+ -(1 row) - -Time: 438.68592ms - -``` - -7. 
Exit the SQL shell: - -```sql ->\q -``` \ No newline at end of file diff --git a/k8s/demo/cockroachDB/cockroachdb-lg.yaml b/k8s/demo/cockroachDB/cockroachdb-lg.yaml deleted file mode 100644 index b451381ca4..0000000000 --- a/k8s/demo/cockroachDB/cockroachdb-lg.yaml +++ /dev/null @@ -1,20 +0,0 @@ -apiVersion: batch/v1 -kind: Job -metadata: - name: cockroachdb-lg -spec: - template: - metadata: - labels: - app: cockroachdb-lg - spec: - restartPolicy: Never - containers: - - name: cockroachdb-lg - image: cockroachdb/loadgen-kv:0.1 - imagePullPolicy: IfNotPresent - command: - - "/kv" - - "--duration" - - "5m" - - "postgres://root@cockroachdb-public:26257/kv?sslmode=disable" diff --git a/k8s/demo/cockroachDB/cockroachdb-sc.yaml b/k8s/demo/cockroachDB/cockroachdb-sc.yaml deleted file mode 100644 index c1838b4eb9..0000000000 --- a/k8s/demo/cockroachDB/cockroachdb-sc.yaml +++ /dev/null @@ -1,12 +0,0 @@ ---- -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: openebs-cockroachdb -provisioner: openebs.io/provisioner-iscsi -parameters: - openebs.io/storage-pool: "default" - openebs.io/jiva-replica-count: "2" - openebs.io/volume-monitor: "true" - openebs.io/capacity: 5G ---- diff --git a/k8s/demo/cockroachDB/cockroachdb-sts.yaml b/k8s/demo/cockroachDB/cockroachdb-sts.yaml deleted file mode 100644 index 8e5cc0b4b3..0000000000 --- a/k8s/demo/cockroachDB/cockroachdb-sts.yaml +++ /dev/null @@ -1,121 +0,0 @@ - ---- -apiVersion: policy/v1beta1 -kind: PodDisruptionBudget -metadata: - name: cockroachdb-budget - labels: - app: cockroachdb -spec: - selector: - matchLabels: - app: cockroachdb - minAvailable: 67% ---- -apiVersion: apps/v1 -kind: StatefulSet -metadata: - name: cockroachdb -spec: - serviceName: "cockroachdb" - replicas: 3 - selector: - matchLabels: - app: cockroachdb - template: - metadata: - labels: - app: cockroachdb - spec: - # Init containers are run only once in the lifetime of a pod, before - # it's started up for the first time. It has to exit successfully - # before the pod's main containers are allowed to start. - # This particular init container does a DNS lookup for other pods in - # the set to help determine whether or not a cluster already exists. - # If any other pods exist, it creates a file in the cockroach-data - # directory to pass that information along to the primary container that - # has to decide what command-line flags to use when starting CockroachDB. - # This only matters when a pod's persistent volume is empty - if it has - # data from a previous execution, that data will always be used. 
- # - # If your Kubernetes cluster uses a custom DNS domain, you will have - # to add an additional arg to this pod: "-domain=" - initContainers: - - name: bootstrap - image: cockroachdb/cockroach-k8s-init:0.2 - imagePullPolicy: IfNotPresent - args: - - "-on-start=/on-start.sh" - - "-service=cockroachdb" - env: - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - volumeMounts: - - name: datadir - mountPath: /cockroach/cockroach-data - affinity: - podAntiAffinity: - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 100 - podAffinityTerm: - labelSelector: - matchExpressions: - - key: app - operator: In - values: - - cockroachdb - topologyKey: kubernetes.io/hostname - containers: - - name: cockroachdb - image: cockroachdb/cockroach:v1.1.1 - imagePullPolicy: IfNotPresent - ports: - - containerPort: 26257 - name: grpc - - containerPort: 8080 - name: http - volumeMounts: - - name: datadir - mountPath: /cockroach/cockroach-data - command: - - "/bin/bash" - - "-ecx" - - | - # The use of qualified `hostname -f` is crucial: - # Other nodes aren't able to look up the unqualified hostname. - CRARGS=("start" "--logtostderr" "--insecure" "--host" "$(hostname -f)" "--http-host" "0.0.0.0" "--cache" "25%" "--max-sql-memory" "25%") - # We only want to initialize a new cluster (by omitting the join flag) - # if we're sure that we're the first node (i.e. index 0) and that - # there aren't any other nodes running as part of the cluster that - # this is supposed to be a part of (which indicates that a cluster - # already exists and we should make sure not to create a new one). - # It's fine to run without --join on a restart if there aren't any - # other nodes. - if [ ! "$(hostname)" == "cockroachdb-0" ] || \ - [ -e "/cockroach/cockroach-data/cluster_exists_marker" ] - then - # We don't join cockroachdb in order to avoid a node attempting - # to join itself, which currently doesn't work - # (https://github.com/cockroachdb/cockroach/issues/9625). - CRARGS+=("--join" "cockroachdb-public") - fi - exec /cockroach/cockroach ${CRARGS[*]} - # No pre-stop hook is required, a SIGTERM plus some time is all that's - # needed for graceful shutdown of a node. - terminationGracePeriodSeconds: 60 - volumes: - - name: datadir - persistentVolumeClaim: - claimName: datadir - volumeClaimTemplates: - - metadata: - name: datadir - spec: - storageClassName: openebs-jiva-default - accessModes: - - "ReadWriteOnce" - resources: - requests: - storage: 10G diff --git a/k8s/demo/cockroachDB/cockroachdb-svc.yaml b/k8s/demo/cockroachDB/cockroachdb-svc.yaml deleted file mode 100644 index 6da5e5ded3..0000000000 --- a/k8s/demo/cockroachDB/cockroachdb-svc.yaml +++ /dev/null @@ -1,55 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - # This service is meant to be used by clients of the database. It exposes a ClusterIP that will - # automatically load balance connections to the different database pods. - name: cockroachdb-public - labels: - app: cockroachdb -spec: - ports: - # The main port, served by gRPC, serves Postgres-flavor SQL, internode - # traffic and the cli. - - port: 26257 - targetPort: 26257 - name: grpc - # The secondary port serves the UI as well as health and debug endpoints. - - port: 8080 - targetPort: 8080 - name: http - selector: - app: cockroachdb ---- -apiVersion: v1 -kind: Service -metadata: - # This service only exists to create DNS entries for each pod in the stateful - # set such that they can resolve each other's IP addresses. 
It does not - # create a load-balanced ClusterIP and should not be used directly by clients - # in most circumstances. - name: cockroachdb - labels: - app: cockroachdb - annotations: - # This is needed to make the peer-finder work properly and to help avoid - # edge cases where instance 0 comes up after losing its data and needs to - # decide whether it should create a new cluster or try to join an existing - # one. If it creates a new cluster when it should have joined an existing - # one, we'd end up with two separate clusters listening at the same service - # endpoint, which would be very bad. - service.alpha.kubernetes.io/tolerate-unready-endpoints: "true" - # Enable automatic monitoring of all instances when Prometheus is running in the cluster. - prometheus.io/scrape: "true" - prometheus.io/path: "_status/vars" - prometheus.io/port: "8080" -spec: - ports: - - port: 26257 - targetPort: 26257 - name: grpc - - port: 8080 - targetPort: 8080 - name: http - clusterIP: None - selector: - app: cockroachdb \ No newline at end of file diff --git a/k8s/demo/couchbase/README.md b/k8s/demo/couchbase/README.md deleted file mode 100644 index b33b11e38f..0000000000 --- a/k8s/demo/couchbase/README.md +++ /dev/null @@ -1,148 +0,0 @@ -# Couchbase - -This document demonstrates the deployment of Couchbase as a StatefulSet in a Kubernetes cluster. The user can spawn a Couchbase StatefulSet that will use OpenEBS as its persistent storage. - -## Deploy as a StatefulSet - -Deploying Couchbase as a StatefulSet provides the following benefits: - -- Stable unique network identifiers. -- Stable persistent storage. -- Ordered graceful deployment and scaling. -- Ordered graceful deletion and termination. - -## Deploy Couchbase with Persistent Storage - -Before getting started check the status of the cluster: - -```bash -ubuntu@kubemaster:~kubectl get nodes -NAME STATUS AGE VERSION -kubemaster Ready 3d v1.8.2 -kubeminion-01 Ready 3d v1.8.2 -kubeminion-02 Ready 3d v1.8.2 - -``` - -Download and apply the Couchbase YAML from OpenEBS repository: - -```bash -ubuntu@kubemaster:~wget https://raw.githubusercontent.com/openebs/openebs/master/k8s/demo/couchbase/couchbase-statefulset.yml -ubuntu@kubemaster:~wget https://raw.githubusercontent.com/openebs/openebs/master/k8s/demo/couchbase/couchbase-service.yml - - -ubuntu@kubemaster:~kubectl apply -f couchbase-statefulset.yml -ubuntu@kubemaster:~kubectl apply -f couchbase-service.yml -``` - -Get the status of running pods: - -```bash -ubuntu@kubemaster:~$ kubectl get pods --all-namespaces -NAMESPACE NAME READY STATUS RESTARTS AGE -default couchbase-0 1/1 Running 0 11h -default couchbase-1 1/1 Running 0 11h -default maya-apiserver-6fc5b4d59c-mg9k2 1/1 Running 0 3d -default openebs-provisioner-6d9b78696d-h647b 1/1 Running 0 3d -default pvc-16210b06-c7ba-11e7-892e-000c29119159-ctrl-78db5f845b-v7w5s 1/1 Running 0 11h -default pvc-16210b06-c7ba-11e7-892e-000c29119159-rep-94d9844df-78zsm 1/1 Running 0 11h -default pvc-16210b06-c7ba-11e7-892e-000c29119159-rep-94d9844df-rh4xs 1/1 Running 0 11h -default pvc-40e1b64f-c7ba-11e7-892e-000c29119159-ctrl-c54b6969b-75mjj 1/1 Running 0 11h -default pvc-40e1b64f-c7ba-11e7-892e-000c29119159-rep-6cd4655d87-6rgvm 1/1 Running 0 11h -default pvc-40e1b64f-c7ba-11e7-892e-000c29119159-rep-6cd4655d87-h7w9x 1/1 Running 0 11h -kube-system etcd-o-master01 1/1 Running 0 3d -kube-system kube-apiserver-o-master01 1/1 Running 0 3d -kube-system kube-controller-manager-o-master01 1/1 Running 0 3d -kube-system kube-dns-545bc4bfd4-m4ngc 3/3 Running 0 3d 
-kube-system kube-proxy-4ml5l 1/1 Running 0 3d
-kube-system kube-proxy-7jlpf 1/1 Running 0 3d
-kube-system kube-proxy-cxkpc 1/1 Running 0 3d
-kube-system kube-scheduler-o-master01 1/1 Running 0 3d
-kube-system weave-net-ctfk4 2/2 Running 0 3d
-kube-system weave-net-dwszp 2/2 Running 0 3d
-kube-system weave-net-pzbb7 2/2 Running 0 3d
-
-```
-
-Get the status of the running StatefulSet:
-
-```bash
-ubuntu@kubemaster:~$ kubectl get statefulset
-NAME DESIRED CURRENT AGE
-couchbase 2 2 11h
-
-```
-
-Get the status of the underlying persistent volumes used by the Couchbase StatefulSet:
-
-```bash
-ubuntu@kubemaster:~$ kubectl get pvc
-NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
-couchbase-data-couchbase-0 Bound pvc-16210b06-c7ba-11e7-892e-000c29119159 5G RWO openebs-standard 11h
-couchbase-data-couchbase-1 Bound pvc-40e1b64f-c7ba-11e7-892e-000c29119159 5G RWO openebs-standard 11h
-
-```
-
-Get the status of the services:
-
-```bash
-ubuntu@kubemaster:~$ kubectl get svc
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-couchbase ClusterIP None <none> 8091/TCP 11h
-couchbase-ui NodePort 10.103.161.153 <none> 8091:30438/TCP 11h
-kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d
-maya-apiserver-service ClusterIP 10.111.26.252 <none> 5656/TCP 3d
-
-```
-
-## Launch Couchbase as Server
-
-The Couchbase service YAML creates a NodePort type service to make the Couchbase server available outside the cluster.
-
-Get the IP address of the node running the Couchbase server:
-
-```bash
-ubuntu@kubemaster:~$ kubectl describe pod couchbase-0 | grep Node:
-Node: kubeminion-02/20.10.29.203
-
-```
-Get the port number from the Couchbase UI service:
-
-```bash
-ubuntu@kubemaster:~$ kubectl describe svc couchbase-ui | grep NodePort:
-NodePort: couchbase 30438/TCP
-
-```
-
-Open the following URL in the browser:
-
-```bash
-http://20.10.29.203:30438
-
-```
-
-_Note: The NodePort is dynamically allocated and may vary in a different deployment._
-
-__Provide the _Username_ and _Password___
-
-![Couchbase Login]
-
-The default username is Administrator and the default password is password. Enter the credentials to see the console.
-
-![Couchbase Cluster]
-
-__Click Server Nodes to see how many Couchbase nodes are part of the cluster.
As expected, it shows only one node__ - -![Couchbase Server] - -__Click Data Buckets to see a sample bucket that was created as part of the image__ - -![Couchbase Databuckets] - -__Start Using Couchbase__ - - -[Couchbase Login]: images/couchbase_login.png -[Couchbase Cluster]: images/cluster_overview.png -[Couchbase Server]: images/server_nodes.png -[Couchbase Databuckets]: images/data_buckets.png diff --git a/k8s/demo/couchbase/couchbase-service.yml b/k8s/demo/couchbase/couchbase-service.yml deleted file mode 100644 index 0ad4967cf6..0000000000 --- a/k8s/demo/couchbase/couchbase-service.yml +++ /dev/null @@ -1,29 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: couchbase - labels: - app: couchbase -spec: - ports: - - port: 8091 - name: couchbase - # *.couchbase.default.svc.cluster.local - clusterIP: None - selector: - app: couchbase ---- -apiVersion: v1 -kind: Service -metadata: - name: couchbase-ui - labels: - app: couchbase-ui -spec: - ports: - - port: 8091 - name: couchbase - selector: - app: couchbase - sessionAffinity: ClientIP - type: NodePort diff --git a/k8s/demo/couchbase/couchbase-statefulset.yml b/k8s/demo/couchbase/couchbase-statefulset.yml deleted file mode 100644 index 8e132e28ea..0000000000 --- a/k8s/demo/couchbase/couchbase-statefulset.yml +++ /dev/null @@ -1,39 +0,0 @@ -apiVersion: apps/v1 -kind: StatefulSet -metadata: - name: couchbase -spec: - serviceName: "couchbase" - replicas: 2 - selector: - matchLabels: - app: couchbase - template: - metadata: - labels: - app: couchbase - spec: - terminationGracePeriodSeconds: 0 - containers: - - name: couchbase - image: saturnism/couchbase:k8s-petset - ports: - - containerPort: 8091 - volumeMounts: - - name: couchbase-data - mountPath: /opt/couchbase/var - env: - - name: COUCHBASE_MASTER - value: "couchbase-0.couchbase.default.svc.cluster.local" - - name: AUTO_REBALANCE - value: "false" - volumeClaimTemplates: - - metadata: - name: couchbase-data - annotations: - volume.beta.kubernetes.io/storage-class: openebs-jiva-default - spec: - accessModes: [ "ReadWriteOnce" ] - resources: - requests: - storage: 5G diff --git a/k8s/demo/couchbase/images/cluster_overview.png b/k8s/demo/couchbase/images/cluster_overview.png deleted file mode 100644 index 4ffb3e8a70..0000000000 Binary files a/k8s/demo/couchbase/images/cluster_overview.png and /dev/null differ diff --git a/k8s/demo/couchbase/images/couchbase_login.png b/k8s/demo/couchbase/images/couchbase_login.png deleted file mode 100644 index 78fff1a48c..0000000000 Binary files a/k8s/demo/couchbase/images/couchbase_login.png and /dev/null differ diff --git a/k8s/demo/couchbase/images/data_buckets.png b/k8s/demo/couchbase/images/data_buckets.png deleted file mode 100644 index 37877e4b1f..0000000000 Binary files a/k8s/demo/couchbase/images/data_buckets.png and /dev/null differ diff --git a/k8s/demo/couchbase/images/server_nodes.png b/k8s/demo/couchbase/images/server_nodes.png deleted file mode 100644 index 89212be1f2..0000000000 Binary files a/k8s/demo/couchbase/images/server_nodes.png and /dev/null differ diff --git a/k8s/demo/crunchy-postgres/README.md b/k8s/demo/crunchy-postgres/README.md deleted file mode 100644 index b2c1f41706..0000000000 --- a/k8s/demo/crunchy-postgres/README.md +++ /dev/null @@ -1,216 +0,0 @@ -# Running Crunchy-Postgres with OpenEBS - -This tutorial provides detailed instructions to run a PostgreSQL StatefulSet with OpenEBS storage and perform -simple database operations to verify successful deployment. 
- -## Crunchy-Postgres - -The postgres container used in the StatefulSet is sourced from [CrunchyData](https://github.com/CrunchyData/crunchy-containers). CrunchyData provides cloud agnostic PostgreSQL container technology that is designed for production -workloads with cloud native High Availability, Disaster Recovery, and monitoring. - -## Prerequisite - -A fully configured (preferably, multi-node) Kubernetes cluster configured with the OpenEBS operator and OpenEBS -storage classes. - -``` -test@Master:~/crunchy-postgres$ kubectl get pods -n openebs -NAME READY STATUS RESTARTS AGE -cspc-operator-5569b48f6d-qr5r5 1/1 Running 0 6d21h -maya-apiserver-7d5b667cc-x6qsb 1/1 Running 0 6d21h -openebs-admission-server-7b85697d8d-26nqw 1/1 Running 0 6d21h -openebs-localpv-provisioner-9844ffcd5-mrbtr 1/1 Running 0 6d21h -openebs-ndm-8fkl6 1/1 Running 0 6d4h -openebs-ndm-j9hcz 1/1 Running 0 6d4h -openebs-ndm-operator-7c955ff9c9-7lcl8 1/1 Running 0 6d21h -openebs-ndm-r8dfk 1/1 Running 0 6d4h -openebs-provisioner-64ccdd9c54-jxrrq 1/1 Running 0 6d21h -openebs-snapshot-operator-8ffc4ffdd-8zzpb 2/2 Running 0 6d21h -``` - -## Deploy the Crunchy-Postgres StatefulSet with OpenEBS Storage - -The StatefulSet specification JSONs are available at OpenEBS/k8s/demo/crunchy-postgres. - -The number of replicas in the StatefulSet can be modified in the *set.json* file. This example uses 2 replicas, -which includes one master and one slave. The Postgres pods are configured as primary/master or as replica/slave by -a startup script which decides the role based on ordinality assigned to the pod. - -``` -{ - "apiVersion": "apps/v1", - "kind": "StatefulSet", - "metadata": { - "name": "pgset" - }, - "spec": { - "serviceName": "pgset", - "replicas": 2, - "selector": { - "matchLabels": { - "app": "pgset" - } - }, - "template": { - "metadata": { - "labels": { - "app": "pgset" - } - }, -: -``` - -Execute the following commands: - -``` -test@Master:~$ cd openebs/k8s/demo/crunchy-postgres/ - -test@Master:~/openebs/k8s/demo/crunchy-postgres$ ls -ltr -total 32 --rw-rw-r-- 1 test test 300 Nov 14 16:27 set-service.json --rw-rw-r-- 1 test test 97 Nov 14 16:27 set-sa.json --rw-rw-r-- 1 test test 558 Nov 14 16:27 set-replica-service.json --rw-rw-r-- 1 test test 555 Nov 14 16:27 set-master-service.json --rw-rw-r-- 1 test test 1879 Nov 14 16:27 set.json --rwxrwxr-x 1 test test 1403 Nov 14 16:27 run.sh --rw-rw-r-- 1 test test 1292 Nov 14 16:27 README.md --rwxrwxr-x 1 test test 799 Nov 14 16:27 cleanup.sh -``` - -``` -test@Master:~/crunchy-postgres$ ./run.sh -+++ dirname ./run.sh -++ cd . -++ pwd -+ DIR=/home/test/openebs/k8s/demo/crunchy-postgres -+ kubectl create -f /home/test/openebs/k8s/demo/crunchy-postgres/set-sa.json -serviceaccount "pgset-sa" created -+ kubectl create clusterrolebinding permissive-binding --clusterrole=cluster-admin --user=admin --user=kubelet --group=system:serviceaccounts -clusterrolebinding "permissive-binding" created -+ kubectl create -f /home/test/openebs/k8s/demo/crunchy-postgres/set-service.json -service "pgset" created -+ kubectl create -f /home/test/openebs/k8s/demo/crunchy-postgres/set-primary-service.json -service "pgset-primary" created -+ kubectl create -f /home/test/openebs/k8s/demo/crunchy-postgres/set-replica-service.json -service "pgset-replica" created -+ kubectl create -f /home/test/openebs/k8s/demo/crunchy-postgres/set.json -statefulset "pgset" created -``` - -Verify that all the OpenEBS persistent volumes are created and the Crunchy-Postgres services and pods are running. 
- -``` -test@Master:~/crunchy-postgres$ kubectl get statefulsets -NAME DESIRED CURRENT AGE -pgset 2 2 15m - -test@Master:~/crunchy-postgres$ kubectl get pods -NAME READY STATUS RESTARTS AGE -maya-apiserver-2245240594-ktfs2 1/1 Running 0 3h -openebs-provisioner-4230626287-t8pn9 1/1 Running 0 3h -pgset-0 1/1 Running 0 3m -pgset-1 1/1 Running 0 3m -pvc-17e21bd3-c948-11e7-a157-000c298ff5fc-ctrl-3572426415-n8ctb 1/1 Running 0 3m -pvc-17e21bd3-c948-11e7-a157-000c298ff5fc-rep-3113668378-9437w 1/1 Running 0 3m -pvc-17e21bd3-c948-11e7-a157-000c298ff5fc-rep-3113668378-xnt12 1/1 Running 0 3m -pvc-1e96a86b-c948-11e7-a157-000c298ff5fc-ctrl-2773298268-x3dlb 1/1 Running 0 3m -pvc-1e96a86b-c948-11e7-a157-000c298ff5fc-rep-723453814-hpkw3 1/1 Running 0 3m -pvc-1e96a86b-c948-11e7-a157-000c298ff5fc-rep-723453814-tpjqm 1/1 Running 0 3m - -test@Master:~/crunchy-postgres$ kubectl get svc -NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE -kubernetes 10.96.0.1 443/TCP 4h -maya-apiserver-service 10.98.249.191 5656/TCP 3h -pgset None 5432/TCP 14m -pgset-primary 10.104.32.113 5432/TCP 14m -pgset-replica 10.99.40.69 5432/TCP 14m -pvc-17e21bd3-c948-11e7-a157-000c298ff5fc-ctrl-svc 10.111.243.121 3260/TCP,9501/TCP 14m -pvc-1e96a86b-c948-11e7-a157-000c298ff5fc-ctrl-svc 10.102.138.94 3260/TCP,9501/TCP 13m - - -test@Master:~/crunchy-postgres$ kubectl get clusterrolebinding permissive-binding -NAME AGE -permissive-binding 15m -test@Master:~/crunchy-postgres$ -``` - -Note: It may take some time for the pods to start as the images must be pulled and instantiated. This is also -dependent on the network speed. - - -## Verify Successful Crunchy-Postgres Deployment - -The verification procedure can be carried out using the following steps: - -- Check cluster replication status between the Postgres primary and replica -- Create a table in the default database as Postgres user "testuser" on the primary -- Check data synchronization on the replica for the table you have created -- Verify that table is not created on the replica - -### Step-1: Install the PostgreSQL-Client - -Install the PostgreSQL Client Utility (psql) on any of the Kubernetes machines to perform database operations -from the command line. - -``` -sudo apt-get install postgresql-client -``` - -### Step-2: Verify Cluster Replication Status on Crunchy-Postgres Cluster - -Identify the IP Address of the primary (pgset-0) pod or the service (pgset-primary) and execute the following -query: - -``` -test@Master:~$ psql -h 10.47.0.3 -U testuser postgres -c 'select * from pg_stat_replication' - - pid | usesysid | usename | application_name | client_addr | client_hostname | client_port | backend_start | backend_xmin | state | sent_lsn | write_lsn | flush_lsn | replay_lsn | write_lag | flush_lag | replay_lag | sync_priority | sync_state ------+----------+-------------+------------------+-------------+-----------------+-------------+-------------------------------+--------------+-----------+-----------+-----------+-----------+------------+-----------+-----------+------------+---------------+------------ - 94 | 16391 | primaryuser | pgset-1 | 10.44.0.0 | | 60460 | 2017-11-14 09:29:21.990782-05 | | streaming | 0/3014278 | 0/3014278 | 0/3014278 | 0/3014278 | | | | 0 | async -(1 row) -``` - -The replica should be registered for "asynchronous" replication. - -### Step-3: Create a Table with Test Content on the Default Database - -These queries should be executed on the primary. 
- -``` -test@Master:~$ psql -h 10.47.0.3 -U testuser postgres -c 'create table foo(id int)' -Password for user testuser: -CREATE TABLE -``` -``` -test@Master:~/crunchy-postgres$ psql -h 10.47.0.3 -U testuser postgres -c 'insert into foo values (1)' -Password for user testuser: -INSERT 0 1 -``` - -### Step-4: Verify Data Synchronization on Replica - -Identify the IP Address of the replica (pgset-1) pod or the service (pgset-replica) and execute the following -command: - -``` -test@Master:~$ psql -h 10.44.0.6 -U testuser postgres -c 'table foo' -Password for user testuser: - id ----- - 1 -(1 row) -``` - -Verify that the table content is replicated successfully. - -### Step-5: Verify Database Write Restricted on Replica - -Attempt to create a new table on the replica, and verify that the creation is unsuccessful. - -``` -test@Master:~$ psql -h 10.44.0.6 -U testuser postgres -c 'create table bar(id int)' -Password for user testuser: -ERROR: cannot execute CREATE TABLE in a read-only transaction -``` - - diff --git a/k8s/demo/crunchy-postgres/cleanup.sh b/k8s/demo/crunchy-postgres/cleanup.sh deleted file mode 100755 index cdc37791b9..0000000000 --- a/k8s/demo/crunchy-postgres/cleanup.sh +++ /dev/null @@ -1,20 +0,0 @@ -#!/bin/bash -# Copyright 2017 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -kubectl delete statefulset pgset -kubectl delete sa pgset-sa -kubectl delete service pgset pgset-primary pgset-replica -kubectl delete pod pgset-0 pgset-1 -kubectl delete pvc pgdata-pgset-0 pgdata-pgset-1 -kubectl delete clusterrolebinding permissive-binding diff --git a/k8s/demo/crunchy-postgres/run.sh b/k8s/demo/crunchy-postgres/run.sh deleted file mode 100755 index 4610514384..0000000000 --- a/k8s/demo/crunchy-postgres/run.sh +++ /dev/null @@ -1,39 +0,0 @@ -#!/bin/bash -x -# Copyright 2017 OpenEBS Authors -# Made minor modifications to make this work with OpenEBS -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -#source $CCPROOT/examples/envvars.sh - -DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" - -#$DIR/cleanup.sh - -# create the service account used in the containers -kubectl create -f $DIR/set-sa.json - -# For version Kube 1.6, we must allow the service account to perform -# a label command. For this example, OpenEBS opens up wide permissions -# for all service accounts. This is NOT for production! 
-kubectl create clusterrolebinding permissive-binding \ - --clusterrole=cluster-admin \ - --user=admin \ - --user=kubelet \ - --group=system:serviceaccounts - -# create the services for the example -kubectl create -f $DIR/set-service.json -kubectl create -f $DIR/set-primary-service.json -kubectl create -f $DIR/set-replica-service.json -kubectl create -f $DIR/set.json diff --git a/k8s/demo/crunchy-postgres/set-primary-service.json b/k8s/demo/crunchy-postgres/set-primary-service.json deleted file mode 100644 index 5c3b3f5cd9..0000000000 --- a/k8s/demo/crunchy-postgres/set-primary-service.json +++ /dev/null @@ -1,23 +0,0 @@ -{ - "kind": "Service", - "apiVersion": "v1", - "metadata": { - "name": "pgset-primary", - "labels": { - "name": "pgset-primary" - } - }, - "spec": { - "ports": [{ - "protocol": "TCP", - "port": 5432, - "targetPort": 5432, - "nodePort": 0 - }], - "selector": { - "name": "pgset-primary" - }, - "type": "ClusterIP", - "sessionAffinity": "None" - } -} diff --git a/k8s/demo/crunchy-postgres/set-replica-service.json b/k8s/demo/crunchy-postgres/set-replica-service.json deleted file mode 100644 index 3a5d06aa4c..0000000000 --- a/k8s/demo/crunchy-postgres/set-replica-service.json +++ /dev/null @@ -1,23 +0,0 @@ -{ - "kind": "Service", - "apiVersion": "v1", - "metadata": { - "name": "pgset-replica", - "labels": { - "name": "pgset-replica" - } - }, - "spec": { - "ports": [{ - "protocol": "TCP", - "port": 5432, - "targetPort": 5432, - "nodePort": 0 - }], - "selector": { - "name": "pgset-replica" - }, - "type": "ClusterIP", - "sessionAffinity": "None" - } -} diff --git a/k8s/demo/crunchy-postgres/set-sa.json b/k8s/demo/crunchy-postgres/set-sa.json deleted file mode 100644 index 30ec9fb8df..0000000000 --- a/k8s/demo/crunchy-postgres/set-sa.json +++ /dev/null @@ -1,7 +0,0 @@ -{ - "apiVersion": "v1", - "kind": "ServiceAccount", - "metadata": { - "name": "pgset-sa" - } -} diff --git a/k8s/demo/crunchy-postgres/set-service.json b/k8s/demo/crunchy-postgres/set-service.json deleted file mode 100644 index 4de3dfcbdd..0000000000 --- a/k8s/demo/crunchy-postgres/set-service.json +++ /dev/null @@ -1,22 +0,0 @@ -{ - "apiVersion": "v1", - "kind": "Service", - "metadata": { - "name": "pgset", - "labels": { - "app": "pgset" - } - }, - "spec": { - "ports": [ - { - "port": 5432, - "name": "web" - } - ], - "clusterIP": "None", - "selector": { - "app": "pgset" - } - } -} diff --git a/k8s/demo/crunchy-postgres/set.json b/k8s/demo/crunchy-postgres/set.json deleted file mode 100644 index eaa6f302cf..0000000000 --- a/k8s/demo/crunchy-postgres/set.json +++ /dev/null @@ -1,97 +0,0 @@ -{ - "apiVersion": "apps/v1", - "kind": "StatefulSet", - "metadata": { - "name": "pgset" - }, - "spec": { - "serviceName": "pgset", - "replicas": 2, - "selector": { - "matchLabels": { - "app": "pgset" - } - }, - "template": { - "metadata": { - "labels": { - "app": "pgset" - } - }, - "spec": { - "securityContext": - { - "fsGroup": 26 - }, - "containers": [ - { - "name": "pgset", - "image": "crunchydata/crunchy-postgres:centos7-10.0-1.6.0", - "ports": [ - { - "containerPort": 5432, - "name": "postgres" - } - ], - "env": [{ - "name": "PG_PRIMARY_USER", - "value": "primaryuser" - }, { - "name": "PGHOST", - "value": "/tmp" - }, { - "name": "PG_MODE", - "value": "set" - }, { - "name": "PG_PRIMARY_PASSWORD", - "value": "password" - }, { - "name": "PG_USER", - "value": "testuser" - }, { - "name": "PG_PASSWORD", - "value": "password" - }, { - "name": "PG_DATABASE", - "value": "userdb" - }, { - "name": "PG_ROOT_PASSWORD", - "value": 
"password" - }, { - "name": "PG_PRIMARY_PORT", - "value": "5432" - }, { - "name": "PG_PRIMARY_HOST", - "value": "pgset-primary" - }], - "volumeMounts": [ - { - "name": "pgdata", - "mountPath": "/pgdata", - "readOnly": false - } - ] - } - ] - } - }, - "volumeClaimTemplates": [ - { - "metadata": { - "name": "pgdata" - }, - "spec": { - "accessModes": [ - "ReadWriteOnce" - ], - "storageClassName": "openebs-jiva-default", - "resources": { - "requests": { - "storage": "400M" - } - } - } - } - ] - } -} diff --git a/k8s/demo/dbench/README.md b/k8s/demo/dbench/README.md deleted file mode 100644 index e8639ec0ca..0000000000 --- a/k8s/demo/dbench/README.md +++ /dev/null @@ -1,128 +0,0 @@ -This folder contains sample YAMLs for running bench marking tests -using fio scripts present at [openebs/fbench](https://github.com/openebs/fbench/). - -To get optimal performance, it is essential to tune the storage settings for the -type of workload and as well as customize the storage to run along side -application workloads. - -Depending on the storage engine of choice, the tunables will vary. `Jiva` being -the most simple one to use, I will start with that and then follow up with `cStor`. - -## Running bench marking tests using `Jiva` - -Jiva storage engine involves - a target pod that receives IO from the application -and then makes copies of data sends them synchronously to one or more replica pods -for persisting the data. The replica pods will write the data into host-path on -the node where they are running. The host-path is configured by a CR called -StoragePool, with default path as `/var/openebs`. - -Some of the ways to tune the `jiva` storage engine for benchmark tests. - -### Configure the `Jiva` storage pool host-path - -Edit the default StoragePool CR and modify the host-path to point to the -correct storage location. - -For instance, if I am using local ssds on GKE, the StoragePool would look like: - -``` -apiVersion: openebs.io/v1alpha1 -kind: StoragePool -metadata: - creationTimestamp: 2019-01-29T07:34:37Z - generation: 1 - name: default - resourceVersion: "4458" - selfLink: /apis/openebs.io/v1alpha1/storagepools/default - uid: 54ff58b7-2398-11e9-83af-42010a8001a6 -spec: - path: /mnt/disks/ssd0/ -``` - -Note that the `/mnt/disks/ssd0` should be formatted with `ext4` and be available -on all the nodes, where replica pods can be scheduled. - -### Configure the Replica Count - -In some cases, application or the underlying storage may be doing the replication. -The number of replicas can be controlled by setting the ReplicaCount policy in -the storage class as follows: -``` -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: openebs-jiva-r1 - annotations: - openebs.io/cas-type: jiva - cas.openebs.io/config: | - - name: ReplicaCount - value: "1" -provisioner: openebs.io/provisioner-iscsi -``` - - -### Configure the nodes on which the target or the replica pods are scheduled. - -To avoid network latencies between the application to target to replica data flow, -it is possible to configure launching the application pods and associated target -and replica pods on the same node. 
-
-```
-apiVersion: storage.k8s.io/v1
-kind: StorageClass
-metadata:
-  name: openebs-jiva-r1
-  annotations:
-    openebs.io/cas-type: jiva
-    cas.openebs.io/config: |
-      - name: ReplicaCount
-        value: "1"
-      - name: TargetNodeSelector
-        value: |-
-          "kubernetes.io/hostname": "gke-kmova-perf-default-pool-8691dea0-512r"
-      - name: ReplicaNodeSelector
-        value: |-
-          "kubernetes.io/hostname": "gke-kmova-perf-default-pool-8691dea0-512r"
-provisioner: openebs.io/provisioner-iscsi
-```
-
-The application also needs to have a `nodeSelector` pointing to the same
-host as the above.
-
-
-## Running benchmarking tests using `cStor`
-
-The cStor storage engine is the latest addition to the OpenEBS family. It differs
-from `Jiva` in the following aspects:
-- Each Replica instance (called a cStor pool) can handle data from multiple volumes.
-- The Replica instance needs to be given access to a raw block device, as
-  opposed to an `ext4` mounted path.
-
-Please refer to https://docs.openebs.io for instructions on instantiating the cStor Pool
-(aka Replica instances).
-
-Note: There is a cstor-sparse-pool that is launched by default as part of the
-openebs installation, which makes use of sparse files on the OS disk.
-Running performance benchmark tests on the cstor-sparse pool can result in the OS
-disk running out of space and cause the system to freeze.
-
-cStor also supports running a volume with a single replica, using the same
-configuration as shown above for `Jiva`.
-
-Support for specifying `TargetNodeSelector` for
-cStor targets is planned for the coming releases. However, if you are running 0.8.0,
-to pin the target to a node, its deployment needs to be edited
-to specify the `nodeSelector`.
-
-The cStor target also allows tuning the number of IO worker threads (called
-`LU Workers`) and the queue depth. For sequential workloads, having a single
-LU worker performs better. These tunables will be exposed via the storage
-policies, similar to the replica count, in 0.9. These values are translated
-into a configuration file (istgt.conf) that resides within the target pod.
-To modify them, - `kubectl exec -it -n openebs -c cstor-istgt /bin/bash` - - cd /usr/local/istgt - - sed -i '/Luworkers 6/c\ Luworkers 1' istgt.conf - - check for the istgt process `ps -aux` - - kill -9 pid diff --git a/k8s/demo/dbench/dbench-hostpath.yaml b/k8s/demo/dbench/dbench-hostpath.yaml deleted file mode 100644 index 17526ea4f0..0000000000 --- a/k8s/demo/dbench/dbench-hostpath.yaml +++ /dev/null @@ -1,32 +0,0 @@ ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: dbench-hp -spec: - template: - spec: - containers: - - name: dbench-hp - image: openebs/fbench:latest - imagePullPolicy: Always - env: - - name: DBENCH_MOUNTPOINT - value: /data - # - name: DBENCH_QUICK - # value: "yes" - - name: FIO_SIZE - value: 1G - - name: FIO_OFFSET_INCREMENT - value: 256M - # - name: FIO_DIRECT - # value: "0" - volumeMounts: - - name: dbench-pv - mountPath: /data - restartPolicy: Never - volumes: - - name: dbench-pv - hostPath: - path: /var/openebs/dbench-data - backoffLimit: 4 diff --git a/k8s/demo/dbench/dbench-jdefault-r1.yaml b/k8s/demo/dbench/dbench-jdefault-r1.yaml deleted file mode 100644 index 59aa5ae91a..0000000000 --- a/k8s/demo/dbench/dbench-jdefault-r1.yaml +++ /dev/null @@ -1,55 +0,0 @@ -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: dbench-jd-pv-claim - annotations: - cas.openebs.io/config: | - - name: ReplicaCount - value: "1" - - name: TargetNodeSelector - value: |- - "kubernetes.io/hostname": "gke-kmova-perf-default-pool-8691dea0-512r" - - name: ReplicaNodeSelector - value: |- - "kubernetes.io/hostname": "gke-kmova-perf-default-pool-8691dea0-512r" -spec: - storageClassName: openebs-jiva-default - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 5Gi ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: dbench-jd -spec: - template: - spec: - nodeSelector: - kubernetes.io/hostname: gke-kmova-perf-default-pool-8691dea0-512r - containers: - - name: dbench-jd - image: openebs/fbench:latest - imagePullPolicy: Always - env: - - name: DBENCH_MOUNTPOINT - value: /data - # - name: DBENCH_QUICK - # value: "yes" - - name: FIO_SIZE - value: 1G - - name: FIO_OFFSET_INCREMENT - value: 256M - # - name: FIO_DIRECT - # value: "0" - volumeMounts: - - name: dbench-pv - mountPath: /data - restartPolicy: Never - volumes: - - name: dbench-pv - persistentVolumeClaim: - claimName: dbench-jd-pv-claim - backoffLimit: 4 diff --git a/k8s/demo/dbench/dbench-jdefault.yaml b/k8s/demo/dbench/dbench-jdefault.yaml deleted file mode 100644 index 496fc81dae..0000000000 --- a/k8s/demo/dbench/dbench-jdefault.yaml +++ /dev/null @@ -1,43 +0,0 @@ -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: dbench-jd-pv-claim -spec: - storageClassName: openebs-jiva-default - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 5Gi ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: dbench-jd -spec: - template: - spec: - containers: - - name: dbench-jd - image: openebs/fbench:latest - imagePullPolicy: Always - env: - - name: DBENCH_MOUNTPOINT - value: /data - # - name: DBENCH_QUICK - # value: "yes" - - name: FIO_SIZE - value: 1G - - name: FIO_OFFSET_INCREMENT - value: 256M - # - name: FIO_DIRECT - # value: "0" - volumeMounts: - - name: dbench-pv - mountPath: /data - restartPolicy: Never - volumes: - - name: dbench-pv - persistentVolumeClaim: - claimName: dbench-jd-pv-claim - backoffLimit: 4 diff --git a/k8s/demo/dbench/dbench-localpv-hostdevice.yaml b/k8s/demo/dbench/dbench-localpv-hostdevice.yaml deleted file mode 100644 index 4c96a0887c..0000000000 --- 
a/k8s/demo/dbench/dbench-localpv-hostdevice.yaml +++ /dev/null @@ -1,46 +0,0 @@ ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: dbench-lpv-hd-claim -spec: - storageClassName: openebs-device - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 5Gi ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: dbench-hd -spec: - template: - spec: - #nodeSelector: - # kubernetes.io/hostname: gke-kmova-helm-default-pool-2fc7a594-gtbb - containers: - - name: dbench-hd - image: openebs/fbench:latest - imagePullPolicy: Always - env: - - name: DBENCH_MOUNTPOINT - value: /data - # - name: DBENCH_QUICK - # value: "yes" - - name: FIO_SIZE - value: 1G - - name: FIO_OFFSET_INCREMENT - value: 256M - # - name: FIO_DIRECT - # value: "0" - volumeMounts: - - name: dbench-pv - mountPath: /data - restartPolicy: Never - volumes: - - name: dbench-pv - persistentVolumeClaim: - claimName: dbench-lpv-hd-claim - backoffLimit: 4 diff --git a/k8s/demo/dbench/dbench-localssd.yaml b/k8s/demo/dbench/dbench-localssd.yaml deleted file mode 100644 index 51d5fb1d47..0000000000 --- a/k8s/demo/dbench/dbench-localssd.yaml +++ /dev/null @@ -1,32 +0,0 @@ ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: dbench-localssd -spec: - template: - spec: - containers: - - name: dbench-localssd - image: openebs/fbench:latest - imagePullPolicy: Always - env: - - name: DBENCH_MOUNTPOINT - value: /data - # - name: DBENCH_QUICK - # value: "yes" - - name: FIO_SIZE - value: 175G - - name: FIO_OFFSET_INCREMENT - value: 256M - # - name: FIO_DIRECT - # value: "0" - volumeMounts: - - name: dbench-pv - mountPath: /data - restartPolicy: Never - volumes: - - name: dbench-pv - hostPath: - path: /mnt/disks/ssd0/dbench - backoffLimit: 4 diff --git a/k8s/demo/dbench/dbench-ns.yaml b/k8s/demo/dbench/dbench-ns.yaml deleted file mode 100644 index 49fe664b68..0000000000 --- a/k8s/demo/dbench/dbench-ns.yaml +++ /dev/null @@ -1,33 +0,0 @@ -apiVersion: batch/v1 -kind: Job -metadata: - name: dbench -spec: - template: - spec: - nodeSelector: - kubernetes.io/hostname: gke-kmova-perf-default-pool-8691dea0-512r - containers: - - name: dbench - image: openebs/fbench:latest - imagePullPolicy: Always - env: - - name: DBENCH_MOUNTPOINT - value: /data - # - name: DBENCH_QUICK - # value: "yes" - - name: FIO_SIZE - value: 1G - - name: FIO_OFFSET_INCREMENT - value: 256M - # - name: FIO_DIRECT - # value: "0" - volumeMounts: - - name: dbench-pv - mountPath: /data - restartPolicy: Never - volumes: - - name: dbench-pv - persistentVolumeClaim: - claimName: dbench-pv-claim - backoffLimit: 4 diff --git a/k8s/demo/dbench/dbench-r1-pvc.yaml b/k8s/demo/dbench/dbench-r1-pvc.yaml deleted file mode 100644 index 91eed8a7cf..0000000000 --- a/k8s/demo/dbench/dbench-r1-pvc.yaml +++ /dev/null @@ -1,11 +0,0 @@ -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: dbench-pv-claim -spec: - storageClassName: openebs-cstor-r1 - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 5Gi diff --git a/k8s/demo/dbench/dbench.yaml b/k8s/demo/dbench/dbench.yaml deleted file mode 100644 index 2457b01fd3..0000000000 --- a/k8s/demo/dbench/dbench.yaml +++ /dev/null @@ -1,49 +0,0 @@ -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: dbench-pv-claim -spec: - storageClassName: openebs-cstor-disk - # storageClassName: openebs-jiva-default - # storageClassName: gp2 - # storageClassName: local-storage - # storageClassName: ibmc-block-bronze - # storageClassName: ibmc-block-silver - # storageClassName: ibmc-block-gold - accessModes: - - 
ReadWriteOnce
-  resources:
-    requests:
-      storage: 5Gi
----
-apiVersion: batch/v1
-kind: Job
-metadata:
-  name: dbench
-spec:
-  template:
-    spec:
-      containers:
-      - name: dbench
-        image: openebs/fbench:latest
-        imagePullPolicy: Always
-        env:
-          - name: DBENCH_MOUNTPOINT
-            value: /data
-          # - name: DBENCH_QUICK
-          #   value: "yes"
-          # - name: FIO_SIZE
-          #   value: 1G
-          # - name: FIO_OFFSET_INCREMENT
-          #   value: 256M
-          # - name: FIO_DIRECT
-          #   value: "0"
-        volumeMounts:
-        - name: dbench-pv
-          mountPath: /data
-      restartPolicy: Never
-      volumes:
-      - name: dbench-pv
-        persistentVolumeClaim:
-          claimName: dbench-pv-claim
-  backoffLimit: 4
diff --git a/k8s/demo/dbench/sc-cstor-rep.yaml b/k8s/demo/dbench/sc-cstor-rep.yaml
deleted file mode 100644
index b65e1854ab..0000000000
--- a/k8s/demo/dbench/sc-cstor-rep.yaml
+++ /dev/null
@@ -1,13 +0,0 @@
-apiVersion: storage.k8s.io/v1
-kind: StorageClass
-metadata:
-  name: openebs-cstor-r1
-  annotations:
-    openebs.io/cas-type: cstor
-    cas.openebs.io/config: |
-      - name: StoragePoolClaim
-        value: "r1-pool"
-      - name: ReplicaCount
-        value: "1"
-provisioner: openebs.io/provisioner-iscsi
----
diff --git a/k8s/demo/dbench/spc-cstor-rep.yaml b/k8s/demo/dbench/spc-cstor-rep.yaml
deleted file mode 100644
index 8eeddb3454..0000000000
--- a/k8s/demo/dbench/spc-cstor-rep.yaml
+++ /dev/null
@@ -1,11 +0,0 @@
----
-apiVersion: openebs.io/v1alpha1
-kind: StoragePoolClaim
-metadata:
-  name: r1-pool
-spec:
-  name: r1-pool
-  type: disk
-  maxPools: 1
-  poolSpec:
-    poolType: striped
diff --git a/k8s/demo/efk/README.md b/k8s/demo/efk/README.md
deleted file mode 100644
index 055d9f2c2c..0000000000
--- a/k8s/demo/efk/README.md
+++ /dev/null
@@ -1,87 +0,0 @@
-Apply the specs in the efk folder.
-Make sure you have first applied the OpenEBS storage classes:
-
-`kubectl apply -f openebs-sc.yaml`
-
-This EFK podspec uses Elasticsearch, Fluentd and Kibana to enable you to perform k8s cluster level logging.
-The Fluentd pods act as collectors, Elasticsearch as the document database, and Kibana as the dashboard for log visualization.
-
-The current podspec for Elasticsearch creates:
- 1) 3 master pods responsible for cluster management.
- 2) 3 data pods for storing log data.
- 3) 2 client pods for external access.
-
-The current Fluentd podspec collects `kubelet` journal logs and performs cluster-level logging by reading from `/var/log/containers` for pods running on the Kubernetes cluster.
-
-#### Note: Make sure you install Elasticsearch while executing this use case. Fluentd and Kibana require a publicly accessible Elasticsearch endpoint.
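-
-Before moving on to verification, it can help to confirm that the Elasticsearch pods and client service are up. A quick sketch, assuming the labels and service name used by the specs in this folder (`component=elasticsearch` and the `elasticsearch` client service):
-
-```
-# All master, data and client pods should be Running (3 + 3 + 2 = 8 pods).
-kubectl get pods -l component=elasticsearch
-# The client service exposes the HTTP endpoint on port 9200.
-kubectl get svc elasticsearch
-```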
- -## Verify Elasticsearch installation - -``` -curl 'http://10.105.105.41:9200' -{ - "name" : "es-client-2155074821-nxdkt", - "cluster_name" : "escluster", - "cluster_uuid" : "zAYA9ERGQgCEclvYHCsOsA", - "version" : { - "number" : "5.5.0", - "build_hash" : "260387d", - "build_date" : "2017-06-30T23:16:05.735Z", - "build_snapshot" : false, - "lucene_version" : "6.6.0" - }, - "tagline" : "You Know, for Search" -} - -curl 'http://10.105.105.41:9200/_cat/nodes?v' -ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name -10.44.0.2 41 41 0 0.00 0.03 0.08 m - es-master-2996564765-4c56v -10.36.0.1 43 18 0 0.07 0.05 0.05 i - es-client-2155074821-v0w31 -10.40.0.2 49 15 0 0.05 0.07 0.11 m * es-master-2996564765-zj0gc -10.47.0.3 43 20 0 0.13 0.11 0.13 i - es-client-2155074821-nxdkt -10.47.0.4 42 20 0 0.13 0.11 0.13 d - elasticsearch-data-2 -10.47.0.2 39 20 0 0.13 0.11 0.13 m - es-master-2996564765-rql6m -10.42.0.2 41 13 0 0.00 0.04 0.10 d - elasticsearch-data-1 -10.40.0.3 42 15 0 0.05 0.07 0.11 d - elasticsearch-data-0 - -curl -XPUT 'http://10.105.105.41:9200/customer?pretty&pretty' -{ - "acknowledged" : true, - "shards_acknowledged" : true -} - -curl -XGET 'http://10.105.105.41:9200/_cat/indices?v&pretty' -health status index uuid pri rep docs.count docs.deleted store.size pri.store.size -green open customer -Cort549Sn6q4gmbwicOMA 5 1 0 0 1.5kb 810b - -curl -XPUT 'http://10.105.105.41:9200/customer/external/1?pretty&pretty' -H 'Content-Type: application/json' -d' -{ -"name": "Daenerys Targaryen" -} -' -{ - "_index" : "customer", - "_type" : "external", - "_id" : "1", - "_version" : 1, - "result" : "created", - "_shards" : { - "total" : 2, - "successful" : 2, - "failed" : 0 - }, - "created" : true -} - -curl 'http://10.105.105.41:9200/customer/external/1?pretty&pretty' -{ - "_index" : "customer", - "_type" : "external", - "_id" : "1", - "_version" : 1, - "found" : true, - "_source" : { - "name" : "Daenerys Targaryen" - } -} -``` diff --git a/k8s/demo/efk/es/es-client-rc.yaml b/k8s/demo/efk/es/es-client-rc.yaml deleted file mode 100644 index 672fbc8556..0000000000 --- a/k8s/demo/efk/es/es-client-rc.yaml +++ /dev/null @@ -1,68 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: es-client - labels: - component: elasticsearch - role: client -spec: - replicas: 2 - selector: - matchLabels: - component: elasticsearch - template: - metadata: - labels: - component: elasticsearch - role: client - spec: - initContainers: - - name: init-sysctl - image: busybox - imagePullPolicy: IfNotPresent - command: ["sysctl", "-w", "vm.max_map_count=262144"] - securityContext: - privileged: true - containers: - - name: es-client - securityContext: - privileged: false - capabilities: - add: - - IPC_LOCK - - SYS_RESOURCE - image: quay.io/pires/docker-elasticsearch-kubernetes:5.5.0 - imagePullPolicy: Always - env: - - name: NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - - name: NODE_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: "CLUSTER_NAME" - value: "myesdb" - - name: NODE_MASTER - value: "false" - - name: NODE_DATA - value: "false" - - name: HTTP_ENABLE - value: "true" - - name: "ES_JAVA_OPTS" - value: "-Xms256m -Xmx256m" - ports: - - containerPort: 9200 - name: http - protocol: TCP - - containerPort: 9300 - name: transport - protocol: TCP - volumeMounts: - - name: storage - mountPath: /data - volumes: - - emptyDir: - medium: "" - name: "storage" diff --git a/k8s/demo/efk/es/es-client-svc.yaml b/k8s/demo/efk/es/es-client-svc.yaml deleted file
mode 100644 index 5edb54e007..0000000000 --- a/k8s/demo/efk/es/es-client-svc.yaml +++ /dev/null @@ -1,16 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: elasticsearch - labels: - component: elasticsearch - role: client -spec: - type: LoadBalancer - selector: - component: elasticsearch - role: client - ports: - - name: http - port: 9200 - protocol: TCP \ No newline at end of file diff --git a/k8s/demo/efk/es/es-data-sts.yaml b/k8s/demo/efk/es/es-data-sts.yaml deleted file mode 100644 index b7691c903a..0000000000 --- a/k8s/demo/efk/es/es-data-sts.yaml +++ /dev/null @@ -1,71 +0,0 @@ -apiVersion: apps/v1 -kind: StatefulSet -metadata: - name: elasticsearch-data - labels: - component: elasticsearch - role: data -spec: - serviceName: elasticsearch-data - replicas: 3 - selector: - matchLabels: - component: elasticsearch - template: - metadata: - labels: - component: elasticsearch - role: data - spec: - initContainers: - - name: init-sysctl - image: busybox - imagePullPolicy: IfNotPresent - command: ["sysctl", "-w", "vm.max_map_count=262144"] - securityContext: - privileged: true - containers: - - name: elasticsearch-data-pod - securityContext: - privileged: true - capabilities: - add: - - IPC_LOCK - image: quay.io/pires/docker-elasticsearch-kubernetes:5.5.0 - imagePullPolicy: Always - env: - - name: NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - - name: NODE_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: "CLUSTER_NAME" - value: "myesdb" - - name: NODE_MASTER - value: "false" - - name: NODE_INGEST - value: "false" - - name: HTTP_ENABLE - value: "false" - - name: "ES_JAVA_OPTS" - value: "-Xms256m -Xmx256m" - ports: - - containerPort: 9300 - name: transport - protocol: TCP - volumeMounts: - - name: openebs-es-data - mountPath: /data - volumeClaimTemplates: - - metadata: - name: openebs-es-data - annotations: - volume.beta.kubernetes.io/storage-class: openebs-jiva-default - spec: - accessModes: [ "ReadWriteOnce" ] - resources: - requests: - storage: 5G diff --git a/k8s/demo/efk/es/es-data-svc.yaml b/k8s/demo/efk/es/es-data-svc.yaml deleted file mode 100644 index 7c49d5f403..0000000000 --- a/k8s/demo/efk/es/es-data-svc.yaml +++ /dev/null @@ -1,16 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: elasticsearch-data - labels: - component: elasticsearch - role: data -spec: - clusterIP: None - selector: - component: elasticsearch - role: data - ports: - - name: transport - port: 9300 - protocol: TCP diff --git a/k8s/demo/efk/es/es-master-rc.yaml b/k8s/demo/efk/es/es-master-rc.yaml deleted file mode 100644 index 5b80b6d876..0000000000 --- a/k8s/demo/efk/es/es-master-rc.yaml +++ /dev/null @@ -1,69 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: es-master - labels: - component: elasticsearch - role: master -spec: - replicas: 3 - selector: - matchLabels: - component: elasticsearch - template: - metadata: - labels: - component: elasticsearch - role: master - spec: - initContainers: - - name: init-sysctl - image: busybox - imagePullPolicy: IfNotPresent - command: ["sysctl", "-w", "vm.max_map_count=262144"] - securityContext: - privileged: true - containers: - - name: es-master - securityContext: - privileged: false - capabilities: - add: - - IPC_LOCK - - SYS_RESOURCE - image: quay.io/pires/docker-elasticsearch-kubernetes:5.5.0 - imagePullPolicy: Always - env: - - name: NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - - name: NODE_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: "CLUSTER_NAME" - 
value: "myesdb" - - name: "NUMBER_OF_MASTERS" - value: "2" - - name: NODE_MASTER - value: "true" - - name: NODE_INGEST - value: "false" - - name: NODE_DATA - value: "false" - - name: HTTP_ENABLE - value: "false" - - name: "ES_JAVA_OPTS" - value: "-Xms256m -Xmx256m" - ports: - - containerPort: 9300 - name: transport - protocol: TCP - volumeMounts: - - name: storage - mountPath: /data - volumes: - - emptyDir: - medium: "" - name: "storage" diff --git a/k8s/demo/efk/es/es-master-svc.yaml b/k8s/demo/efk/es/es-master-svc.yaml deleted file mode 100644 index f4c22c7a10..0000000000 --- a/k8s/demo/efk/es/es-master-svc.yaml +++ /dev/null @@ -1,15 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: elasticsearch-discovery - labels: - component: elasticsearch - role: master -spec: - selector: - component: elasticsearch - role: master - ports: - - name: transport - port: 9300 - protocol: TCP \ No newline at end of file diff --git a/k8s/demo/efk/fluentd/fluentd-ds.yaml b/k8s/demo/efk/fluentd/fluentd-ds.yaml deleted file mode 100644 index 5765283c8d..0000000000 --- a/k8s/demo/efk/fluentd/fluentd-ds.yaml +++ /dev/null @@ -1,91 +0,0 @@ -apiVersion: v1 -kind: ServiceAccount -metadata: - name: fluentd - namespace: kube-system - labels: - app: fluentd ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: fluentd - labels: - app: fluentd -rules: -- apiGroups: - - "" - resources: - - pods - - namespaces - verbs: - - get - - list - - watch ---- -kind: ClusterRoleBinding -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: fluentd -roleRef: - kind: ClusterRole - name: fluentd - apiGroup: rbac.authorization.k8s.io -subjects: -- kind: ServiceAccount - name: fluentd - namespace: kube-system ---- -apiVersion: apps/v1 -kind: DaemonSet -metadata: - name: fluentd - namespace: kube-system - labels: - app: fluentd -spec: - selector: - matchLabels: - app: fluentd - template: - metadata: - labels: - app: fluentd - spec: - serviceAccount: fluentd - serviceAccountName: fluentd - tolerations: - - key: node-role.kubernetes.io/master - effect: NoSchedule - containers: - - name: fluentd - image: fluent/fluentd-kubernetes-daemonset:v0.12-debian-elasticsearch - env: - - name: FLUENT_ELASTICSEARCH_HOST - value: "elasticsearch.kube-logging.svc.cluster.local" - - name: FLUENT_ELASTICSEARCH_PORT - value: "9200" - - name: FLUENT_ELASTICSEARCH_SCHEME - value: "http" - - name: FLUENT_UID - value: "0" - resources: - limits: - memory: 512Mi - requests: - cpu: 100m - memory: 200Mi - volumeMounts: - - name: varlog - mountPath: /var/log - - name: varlibdockercontainers - mountPath: /var/lib/docker/containers - readOnly: true - terminationGracePeriodSeconds: 30 - volumes: - - name: varlog - hostPath: - path: /var/log - - name: varlibdockercontainers - hostPath: - path: /var/lib/docker/containers diff --git a/k8s/demo/efk/kibana/kibana-rc.yaml b/k8s/demo/efk/kibana/kibana-rc.yaml deleted file mode 100644 index d2c24860c6..0000000000 --- a/k8s/demo/efk/kibana/kibana-rc.yaml +++ /dev/null @@ -1,43 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: kibana - namespace: default - labels: - component: kibana -spec: - replicas: 1 - selector: - matchLabels: - component: kibana - template: - metadata: - labels: - component: kibana - spec: - containers: - - name: kibana - image: cfontes/kibana-xpack-less:5.5.0 - env: - - name: "CLUSTER_NAME" - value: "escluster" - - name: XPACK_SECURITY_ENABLED - value: 'false' - - name: XPACK_GRAPH_ENABLED - value: 'false' - - name: XPACK_ML_ENABLED - value: 
'false' - - name: XPACK_REPORTING_ENABLED - value: 'false' -# Important that the IP address is changed to the co-ordinator node clusterIP. This will change for each setup. - - name: ELASTICSEARCH_URL - value: 'http://127.0.0.1:9200' - resources: - limits: - cpu: 1000m - requests: - cpu: 100m - ports: - - containerPort: 5601 - name: kibana - protocol: TCP diff --git a/k8s/demo/efk/kibana/kibana-svc.yaml b/k8s/demo/efk/kibana/kibana-svc.yaml deleted file mode 100644 index 66a7cf4f57..0000000000 --- a/k8s/demo/efk/kibana/kibana-svc.yaml +++ /dev/null @@ -1,16 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: kibana - namespace: default - labels: - component: kibana -spec: - selector: - component: kibana - type: LoadBalancer - ports: - - name: http - port: 80 - targetPort: 5601 - protocol: TCP diff --git a/k8s/demo/fio/demo-cstor-limits-sc.yaml b/k8s/demo/fio/demo-cstor-limits-sc.yaml deleted file mode 100644 index af7b2dfd17..0000000000 --- a/k8s/demo/fio/demo-cstor-limits-sc.yaml +++ /dev/null @@ -1,23 +0,0 @@ ---- -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: openebs-cstor-sparse-limits - annotations: - openebs.io/cas-type: cstor - cas.openebs.io/config: | - - name: StoragePoolClaim - value: "cstor-sparse-pool" - - name: TargetResourceRequests - value: |- - memory: 0.5Gi - cpu: 100m - - name: TargetResourceLimits - value: |- - memory: 1Gi - - name: AuxResourceLimits - value: |- - memory: 0.5Gi - cpu: 50m -provisioner: openebs.io/provisioner-iscsi ---- diff --git a/k8s/demo/fio/demo-cstor-sparse-pool-limits.yaml b/k8s/demo/fio/demo-cstor-sparse-pool-limits.yaml deleted file mode 100644 index 6fbb39cbd6..0000000000 --- a/k8s/demo/fio/demo-cstor-sparse-pool-limits.yaml +++ /dev/null @@ -1,26 +0,0 @@ ---- -apiVersion: openebs.io/v1alpha1 -kind: StoragePoolClaim -metadata: - name: cstor-sparse-pool - annotations: - cas.openebs.io/config: | - - name: PoolResourceRequests - value: |- - memory: 1Gi - cpu: 100m - - name: PoolResourceLimits - value: |- - memory: 2Gi - - name: AuxResourceLimits - value: |- - memory: 0.5Gi - cpu: 50m -spec: - name: cstor-sparse-pool - type: sparse - maxPools: 3 - poolSpec: - poolType: striped - cacheFile: /tmp/cstor-sparse-pool.cache - overProvisioning: false diff --git a/k8s/demo/fio/demo-fio-cstor-sparse.yaml b/k8s/demo/fio/demo-fio-cstor-sparse.yaml deleted file mode 100644 index c991d535c4..0000000000 --- a/k8s/demo/fio/demo-fio-cstor-sparse.yaml +++ /dev/null @@ -1,37 +0,0 @@ ---- -apiVersion: v1 -kind: Pod -metadata: - name: fio-cstor-sparse - labels: - name: fio-cstor-sparse -spec: - containers: - - resources: - limits: - cpu: 0.5 - name: fio-cstor-sparse - image: openebs/tests-fio - command: ["/bin/bash"] - args: ["-c", "./fio_runner.sh --template file/basic-rw --size 256m --duration 36000; exit 0"] - tty: true - volumeMounts: - - mountPath: /datadir - name: datavol - volumes: - - name: datavol - persistentVolumeClaim: - claimName: fio-cstor-sparse-claim ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: fio-cstor-sparse-claim -spec: - storageClassName: openebs-cstor-sparse - accessModes: - - ReadWriteOnce - resources: - requests: - storage: "4G" - diff --git a/k8s/demo/fio/demo-fio-jiva-1r.yaml b/k8s/demo/fio/demo-fio-jiva-1r.yaml deleted file mode 100644 index 63867d1892..0000000000 --- a/k8s/demo/fio/demo-fio-jiva-1r.yaml +++ /dev/null @@ -1,36 +0,0 @@ -apiVersion: v1 -kind: Pod -metadata: - name: fio-jiva-1r - labels: - name: fio-jiva -spec: - containers: - - resources: - limits: - cpu: 0.5 - name: fio-jiva 
- image: openebs/tests-fio - command: ["/bin/bash"] - args: ["-c", "./fio_runner.sh --template file/basic-rw --size 256m --duration 6000; exit 0"] - tty: true - volumeMounts: - - mountPath: /datadir - name: datavol - volumes: - - name: datavol - persistentVolumeClaim: - claimName: fio-jiva-1r-claim ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: fio-jiva-1r-claim -spec: - storageClassName: openebs-jiva-fio-1r - accessModes: - - ReadWriteOnce - resources: - requests: - storage: "4G" - diff --git a/k8s/demo/fio/demo-fio-jiva-taa.yaml b/k8s/demo/fio/demo-fio-jiva-taa.yaml deleted file mode 100644 index 20292bead1..0000000000 --- a/k8s/demo/fio/demo-fio-jiva-taa.yaml +++ /dev/null @@ -1,39 +0,0 @@ -apiVersion: v1 -kind: Pod -metadata: - name: fio-jiva - labels: - name: fio-jiva - openebs.io/target-affinity: fio-jiva -spec: - containers: - - resources: - limits: - cpu: 0.5 - name: fio-jiva - image: openebs/tests-fio - command: ["/bin/bash"] - args: ["-c", "./fio_runner.sh --template file/basic-rw --size 256m --duration 6000; exit 0"] - tty: true - volumeMounts: - - mountPath: /datadir - name: datavol - volumes: - - name: datavol - persistentVolumeClaim: - claimName: fio-jiva-claim ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: fio-jiva-claim - labels: - openebs.io/target-affinity: fio-jiva -spec: - storageClassName: openebs-jiva-fio-taa - accessModes: - - ReadWriteOnce - resources: - requests: - storage: "4G" - diff --git a/k8s/demo/fio/demo-fio-jiva.yaml b/k8s/demo/fio/demo-fio-jiva.yaml deleted file mode 100644 index 860b4a9773..0000000000 --- a/k8s/demo/fio/demo-fio-jiva.yaml +++ /dev/null @@ -1,36 +0,0 @@ -apiVersion: v1 -kind: Pod -metadata: - name: fio-jiva - labels: - name: fio-jiva -spec: - containers: - - resources: - limits: - cpu: 0.5 - name: fio-jiva - image: openebs/tests-fio - command: ["/bin/bash"] - args: ["-c", "./fio_runner.sh --template file/basic-rw --size 1024m --duration 6000; exit 0"] - tty: true - volumeMounts: - - mountPath: /datadir - name: datavol - volumes: - - name: datavol - persistentVolumeClaim: - claimName: fio-jiva-claim ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: fio-jiva-claim -spec: - storageClassName: openebs-jiva-default - accessModes: - - ReadWriteOnce - resources: - requests: - storage: "4G" - diff --git a/k8s/demo/fio/demo-jiva-1r-sc.yaml b/k8s/demo/fio/demo-jiva-1r-sc.yaml deleted file mode 100644 index 70dfe2de2a..0000000000 --- a/k8s/demo/fio/demo-jiva-1r-sc.yaml +++ /dev/null @@ -1,12 +0,0 @@ ---- -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: openebs-jiva-fio-1r - annotations: - openebs.io/cas-type: jiva - cas.openebs.io/config: | - - name: ReplicaCount - value: "1" -provisioner: openebs.io/provisioner-iscsi ---- diff --git a/k8s/demo/fio/demo-jiva-3r-limits-sc.yaml b/k8s/demo/fio/demo-jiva-3r-limits-sc.yaml deleted file mode 100644 index 4747ca1b09..0000000000 --- a/k8s/demo/fio/demo-jiva-3r-limits-sc.yaml +++ /dev/null @@ -1,29 +0,0 @@ ---- -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: openebs-jiva-limits - annotations: - openebs.io/cas-type: jiva - cas.openebs.io/config: | - - name: TargetResourceRequests - value: |- - memory: 0.5Gi - cpu: 100m - - name: TargetResourceLimits - value: |- - memory: 1Gi - - name: ReplicaResourceRequests - value: |- - memory: 0.5Gi - cpu: 100m - - name: ReplicaResourceLimits - value: |- - memory: 2Gi - cpu: 200m - - name: AuxResourceLimits - value: |- - memory: 0.5Gi - cpu: 50m -provisioner: 
openebs.io/provisioner-iscsi ---- diff --git a/k8s/demo/fio/node-exporter.yaml b/k8s/demo/fio/node-exporter.yaml deleted file mode 100644 index 53a1756ceb..0000000000 --- a/k8s/demo/fio/node-exporter.yaml +++ /dev/null @@ -1,77 +0,0 @@ -# node-exporter will be launched as a DaemonSet. -apiVersion: extensions/v1beta1 -kind: DaemonSet -metadata: - name: node-exporter -spec: - template: - metadata: - labels: - app: node-exporter - name: node-exporter - spec: - containers: - - image: prom/node-exporter:v0.16.0 - name: node-exporter - ports: - - containerPort: 9100 - hostPort: 9100 - name: scrape - resources: - requests: - # A memory request of 128M means it will try to ensure a minimum of - # 128MB RAM. - memory: "128M" - # A cpu request of 128m means it will try to ensure a minimum of - # .128 CPU; where 1 CPU means: - # 1 *Hyperthread* on a bare-metal Intel processor with Hyperthreading - cpu: "128m" - limits: - memory: "700M" - cpu: "500m" - volumeMounts: - # All the application data is stored in data-disk - - name: data-disk - mountPath: /data-disk - readOnly: true - # Root disk is where the OS (node) is installed - - name: root-disk - mountPath: /root-disk - readOnly: true - # The Kubernetes scheduler’s default behavior works well for most cases - # -- for example, it ensures that pods are only placed on nodes that have - # sufficient free resources, it tries to spread pods from the same set - # (ReplicaSet, StatefulSet, etc.) across nodes, it tries to balance out - # the resource utilization of nodes, etc. - # - # But sometimes you want to control how your pods are scheduled. For example, - # perhaps you want to ensure that certain pods only schedule on nodes with - # specialized hardware, or you want to co-locate services that communicate - # frequently, or you want to dedicate a set of nodes to a particular set of - # users. Ultimately, you know much more about how your applications should be - # scheduled and deployed than Kubernetes ever will. - # - # “Taints and tolerations” allow you to mark (“taint”) a node so that no - # pods can schedule onto it unless a pod explicitly “tolerates” the taint. - # A toleration is particularly useful for situations where most pods in - # the cluster should avoid scheduling onto the node. In our case we want - # node-exporter to run on the master node as well, i.e., we want to collect metrics - # from the master node. That's why tolerations are added; - # if removed, the master node's metrics can't be scraped by Prometheus. - tolerations: - - effect: NoSchedule - operator: Exists - volumes: - # A hostPath volume mounts a file or directory from the host node’s - # filesystem. For example, some uses for a hostPath are: - # running a container that needs access to Docker internals; use a hostPath - # of /var/lib/docker - # running cAdvisor in a container; use a hostPath of /dev/cgroups - - name: data-disk - hostPath: - path: /localdata - - name: root-disk - hostPath: - path: / - hostNetwork: true - hostPID: true diff --git a/k8s/demo/galera-xtradb-cluster/deployments/README.md b/k8s/demo/galera-xtradb-cluster/deployments/README.md deleted file mode 100644 index 24fc67c7d2..0000000000 --- a/k8s/demo/galera-xtradb-cluster/deployments/README.md +++ /dev/null @@ -1,265 +0,0 @@ -# Running Percona Galera Cluster with OpenEBS - -This tutorial provides detailed instructions to perform the following tasks: - -- Run a 3-node Percona Galera cluster with OpenEBS storage in a Kubernetes environment -- Test the data replication across the Percona XtraDB MySQL instances.
- -## Galera Cluster - -Percona XtraDB Cluster is an active/active high availability and high scalability open source solution for MySQL clustering. -It integrates Percona Server and Percona XtraBackup with the Codership Galera library of MySQL high availability solutions in -a single package. This folder consists of the k8s deployment specification YAMLs to set up the Galera cluster. These include: - -- A cluster service YAML which can be used for client connections (pxc-cluster) -- The node deployment and service specification YAMLs to set up a 3-node replication cluster (pxc-node) - -The image used in these pods is ```capttofu/percona_xtradb_cluster_5_6:beta```. When the deployment is created, the following -activities occur in the given order. - -- Start the Percona XtraDB containers -- Run an entrypoint script that: - - Installs the MySQL system tables - - Sets up users - - Builds up a list of servers that is used with the Galera parameter wsrep_cluster_address. - This is a list of running nodes that Galera uses for election of a node to obtain SST (State Snapshot Transfer). - -## Prerequisite - -A fully configured multi-node Kubernetes cluster configured with the OpenEBS operator and OpenEBS storage classes. -For instructions on applying the OpenEBS operator and recommended system configuration, refer to the Prerequisites section, -Step 1 and Step 2 of the mongodb [README](https://github.com/openebs/openebs/tree/master/k8s/demo/mongodb). - -## Deploy the Percona Galera Cluster with OpenEBS storage - -The deployment specification YAMLs are available at OpenEBS/k8s/demo/galera-xtradb-cluster/deployments. -Execute the following commands in the given order: - -``` -test@Master:~/openebs$ cd k8s/demo/galera-xtradb-cluster/ -test@Master:~/openebs/k8s/demo/galera-xtradb-cluster$ ls -ltr -total 16 --rw-rw-r-- 1 test test 1802 Oct 30 17:44 pxc-node3.yaml --rw-rw-r-- 1 test test 1802 Oct 30 17:44 pxc-node2.yaml --rw-rw-r-- 1 test test 1797 Oct 30 17:44 pxc-node1.yaml --rw-rw-r-- 1 test test 174 Oct 30 17:44 pxc-cluster-service.yaml -``` - -``` -test@Master:~/openebs/k8s/demo/galera-xtradb-cluster$ kubectl apply -f pxc-cluster-service.yaml -service "pxc-cluster" created -test@Master:~/openebs/k8s/demo/galera-xtradb-cluster$ - -``` - -``` -test@Master:~/openebs/k8s/demo/galera-xtradb-cluster$ kubectl apply -f pxc-node1.yaml -service "pxc-node1" created -deployment "pxc-node1" created -persistentvolumeclaim "datadir-claim-1" created -``` - -Wait until the pxc-node1 YAML is processed. Repeat the step with pxc-node2 and pxc-node3 YAMLs.
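This apply-and-wait ordering can also be scripted. A minimal sketch (not part of the original README), assuming a kubectl version recent enough to support `kubectl wait`; the selector relies on the `node:` pod labels set in the pxc deployment specs below:

```bash
# Apply each node spec and block until its pod is Ready before
# moving on to the next one, mirroring the manual steps above.
for node in pxc-node1 pxc-node2 pxc-node3; do
  kubectl apply -f ${node}.yaml
  kubectl wait --for=condition=Ready pod -l node=${node} --timeout=600s
done
```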
- -Verify that all the replicas are up and running: - -``` -test@Master:~/galera-deployment$ kubectl get pods -NAME READY STATUS RESTARTS AGE -maya-apiserver-2245240594-r7mj7 1/1 Running 0 2d -openebs-provisioner-4230626287-nr6h4 1/1 Running 0 2d -pvc-235b15a5-bd1f-11e7-9be8-000c298ff5fc-ctrl-2525793473-nv88z 1/1 Running 0 12m -pvc-235b15a5-bd1f-11e7-9be8-000c298ff5fc-rep-144100677-2rm8b 1/1 Running 0 12m -pvc-235b15a5-bd1f-11e7-9be8-000c298ff5fc-rep-144100677-gmn51 1/1 Running 0 12m -pvc-82885a9c-bd1e-11e7-9be8-000c298ff5fc-ctrl-2555717164-sspqn 1/1 Running 0 16m -pvc-82885a9c-bd1e-11e7-9be8-000c298ff5fc-rep-3501200001-p9778 1/1 Running 0 16m -pvc-82885a9c-bd1e-11e7-9be8-000c298ff5fc-rep-3501200001-x3nxs 1/1 Running 0 16m -pvc-cc94c5eb-bd1f-11e7-9be8-000c298ff5fc-ctrl-15702123-tn460 1/1 Running 0 7m -pvc-cc94c5eb-bd1f-11e7-9be8-000c298ff5fc-rep-4137665767-0lhjb 1/1 Running 0 7m -pvc-cc94c5eb-bd1f-11e7-9be8-000c298ff5fc-rep-4137665767-h8r6j 1/1 Running 0 7m -pxc-node1-2984138107-zjf22 1/1 Running 0 16m -pxc-node2-1007987438-q831l 1/1 Running 0 12m -pxc-node3-82203929-mh5p9 1/1 Running 0 7m -``` - -## Deployment Guidelines - -- OpenEBS recommends creating the Galera cluster with at least 3 nodes/replicas. Go to the following URL for details: -https://www.percona.com/blog/2015/06/23/percona-xtradb-cluster-pxc-how-many-nodes-do-you-need/. - -- It is important to deploy the service/pod for the primary node first and wait for it to be processed before starting the -secondary/other nodes. Deploying all YAMLs together can cause the pods to restart repeatedly. The reason stated in the Kubernetes -documentation is: - - *If there is a node in wsrep_cluster_address without a backing galera node there will be nothing to obtain SST from which - will cause the node to shut itself down and the container in question to exit and relaunch.* - - -## Test Replication in the Galera Cluster - -- Check the replication cluster size on any of the nodes. - -``` -mysql> show status like 'wsrep_cluster_size'; -+--------------------+-------+ -| Variable_name | Value | -+--------------------+-------+ -| wsrep_cluster_size | 3 | -+--------------------+-------+ -1 row in set (0.01 sec) -``` - -- On pxc-node1, create a test database with some content. - -``` -test@Master:~/galera-deployment$ kubectl exec -it pxc-node1-2984138107-zjf22 /bin/bash -root@pxc-node1-2984138107-zjf22:/# mysql -uroot -p -h pxc-cluster -Enter password: -Welcome to the MySQL monitor. Commands end with ; or \g. -Your MySQL connection id is 5 -Server version: 5.6.24-72.2-56-log Percona XtraDB Cluster (GPL), Release rel72.2, Revision 43abf03, WSREP version 25.11, wsrep_25.11 - -Copyright (c) 2009-2015 Percona LLC and/or its affiliates -Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved. - -Oracle is a registered trademark of Oracle Corporation and/or its -affiliates. Other names may be trademarks of their respective -owners. - -Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
- -mysql> create database testdb; -Query OK, 1 row affected (0.10 sec) - -mysql> use testdb; -Database changed - -mysql> CREATE TABLE Hardware (Name VARCHAR(20),HWtype VARCHAR(20),Model VARCHAR(20)); -Query OK, 0 rows affected (0.11 sec) - -mysql> INSERT INTO Hardware (Name,HWtype,Model) VALUES ('TestBox','Server','DellR820'); -Query OK, 1 row affected (0.06 sec) - -mysql> select * from Hardware; -+---------+--------+----------+ -| Name | HWtype | Model | -+---------+--------+----------+ -| TestBox | Server | DellR820 | -+---------+--------+----------+ -1 row in set (0.00 sec) - -mysql> exit -Bye -``` - -- Verify that this data is synchronized on the other nodes, for example, node2. - -``` -test@Master:~/galera-deployment$ kubectl exec -it pxc-node2-1007987438-q831l /bin/bash -root@pxc-node2-1007987438-q831l:/# -root@pxc-node2-1007987438-q831l:/# -root@pxc-node2-1007987438-q831l:/# mysql -uroot -p -h pxc-cluster -Enter password: -Welcome to the MySQL monitor. Commands end with ; or \g. -Your MySQL connection id is 4 -Server version: 5.6.24-72.2-56-log Percona XtraDB Cluster (GPL), Release rel72.2, Revision 43abf03, WSREP version 25.11, wsrep_25.11 - -Copyright (c) 2009-2015 Percona LLC and/or its affiliates -Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved. - -Oracle is a registered trademark of Oracle Corporation and/or its -affiliates. Other names may be trademarks of their respective -owners. - -Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. - -mysql> show databases; -+--------------------+ -| Database | -+--------------------+ -| information_schema | -| mysql | -| performance_schema | -| test | -| testdb | -+--------------------+ -5 rows in set (0.00 sec) -mysql> use testdb; -Database changed -mysql> show tables; -+------------------+ -| Tables_in_testdb | -+------------------+ -| Hardware | -+------------------+ -1 row in set (0.00 sec) - -mysql> select * from Hardware; -+---------+--------+----------+ -| Name | HWtype | Model | -+---------+--------+----------+ -| TestBox | Server | DellR820 | -+---------+--------+----------+ -1 row in set (0.00 sec) - -mysql> exit -Bye -``` - -- Verify the multi-master capability of the cluster by writing some additional data into the database from any node other than -node1, for example, node3. - -``` -test@Master:~/galera-deployment$ kubectl exec -it pxc-node3-82203929-mh5p9 /bin/bash -root@pxc-node3-82203929-mh5p9:/# -root@pxc-node3-82203929-mh5p9:/# -root@pxc-node3-82203929-mh5p9:/# mysql -uroot -p -h pxc-cluster; -Enter password: -Welcome to the MySQL monitor. Commands end with ; or \g. -Your MySQL connection id is 6 -Server version: 5.6.24-72.2-56-log Percona XtraDB Cluster (GPL), Release rel72.2, Revision 43abf03, WSREP version 25.11, wsrep_25.11 - -Copyright (c) 2009-2015 Percona LLC and/or its affiliates -Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved. - -Oracle is a registered trademark of Oracle Corporation and/or its -affiliates. Other names may be trademarks of their respective -owners. - -Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
- -mysql> - -mysql> show databases; -+--------------------+ -| Database | -+--------------------+ -| information_schema | -| mysql | -| performance_schema | -| test | -| testdb | -+--------------------+ -5 rows in set (0.00 sec) - -mysql> use testdb; -Reading table information for completion of table and column names -You can turn off this feature to get a quicker startup with -A - -Database changed -mysql> - -mysql> INSERT INTO Hardware (Name,HWtype,Model) VALUES ('ProdBox','Server','DellR720'); -Query OK, 1 row affected (0.03 sec) - -mysql> select * from Hardware; -+---------+--------+----------+ -| Name | HWtype | Model | -+---------+--------+----------+ -| TestBox | Server | DellR820 | -| ProdBox | Server | DellR720 | -+---------+--------+----------+ -2 rows in set (0.00 sec) - -mysql> exit -Bye -``` diff --git a/k8s/demo/galera-xtradb-cluster/deployments/pxc-cluster-service.yaml b/k8s/demo/galera-xtradb-cluster/deployments/pxc-cluster-service.yaml deleted file mode 100644 index f0bfd5e9a8..0000000000 --- a/k8s/demo/galera-xtradb-cluster/deployments/pxc-cluster-service.yaml +++ /dev/null @@ -1,12 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: pxc-cluster - labels: - unit: pxc-cluster -spec: - ports: - - port: 3306 - name: mysql - selector: - unit: pxc-cluster \ No newline at end of file diff --git a/k8s/demo/galera-xtradb-cluster/deployments/pxc-node1.yaml b/k8s/demo/galera-xtradb-cluster/deployments/pxc-node1.yaml deleted file mode 100644 index 2c2dd0d602..0000000000 --- a/k8s/demo/galera-xtradb-cluster/deployments/pxc-node1.yaml +++ /dev/null @@ -1,81 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: pxc-node1 - labels: - node: pxc-node1 -spec: - ports: - - port: 3306 - name: mysql - - port: 4444 - name: state-snapshot-transfer - - port: 4567 - name: replication-traffic - - port: 4568 - name: incremental-state-transfer - selector: - node: pxc-node1 ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: pxc-node1 - labels: - name: pxc-node1 -spec: - replicas: 1 - selector: - matchLabels: - node: pxc-node1 - template: - metadata: - labels: - node: pxc-node1 - unit: pxc-cluster - spec: - containers: - - resources: - limits: - cpu: 0.3 - image: capttofu/percona_xtradb_cluster_5_6:beta - name: pxc-node1 - ports: - - containerPort: 3306 - - containerPort: 4444 - - containerPort: 4567 - - containerPort: 4568 - env: - - name: GALERA_CLUSTER - value: "true" - - name: WSREP_CLUSTER_ADDRESS - value: gcomm:// - - name: WSREP_SST_USER - value: sst - - name: WSREP_SST_PASSWORD - value: sst - - name: MYSQL_USER - value: mysql - - name: MYSQL_PASSWORD - value: mysql - - name: MYSQL_ROOT_PASSWORD - value: c-krit - volumeMounts: - - mountPath: /var/lib - name: datadir - volumes: - - name: datadir - persistentVolumeClaim: - claimName: datadir-claim-1 ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: datadir-claim-1 -spec: - storageClassName: openebs-jiva-default - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 5G diff --git a/k8s/demo/galera-xtradb-cluster/deployments/pxc-node2.yaml b/k8s/demo/galera-xtradb-cluster/deployments/pxc-node2.yaml deleted file mode 100644 index a556a54b32..0000000000 --- a/k8s/demo/galera-xtradb-cluster/deployments/pxc-node2.yaml +++ /dev/null @@ -1,84 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: pxc-node2 - labels: - node: pxc-node2 -spec: - ports: - - port: 3306 - name: mysql - - port: 4444 - name: state-snapshot-transfer - - port: 4567 - name: replication-traffic - - port: 4568 - name: 
incremental-state-transfer - selector: - node: pxc-node2 - ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: pxc-node2 - labels: - name: pxc-node2 -spec: - replicas: 1 - selector: - matchLabels: - node: pxc-node2 - template: - metadata: - labels: - node: pxc-node2 - unit: pxc-cluster - spec: - containers: - - resources: - limits: - cpu: 0.3 - image: capttofu/percona_xtradb_cluster_5_6:beta - name: pxc-node2 - ports: - - containerPort: 3306 - - containerPort: 4444 - - containerPort: 4567 - - containerPort: 4568 - env: - - name: GALERA_CLUSTER - value: "true" - - name: WSREP_CLUSTER_ADDRESS - value: gcomm:// - - name: WSREP_SST_USER - value: sst - - name: WSREP_SST_PASSWORD - value: sst - - name: MYSQL_USER - value: mysql - - name: MYSQL_PASSWORD - value: mysql - - name: MYSQL_ROOT_PASSWORD - value: c-krit - volumeMounts: - - mountPath: /var/lib - name: datadir - volumes: - - name: datadir - persistentVolumeClaim: - claimName: datadir-claim-2 ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: datadir-claim-2 -spec: - storageClassName: openebs-jiva-default - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 5G - - diff --git a/k8s/demo/galera-xtradb-cluster/deployments/pxc-node3.yaml b/k8s/demo/galera-xtradb-cluster/deployments/pxc-node3.yaml deleted file mode 100644 index 6121562598..0000000000 --- a/k8s/demo/galera-xtradb-cluster/deployments/pxc-node3.yaml +++ /dev/null @@ -1,84 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: pxc-node3 - labels: - node: pxc-node3 -spec: - ports: - - port: 3306 - name: mysql - - port: 4444 - name: state-snapshot-transfer - - port: 4567 - name: replication-traffic - - port: 4568 - name: incremental-state-transfer - selector: - node: pxc-node3 - ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: pxc-node3 - labels: - name: pxc-node3 -spec: - replicas: 1 - selector: - matchLabels: - node: pxc-node3 - template: - metadata: - labels: - node: pxc-node3 - unit: pxc-cluster - spec: - containers: - - resources: - limits: - cpu: 0.3 - image: capttofu/percona_xtradb_cluster_5_6:beta - name: pxc-node3 - ports: - - containerPort: 3306 - - containerPort: 4444 - - containerPort: 4567 - - containerPort: 4568 - env: - - name: GALERA_CLUSTER - value: "true" - - name: WSREP_CLUSTER_ADDRESS - value: gcomm:// - - name: WSREP_SST_USER - value: sst - - name: WSREP_SST_PASSWORD - value: sst - - name: MYSQL_USER - value: mysql - - name: MYSQL_PASSWORD - value: mysql - - name: MYSQL_ROOT_PASSWORD - value: c-krit - volumeMounts: - - mountPath: /var/lib - name: datadir - volumes: - - name: datadir - persistentVolumeClaim: - claimName: datadir-claim-3 ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: datadir-claim-3 -spec: - storageClassName: openebs-jiva-default - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 5G - - diff --git a/k8s/demo/jenkins/README.md b/k8s/demo/jenkins/README.md deleted file mode 100644 index 6944680cdd..0000000000 --- a/k8s/demo/jenkins/README.md +++ /dev/null @@ -1,124 +0,0 @@ -# Jenkins - -This document demonstrates the deployment of Jenkins as a pod in a Kubernetes cluster. The user can spawn a Jenkins deployment that will use OpenEBS as its persistent storage. - -## Deploy as a Pod - -Deploying Jenkins as a pod provides the following benefits: - -- Isolates different jobs from one another. -- Quickly cleans a job’s workspace. -- Dynamically deploys or schedules jobs with Kubernetes pods. -- Allows increased resource utilization and efficiency.
- -## Deploy Jenkins Pod with Persistent Storage - -Before getting started, check the status of the cluster: - -```bash -ubuntu@kubemaster:~$ kubectl get nodes -NAME STATUS AGE VERSION -kubemaster Ready 3d v1.7.5 -kubeminion-01 Ready 3d v1.7.5 -kubeminion-02 Ready 3d v1.7.5 - -``` - -Download and apply the Jenkins YAML from the OpenEBS repo: - -```bash -ubuntu@kubemaster:~$ wget https://raw.githubusercontent.com/openebs/openebs/master/k8s/demo/jenkins/jenkins.yml - -ubuntu@kubemaster:~$ kubectl apply -f jenkins.yml -``` - -Get the status of running pods: - -```bash -ubuntu@kubemaster:~$ kubectl get pods -NAME READY STATUS RESTARTS AGE -jenkins-2748455067-85jv2 1/1 Running 0 9m -maya-apiserver-3416621614-r4821 1/1 Running 0 17m -openebs-provisioner-4230626287-7kjt4 1/1 Running 0 17m -pvc-c52aa2d0-bcbc-11e7-a3ad-021c6f7dbe9d-ctrl-1457148150-v6ccz 1/1 Running 0 9m -pvc-c52aa2d0-bcbc-11e7-a3ad-021c6f7dbe9d-rep-2977732037-kqv6f 1/1 Running 0 9m -pvc-c52aa2d0-bcbc-11e7-a3ad-021c6f7dbe9d-rep-2977732037-s6g2s 1/1 Running 0 9m - -``` - -Get the status of the underlying persistent volume being used by the Jenkins deployment: - -```bash -ubuntu@kubemaster:~$ kubectl get pvc -NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE -jenkins-claim Bound pvc-c52aa2d0-bcbc-11e7-a3ad-021c6f7dbe9d 5G RWO openebs-standard 12m - -``` - -Get the status of the Jenkins service: - -```bash -ubuntu@kubemaster:~$ kubectl get svc -NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE -jenkins-svc 10.107.147.241 80:32540/TCP 25m -kubernetes 10.96.0.1 443/TCP 3d -maya-apiserver-service 10.97.14.255 5656/TCP 33m -pvc-c52aa2d0-bcbc-11e7-a3ad-021c6f7dbe9d-ctrl-svc 10.110.186.186 3260/TCP,9501/TCP 25m - -``` - -## Launch Jenkins - -The Jenkins deployment YAML creates a service of type NodePort to make Jenkins available outside the cluster.
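The node IP and NodePort pair that the next steps collect by hand can also be fetched in one go. A minimal sketch (not part of the original README), assuming the `jenkins-svc` service and the `app: jenkins-app` pod label from jenkins.yml below:

```bash
# Compose the Jenkins URL from the scheduling node's IP and the
# dynamically allocated NodePort of jenkins-svc.
NODE_IP=$(kubectl get pods -l app=jenkins-app -o jsonpath='{.items[0].status.hostIP}')
NODE_PORT=$(kubectl get svc jenkins-svc -o jsonpath='{.spec.ports[0].nodePort}')
echo "http://${NODE_IP}:${NODE_PORT}"
```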
- -Get the IP address of the node running the Jenkins pod: - -```bash -ubuntu@kubemaster:~$ kubectl describe pod jenkins-2748455067-85jv2 | grep Node: -Node: kubeminion-02/172.28.128.5 - -``` -Get the port number from the Jenkins service: - -```bash -ubuntu@kubemaster-01:~$ kubectl describe svc jenkins-svc | grep NodePort: -NodePort: 32540/TCP - -``` - -Open the URL below in the browser (Jenkins serves plain HTTP on this port): - -```bash -http://172.28.128.5:32540 - -``` - -_Note: The NodePort is dynamically allocated and may vary in a different deployment._ - -__Provide the _initialAdminPassword___ - -![Jenkins Login] - -Get the password using the command below: - -```bash -ubuntu@kubemaster:~$ kubectl exec -it jenkins-2748455067-85jv2 cat /var/jenkins_home/secrets/initialAdminPassword -7d7aaedb5a2a441b99117b3bb55c1eff -``` - -__Install the Suggested Plugins__ - -![Jenkins Plugins] - -__Configure the Admin User__ - -![Configure User] - -__Start Using Jenkins__ - -![Jenkins Dashboard] - -[Jenkins Login]: images/jenkins_login.png -[Jenkins Plugins]: images/jenkins_plugins.png -[Configure User]: images/jenkins_create_user.png -[Jenkins Dashboard]: images/jenkins_dashboard.png \ No newline at end of file diff --git a/k8s/demo/jenkins/images/jenkins_create_user.png b/k8s/demo/jenkins/images/jenkins_create_user.png deleted file mode 100644 index 5472199357..0000000000 Binary files a/k8s/demo/jenkins/images/jenkins_create_user.png and /dev/null differ diff --git a/k8s/demo/jenkins/images/jenkins_dashboard.png b/k8s/demo/jenkins/images/jenkins_dashboard.png deleted file mode 100644 index 7abec0d434..0000000000 Binary files a/k8s/demo/jenkins/images/jenkins_dashboard.png and /dev/null differ diff --git a/k8s/demo/jenkins/images/jenkins_login.png b/k8s/demo/jenkins/images/jenkins_login.png deleted file mode 100644 index c4143c57ee..0000000000 Binary files a/k8s/demo/jenkins/images/jenkins_login.png and /dev/null differ diff --git a/k8s/demo/jenkins/images/jenkins_plugins.png b/k8s/demo/jenkins/images/jenkins_plugins.png deleted file mode 100644 index 6636466e91..0000000000 Binary files a/k8s/demo/jenkins/images/jenkins_plugins.png and /dev/null differ diff --git a/k8s/demo/jenkins/jenkins.yml b/k8s/demo/jenkins/jenkins.yml deleted file mode 100644 index 1a861b5630..0000000000 --- a/k8s/demo/jenkins/jenkins.yml +++ /dev/null @@ -1,54 +0,0 @@ -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: jenkins-claim - annotations: - volume.beta.kubernetes.io/storage-class: openebs-jiva-default -spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 5G ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: jenkins -spec: - replicas: 1 - selector: - matchLabels: - app: jenkins-app - template: - metadata: - labels: - app: jenkins-app - spec: - securityContext: - fsGroup: 1000 - containers: - - name: jenkins - imagePullPolicy: IfNotPresent - image: jenkins/jenkins:lts - ports: - - containerPort: 8080 - volumeMounts: - - mountPath: /var/jenkins_home - name: jenkins-home - volumes: - - name: jenkins-home - persistentVolumeClaim: - claimName: jenkins-claim ---- -apiVersion: v1 -kind: Service -metadata: - name: jenkins-svc -spec: - ports: - - port: 80 - targetPort: 8080 - selector: - app: jenkins-app - type: NodePort diff --git a/k8s/demo/jupyter/demo-jupyter-openebs.yaml b/k8s/demo/jupyter/demo-jupyter-openebs.yaml deleted file mode 100644 index e6532334b9..0000000000 --- a/k8s/demo/jupyter/demo-jupyter-openebs.yaml +++ /dev/null @@ -1,59 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: jupyter-server -
namespace: default -spec: - replicas: 1 - selector: - matchLabels: - name: jupyter-server - template: - metadata: - labels: - name: jupyter-server - spec: - containers: - - name: jupyter-server - imagePullPolicy: Always - image: satyamz/docker-jupyter:v0.4 - ports: - - containerPort: 8888 - env: - - name: GIT_REPO - value: https://github.com/vharsh/plot-demo.git - volumeMounts: - - name: data-vol - mountPath: /mnt/data - volumes: - - name: data-vol - persistentVolumeClaim: - claimName: jupyter-data-vol-claim ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: jupyter-data-vol-claim -spec: - storageClassName: openebs-jiva-default - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 5G ---- -apiVersion: v1 -kind: Service -metadata: - name: jupyter-service -spec: - ports: - - name: ui - port: 8888 - nodePort: 32424 - protocol: TCP - selector: - name: jupyter-server - sessionAffinity: None - type: NodePort - diff --git a/k8s/demo/kafka/02-namespace.yml b/k8s/demo/kafka/02-namespace.yml deleted file mode 100644 index a6cf001dbb..0000000000 --- a/k8s/demo/kafka/02-namespace.yml +++ /dev/null @@ -1,5 +0,0 @@ ---- -apiVersion: v1 -kind: Namespace -metadata: - name: kafka diff --git a/k8s/demo/kafka/03-zookeeper.yaml b/k8s/demo/kafka/03-zookeeper.yaml deleted file mode 100644 index 60347618c2..0000000000 --- a/k8s/demo/kafka/03-zookeeper.yaml +++ /dev/null @@ -1,161 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: zk-headless - namespace: kafka - labels: - app: zk-headless -spec: - ports: - - port: 2888 - name: server - - port: 3888 - name: leader-election - clusterIP: None - selector: - app: zk ---- -apiVersion: v1 -kind: ConfigMap -metadata: - name: zk-config - namespace: kafka - -data: - ensemble: "zk-0;zk-1;zk-2" - jvm.heap: "2G" - tick: "2000" - init: "10" - sync: "5" - client.cnxns: "60" - snap.retain: "3" - purge.interval: "1" ---- -apiVersion: policy/v1beta1 -kind: PodDisruptionBudget -metadata: - name: zk-budget - namespace: kafka -spec: - selector: - matchLabels: - app: zk - minAvailable: 2 ---- -apiVersion: apps/v1 -kind: StatefulSet -metadata: - name: zk - namespace: kafka -spec: - serviceName: zk-headless - replicas: 3 - selector: - matchLabels: - app: zk - template: - metadata: - labels: - app: zk - annotations: - pod.alpha.kubernetes.io/initialized: "true" - spec: - affinity: - podAntiAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - - labelSelector: - matchExpressions: - - key: "app" - operator: In - values: - - zk-headless - topologyKey: "kubernetes.io/hostname" - containers: - - name: k8szk - imagePullPolicy: Always - image: gcr.io/google_samples/k8szk:v1 - ports: - - containerPort: 2181 - name: client - - containerPort: 2888 - name: server - - containerPort: 3888 - name: leader-election - env: - - name : ZK_ENSEMBLE - valueFrom: - configMapKeyRef: - name: zk-config - key: ensemble - - name : ZK_HEAP_SIZE - valueFrom: - configMapKeyRef: - name: zk-config - key: jvm.heap - - name : ZK_TICK_TIME - valueFrom: - configMapKeyRef: - name: zk-config - key: tick - - name : ZK_INIT_LIMIT - valueFrom: - configMapKeyRef: - name: zk-config - key: init - - name : ZK_SYNC_LIMIT - valueFrom: - configMapKeyRef: - name: zk-config - key: tick - - name : ZK_MAX_CLIENT_CNXNS - valueFrom: - configMapKeyRef: - name: zk-config - key: client.cnxns - - name: ZK_SNAP_RETAIN_COUNT - valueFrom: - configMapKeyRef: - name: zk-config - key: snap.retain - - name: ZK_PURGE_INTERVAL - valueFrom: - configMapKeyRef: - name: zk-config - key: purge.interval - - 
name: ZK_CLIENT_PORT - value: "2181" - - name: ZK_SERVER_PORT - value: "2888" - - name: ZK_ELECTION_PORT - value: "3888" - command: - - sh - - -c - - zkGenConfig.sh && zkServer.sh start-foreground - readinessProbe: - exec: - command: - - "zkOk.sh" - initialDelaySeconds: 15 - timeoutSeconds: 5 - livenessProbe: - exec: - command: - - "zkOk.sh" - initialDelaySeconds: 15 - timeoutSeconds: 5 - volumeMounts: - - name: datadir - mountPath: /var/lib/zookeeper - securityContext: - runAsUser: 1000 - fsGroup: 1000 - volumeClaimTemplates: - - metadata: - name: datadir - spec: - storageClassName: openebs-jiva-default - accessModes: [ "ReadWriteOnce" ] - resources: - requests: - storage: 2G diff --git a/k8s/demo/kafka/04-kafka-config.yml b/k8s/demo/kafka/04-kafka-config.yml deleted file mode 100644 index eaacc02798..0000000000 --- a/k8s/demo/kafka/04-kafka-config.yml +++ /dev/null @@ -1,261 +0,0 @@ -kind: ConfigMap -metadata: - name: broker-config - namespace: kafka -apiVersion: v1 -data: - init.sh: |- - #!/bin/bash - set -x - - KAFKA_BROKER_ID=${HOSTNAME##*-} - sed -i "s/#init#broker.id=#init#/broker.id=$KAFKA_BROKER_ID/" /etc/kafka/server.properties - - hash kubectl 2>/dev/null || { - sed -i "s/#init#broker.rack=#init#/#init#broker.rack=# kubectl not found in path/" /etc/kafka/server.properties - } && { - ZONE=$(kubectl get node "$NODE_NAME" -o=go-template='{{index .metadata.labels "failure-domain.beta.kubernetes.io/zone"}}') - if [ $? -ne 0 ]; then - sed -i "s/#init#broker.rack=#init#/#init#broker.rack=# zone lookup failed, see -c init-config logs/" /etc/kafka/server.properties - elif [ "x$ZONE" == "x" ]; then - sed -i "s/#init#broker.rack=#init#/#init#broker.rack=# zone label not found for node $NODE_NAME/" /etc/kafka/server.properties - else - sed -i "s/#init#broker.rack=#init#/broker.rack=$ZONE/" /etc/kafka/server.properties - fi - } - - server.properties: |- - # Licensed to the Apache Software Foundation (ASF) under one or more - # contributor license agreements. See the NOTICE file distributed with - # this work for additional information regarding copyright ownership. - # The ASF licenses this file to You under the Apache License, Version 2.0 - # (the "License"); you may not use this file except in compliance with - # the License. You may obtain a copy of the License at - # - # http://www.apache.org/licenses/LICENSE-2.0 - # - # Unless required by applicable law or agreed to in writing, software - # distributed under the License is distributed on an "AS IS" BASIS, - # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - # See the License for the specific language governing permissions and - # limitations under the License. - - # see kafka.server.KafkaConfig for additional details and defaults - - ############################# Server Basics ############################# - - # The id of the broker. This must be set to a unique integer for each broker. - #init#broker.id=#init# - - #init#broker.rack=#init# - - # Switch to enable topic deletion or not, default value is false - delete.topic.enable=true - - ############################# Socket Server Settings ############################# - - # The address the socket server listens on. It will get the value returned from - # java.net.InetAddress.getCanonicalHostName() if not configured. - # FORMAT: - # listeners = listener_name://host_name:port - # EXAMPLE: - # listeners = PLAINTEXT://your.host.name:9092 - #listeners=PLAINTEXT://:9092 - - # Hostname and port the broker will advertise to producers and consumers. 
If not set, - # it uses the value for "listeners" if configured. Otherwise, it will use the value - # returned from java.net.InetAddress.getCanonicalHostName(). - #advertised.listeners=PLAINTEXT://your.host.name:9092 - - # Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details - #listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL - - # The number of threads that the server uses for receiving requests from the network and sending responses to the network - num.network.threads=3 - - # The number of threads that the server uses for processing requests, which may include disk I/O - num.io.threads=8 - - # The send buffer (SO_SNDBUF) used by the socket server - socket.send.buffer.bytes=102400 - - # The receive buffer (SO_RCVBUF) used by the socket server - socket.receive.buffer.bytes=102400 - - # The maximum size of a request that the socket server will accept (protection against OOM) - socket.request.max.bytes=104857600 - - - ############################# Log Basics ############################# - - # A comma separated list of directories under which to store log files - log.dirs=/tmp/kafka-logs - - # The default number of log partitions per topic. More partitions allow greater - # parallelism for consumption, but this will also result in more files across - # the brokers. - num.partitions=1 - - # The number of threads per data directory to be used for log recovery at startup and flushing at shutdown. - # This value is recommended to be increased for installations with data dirs located in a RAID array. - num.recovery.threads.per.data.dir=1 - - ############################# Internal Topic Settings ############################# - # The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state" - # For anything other than development testing, a value greater than 1 is recommended, such as 3, to ensure availability. - offsets.topic.replication.factor=1 - transaction.state.log.replication.factor=1 - transaction.state.log.min.isr=1 - - ############################# Log Flush Policy ############################# - - # Messages are immediately written to the filesystem but by default we only fsync() to sync - # the OS cache lazily. The following configurations control the flush of data to disk. - # There are a few important trade-offs here: - # 1. Durability: Unflushed data may be lost if you are not using replication. - # 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush. - # 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks. - # The settings below allow one to configure the flush policy to flush data after a period of time or - # every N messages (or both). This can be done globally and overridden on a per-topic basis. - - # The number of messages to accept before forcing a flush of data to disk - #log.flush.interval.messages=10000 - - # The maximum amount of time a message can sit in a log before we force a flush - #log.flush.interval.ms=1000 - - ############################# Log Retention Policy ############################# - - # The following configurations control the disposal of log segments. The policy can - # be set to delete segments after a period of time, or after a given size has accumulated.
- # A segment will be deleted whenever *either* of these criteria are met. Deletion always happens - # from the end of the log. - - # The minimum age of a log file to be eligible for deletion due to age - log.retention.hours=168 - - # A size-based retention policy for logs. Segments are pruned from the log as long as the remaining - # segments don't drop below log.retention.bytes. Functions independently of log.retention.hours. - #log.retention.bytes=1073741824 - - # The maximum size of a log segment file. When this size is reached a new log segment will be created. - log.segment.bytes=1073741824 - - # The interval at which log segments are checked to see if they can be deleted according - # to the retention policies - log.retention.check.interval.ms=300000 - - ############################# Zookeeper ############################# - - # Zookeeper connection string (see zookeeper docs for details). - # This is a comma separated host:port pairs, each corresponding to a zk - # server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002". - # You can also append an optional chroot string to the urls to specify the - # root directory for all kafka znodes. - zookeeper.connect=zk-0.zk-headless.default.svc.cluster.local:2181,zk-1.zk-headless.default.svc.cluster.local:2181,zk-2.zk-headless.default.svc.cluster.local:2181 - - # Timeout in ms for connecting to zookeeper - zookeeper.connection.timeout.ms=6000 - - - ############################# Group Coordinator Settings ############################# - - # The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance. - # The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms. - # The default value for this is 3 seconds. - # We override this to 0 here as it makes for a better out-of-the-box experience for development and testing. - # However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup. - group.initial.rebalance.delay.ms=0 - - log4j.properties: |- - # Licensed to the Apache Software Foundation (ASF) under one or more - # contributor license agreements. See the NOTICE file distributed with - # this work for additional information regarding copyright ownership. - # The ASF licenses this file to You under the Apache License, Version 2.0 - # (the "License"); you may not use this file except in compliance with - # the License. You may obtain a copy of the License at - # - # http://www.apache.org/licenses/LICENSE-2.0 - # - # Unless required by applicable law or agreed to in writing, software - # distributed under the License is distributed on an "AS IS" BASIS, - # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - # See the License for the specific language governing permissions and - # limitations under the License. 
- - # Unspecified loggers and loggers with additivity=true output to server.log and stdout - # Note that INFO only applies to unspecified loggers, the log level of the child logger is used otherwise - log4j.rootLogger=INFO, stdout - - log4j.appender.stdout=org.apache.log4j.ConsoleAppender - log4j.appender.stdout.layout=org.apache.log4j.PatternLayout - log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n - - log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender - log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH - log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log - log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout - log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n - - log4j.appender.stateChangeAppender=org.apache.log4j.DailyRollingFileAppender - log4j.appender.stateChangeAppender.DatePattern='.'yyyy-MM-dd-HH - log4j.appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.log - log4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout - log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n - - log4j.appender.requestAppender=org.apache.log4j.DailyRollingFileAppender - log4j.appender.requestAppender.DatePattern='.'yyyy-MM-dd-HH - log4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log - log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout - log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n - - log4j.appender.cleanerAppender=org.apache.log4j.DailyRollingFileAppender - log4j.appender.cleanerAppender.DatePattern='.'yyyy-MM-dd-HH - log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log - log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout - log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n - - log4j.appender.controllerAppender=org.apache.log4j.DailyRollingFileAppender - log4j.appender.controllerAppender.DatePattern='.'yyyy-MM-dd-HH - log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log - log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout - log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n - - log4j.appender.authorizerAppender=org.apache.log4j.DailyRollingFileAppender - log4j.appender.authorizerAppender.DatePattern='.'yyyy-MM-dd-HH - log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log - log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout - log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n - - # Change the two lines below to adjust ZK client logging - log4j.logger.org.I0Itec.zkclient.ZkClient=INFO - log4j.logger.org.apache.zookeeper=INFO - - # Change the two lines below to adjust the general broker logging level (output to server.log and stdout) - log4j.logger.kafka=INFO - log4j.logger.org.apache.kafka=INFO - - # Change to DEBUG or TRACE to enable request logging - log4j.logger.kafka.request.logger=WARN, requestAppender - log4j.additivity.kafka.request.logger=false - - # Uncomment the lines below and change log4j.logger.kafka.network.RequestChannel$ to TRACE for additional output - # related to the handling of requests - #log4j.logger.kafka.network.Processor=TRACE, requestAppender - #log4j.logger.kafka.server.KafkaApis=TRACE, requestAppender - #log4j.additivity.kafka.server.KafkaApis=false - log4j.logger.kafka.network.RequestChannel$=WARN, requestAppender - log4j.additivity.kafka.network.RequestChannel$=false - - 
-log4j.logger.kafka.controller=TRACE, controllerAppender
-log4j.additivity.kafka.controller=false
-
-log4j.logger.kafka.log.LogCleaner=INFO, cleanerAppender
-log4j.additivity.kafka.log.LogCleaner=false
-
-log4j.logger.state.change.logger=TRACE, stateChangeAppender
-log4j.additivity.state.change.logger=false
-
-# Change to DEBUG to enable audit log for the authorizer
-log4j.logger.kafka.authorizer.logger=WARN, authorizerAppender
-log4j.additivity.kafka.authorizer.logger=false
diff --git a/k8s/demo/kafka/05-service-kafka.yml b/k8s/demo/kafka/05-service-kafka.yml
deleted file mode 100644
index 30ef68d519..0000000000
--- a/k8s/demo/kafka/05-service-kafka.yml
+++ /dev/null
@@ -1,13 +0,0 @@
-# A headless service to create DNS records
----
-apiVersion: v1
-kind: Service
-metadata:
-  name: broker
-  namespace: kafka
-spec:
-  ports:
-  - port: 9092
-  clusterIP: None
-  selector:
-    app: kafka
diff --git a/k8s/demo/kafka/06-kafka-statefulset.yml b/k8s/demo/kafka/06-kafka-statefulset.yml
deleted file mode 100644
index 9e8484c139..0000000000
--- a/k8s/demo/kafka/06-kafka-statefulset.yml
+++ /dev/null
@@ -1,77 +0,0 @@
-apiVersion: apps/v1
-kind: StatefulSet
-metadata:
-  name: kafka
-  namespace: kafka
-spec:
-  serviceName: "broker"
-  replicas: 3
-  selector:
-    matchLabels:
-      app: kafka
-  template:
-    metadata:
-      labels:
-        app: kafka
-      annotations:
-    spec:
-      terminationGracePeriodSeconds: 30
-      initContainers:
-      - name: init-config
-        image: solsson/kafka-initutils@sha256:c275d681019a0d8f01295dbd4a5bae3cfa945c8d0f7f685ae1f00f2579f08c7d
-        env:
-        - name: NODE_NAME
-          valueFrom:
-            fieldRef:
-              fieldPath: spec.nodeName
-        command: ['/bin/bash', '/etc/kafka/init.sh']
-        volumeMounts:
-        - name: config
-          mountPath: /etc/kafka
-      containers:
-      - name: broker
-        image: solsson/kafka:0.11.0.0@sha256:b27560de08d30ebf96d12e74f80afcaca503ad4ca3103e63b1fd43a2e4c976ce
-        env:
-        - name: KAFKA_LOG4J_OPTS
-          value: -Dlog4j.configuration=file:/etc/kafka/log4j.properties
-        ports:
-        - containerPort: 9092
-        command:
-        - ./bin/kafka-server-start.sh
-        - /etc/kafka/server.properties
-        - --override
-        - zookeeper.connect=zk-0.zk-headless.kafka.svc.cluster.local:2181,zk-1.zk-headless.kafka.svc.cluster.local:2181,zk-2.zk-headless.kafka.svc.cluster.local:2181
-        - --override
-        - log.retention.hours=-1
-        - --override
-        - log.dirs=/var/lib/kafka/data/topics
-        - --override
-        - auto.create.topics.enable=false
-        resources:
-          requests:
-            cpu: 100m
-            memory: 512Mi
-        readinessProbe:
-          exec:
-            command:
-            - /bin/sh
-            - -c
-            - 'echo "" | nc -w 1 127.0.0.1 9092'
-        volumeMounts:
-        - name: config
-          mountPath: /etc/kafka
-        - name: data
-          mountPath: /var/lib/kafka/data
-      volumes:
-      - name: config
-        configMap:
-          name: broker-config
-  volumeClaimTemplates:
-  - metadata:
-      name: data
-    spec:
-      storageClassName: openebs-jiva-default
-      accessModes: [ "ReadWriteOnce" ]
-      resources:
-        requests:
-          storage: 10G
diff --git a/k8s/demo/kafka/README.md b/k8s/demo/kafka/README.md
deleted file mode 100644
index dc60399b59..0000000000
--- a/k8s/demo/kafka/README.md
+++ /dev/null
@@ -1,70 +0,0 @@
-Apply the k8s pod specs in this folder in the order of the numbering prefixed to the YAMLs:
-
-`kubectl apply -f <yaml-file>`
-
-This will create a 3-node zookeeper ensemble and a 3-node Kafka cluster that use OpenEBS volumes.
-
-## Verify Zookeeper
-Verify the Zookeeper ensemble.
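As a quick sanity check before exercising the ensemble, confirm that all pods reached the Running state (a minimal sketch; the manifests above place Kafka in the `kafka` namespace, while the zookeeper pods may live in `default` depending on which specs were applied):

```
kubectl get pods -n kafka
kubectl get pods        # zookeeper pods, if they were created in the default namespace
```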
-
-```
-kubectl exec zk-0 -- /opt/zookeeper/bin/zkCli.sh create /foo bar
-
-WATCHER::
-WatchedEvent state:SyncConnected type:None path:null
-Created /foo
-
-kubectl exec zk-2 -- /opt/zookeeper/bin/zkCli.sh get /foo
-
-WATCHER::
-WatchedEvent state:SyncConnected type:None path:null
-bar
-cZxid = 0x10000004d
-ctime = Tue Aug 08 14:18:11 UTC 2017
-mZxid = 0x10000004d
-mtime = Tue Aug 08 14:18:11 UTC 2017
-pZxid = 0x10000004d
-cversion = 0
-dataVersion = 0
-aclVersion = 0
-ephemeralOwner = 0x0
-dataLength = 3
-numChildren = 0
-```
-
-## Verify Kafka pods
-
-Verify the Kafka cluster running on your Kubernetes cluster by sending messages to it.
-
-```
-kubectl exec -n kafka -it kafka-0 -- bash
-
-bin/kafka-topics.sh --zookeeper zk-headless.kafka.svc.cluster.local:2181 --create --if-not-exists --topic openEBS.t --partitions 3 --replication-factor 3
-
-Created topic "openEBS.t".
-
-bin/kafka-topics.sh --list --zookeeper zk-headless.kafka.svc.cluster.local:2181
-openEBS.t
-
-bin/kafka-topics.sh --describe --zookeeper zk-headless.kafka.svc.cluster.local:2181 --topic openEBS.t
-
-Topic:openEBS.t PartitionCount:3 ReplicationFactor:3 Configs:
-Topic: openEBS.t Partition: 0 Leader: 0 Replicas: 0,1,2 Isr: 0,1,2
-Topic: openEBS.t Partition: 1 Leader: 1 Replicas: 1,2,0 Isr: 1,2,0
-Topic: openEBS.t Partition: 2 Leader: 2 Replicas: 2,0,1 Isr: 2,0,1
-
-bin/kafka-console-producer.sh --broker-list kafka-0.broker.kafka.svc.cluster.local:9092,kafka-1.broker.kafka.svc.cluster.local:9092,kafka-2.broker.kafka.svc.cluster.local:9092 --topic openEBS.t
-
->Hello Kubernetes!
->This is kafka saying Hello!
-```
-
-Consume the messages sent earlier.
-```
-kubectl exec -n kafka -it kafka-1 -- bash
-
-bin/kafka-console-consumer.sh --zookeeper zk-headless.kafka.svc.cluster.local:2181 --topic openEBS.t --from-beginning
-
-Hello Kubernetes!
-This is kafka saying Hello!
-```
diff --git a/k8s/demo/minio/README.md b/k8s/demo/minio/README.md
deleted file mode 100644
index fd99799f2e..0000000000
--- a/k8s/demo/minio/README.md
+++ /dev/null
@@ -1,59 +0,0 @@
- ## STEPS TO SET UP MINIO SERVER WITH OPENEBS STORAGE
-
-- Refer to https://docs.minio.io/ to learn about minio
-
-- Verify that the OpenEBS operator (maya-apiserver, openebs-provisioner) is running on the
-  cluster.
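  One quick way to check (a sketch; per the listings further below, the operator pods in this walkthrough run in the default namespace):

  ```
  kubectl get pods | grep -E 'maya-apiserver|openebs-provisioner'
  ```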
-
-- Apply the minio deployment spec
-
-  ```
-  test@Master:~/minio$ kubectl apply -f minio.yaml
-  deployment "minio-deployment" created
-  persistentvolumeclaim "minio-pv-claim" created
-  service "minio-service" created
-  test@Master:~/minio$
-  ```
-
-- Verify that the minio deployment and service are running successfully with an OpenEBS PV
-
-  ```
-  test@Master:~/minio$ kubectl get pods
-  NAME READY STATUS RESTARTS AGE
-  maya-apiserver-5d5944b47c-fbnjh 1/1 Running 0 4d
-  minio-deployment-5c44fc754-tqmgb 1/1 Running 0 51s
-  openebs-provisioner-6b8df9746c-gj4gk 1/1 Running 0 4d
-  pvc-a99c0535-30ac-11e8-9309-000c298ff5fc-ctrl-6dd84dd9c7-xnmqc 2/2 Running 0 51s
-  pvc-a99c0535-30ac-11e8-9309-000c298ff5fc-rep-844cddb54-cxndb 1/1 Running 0 51s
-  pvc-a99c0535-30ac-11e8-9309-000c298ff5fc-rep-844cddb54-qmpj4 1/1 Running 0 51s
-
-  test@Master:~/minio$ kubectl get svc
-  NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-  kubernetes ClusterIP 10.96.0.1 443/TCP 38d
-  maya-apiserver-service ClusterIP 10.103.166.8 5656/TCP 4d
-  minio-service NodePort 10.109.28.237 9000:32701/TCP 8s
-  pvc-a99c0535-30ac-11e8-9309-000c298ff5fc-ctrl-svc ClusterIP 10.108.61.141 3260/TCP,9501/TCP 8s
-  ```
-
-- Use the minio UI to upload some data to the OpenEBS PV and verify that the operation is successful
-
-  - In a browser, navigate to the IP address of any of the nodes in the cluster, at the exposed port
-    (32701 in the example above), and log in using the default credentials *Access Key: minio, Secret key: minio123*
-
-    ![minio login screen](images/minio-login-screen.jpg)
-
-  - Minio offers functionality similar to S3: uploading files, creating buckets, and storing other data.
-    In this step, use the icon at the *bottom-right* of the screen to create a bucket and perform a file upload
-
-    ![minio create bucket](images/minio-create-bucket.jpg)
-
-    ![minio upload file](images/minio-upload-file.jpg)
-
-  - Verify that the file upload is successful
-
-    ![minio stored file](images/minio-stored-file.jpg)
-
diff --git a/k8s/demo/minio/images/minio-create-bucket.jpg b/k8s/demo/minio/images/minio-create-bucket.jpg
deleted file mode 100644
index 0db7a4646a..0000000000
Binary files a/k8s/demo/minio/images/minio-create-bucket.jpg and /dev/null differ
diff --git a/k8s/demo/minio/images/minio-login-screen.jpg b/k8s/demo/minio/images/minio-login-screen.jpg
deleted file mode 100644
index d2d3e5427d..0000000000
Binary files a/k8s/demo/minio/images/minio-login-screen.jpg and /dev/null differ
diff --git a/k8s/demo/minio/images/minio-stored-file.jpg b/k8s/demo/minio/images/minio-stored-file.jpg
deleted file mode 100644
index 31c02f3855..0000000000
Binary files a/k8s/demo/minio/images/minio-stored-file.jpg and /dev/null differ
diff --git a/k8s/demo/minio/images/minio-upload-file.jpg b/k8s/demo/minio/images/minio-upload-file.jpg
deleted file mode 100644
index 0bce8a51e6..0000000000
Binary files a/k8s/demo/minio/images/minio-upload-file.jpg and /dev/null differ
diff --git a/k8s/demo/minio/minio-distributed-cstor.yaml b/k8s/demo/minio/minio-distributed-cstor.yaml
deleted file mode 100644
index ca230792d5..0000000000
--- a/k8s/demo/minio/minio-distributed-cstor.yaml
+++ /dev/null
@@ -1,91 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
-  name: minio
-  labels:
-    app: minio
-spec:
-  clusterIP: None
-  ports:
-  - port: 9000
-    name: minio
-  selector:
-    app: minio
----
-apiVersion: apps/v1
-kind: StatefulSet
-metadata:
-  name: minio
-spec:
-  selector:
-    matchLabels:
-      openebs.io/replica-anti-affinity: minio
-      openebs.io/sts-target-affinity: minio
-      app: minio
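      # The two openebs.io/* labels above are OpenEBS scheduling hints:
      # replica-anti-affinity asks OpenEBS to keep the storage replicas of volumes
      # sharing this label value on different nodes, and sts-target-affinity asks it
      # to schedule each volume's target pod next to the application pod consuming it
      # (exact behaviour depends on the OpenEBS version in use).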
serviceName: minio - replicas: 4 - template: - metadata: - labels: - app: minio - openebs.io/replica-anti-affinity: minio - openebs.io/sts-target-affinity: minio - spec: - affinity: - podAntiAffinity: - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 100 - podAffinityTerm: - labelSelector: - matchExpressions: - - key: app - operator: In - values: - - minio - topologyKey: kubernetes.io/hostname - containers: - - name: minio - env: - - name: MINIO_ACCESS_KEY - value: "minio" - - name: MINIO_SECRET_KEY - value: "minio123" - - name: MINIO_PROMETHEUS_AUTH_TYPE - value: "public" - image: minio/minio - args: - - server - - http://minio-{0...3}.minio.default.svc.cluster.local/data - ports: - - containerPort: 9000 - # These volume mounts are persistent. Each pod in the PetSet - # gets a volume mounted based on this field. - volumeMounts: - - name: data - mountPath: /data - # These are converted to volume claims by the controller - # and mounted at the paths mentioned above. - volumeClaimTemplates: - - metadata: - name: data - spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 20Gi - # Uncomment and add storageClass specific to your requirements below. Read more https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1 - storageClassName: openebs-sc-rep1 ---- -apiVersion: v1 -kind: Service -metadata: - name: minio-service -spec: - type: NodePort - ports: - - port: 9000 - nodePort: 32701 - protocol: TCP - selector: - app: minio diff --git a/k8s/demo/minio/minio-distributed-localpv-device-default.yaml b/k8s/demo/minio/minio-distributed-localpv-device-default.yaml deleted file mode 100644 index 9b348ddf21..0000000000 --- a/k8s/demo/minio/minio-distributed-localpv-device-default.yaml +++ /dev/null @@ -1,87 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: minio - labels: - app: minio -spec: - clusterIP: None - ports: - - port: 9000 - name: minio - selector: - app: minio ---- -apiVersion: apps/v1 -kind: StatefulSet -metadata: - name: minio -spec: - selector: - matchLabels: - app: minio - serviceName: minio - replicas: 4 - template: - metadata: - labels: - app: minio - spec: - affinity: - podAntiAffinity: - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 100 - podAffinityTerm: - labelSelector: - matchExpressions: - - key: app - operator: In - values: - - minio - topologyKey: kubernetes.io/hostname - containers: - - name: minio - env: - - name: MINIO_ACCESS_KEY - value: "minio" - - name: MINIO_SECRET_KEY - value: "minio123" - - name: MINIO_PROMETHEUS_AUTH_TYPE - value: "public" - image: minio/minio - args: - - server - - http://minio-{0...3}.minio.default.svc.cluster.local/data - ports: - - containerPort: 9000 - # These volume mounts are persistent. Each pod in the PetSet - # gets a volume mounted based on this field. - volumeMounts: - - name: data - mountPath: /data - # These are converted to volume claims by the controller - # and mounted at the paths mentioned above. - volumeClaimTemplates: - - metadata: - name: data - spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 20Gi - # Uncomment and add storageClass specific to your requirements below. 
Read more https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1 - storageClassName: openebs-device ---- -apiVersion: v1 -kind: Service -metadata: - name: minio-service -spec: - type: NodePort - ports: - - port: 9000 - nodePort: 32701 - protocol: TCP - selector: - app: minio diff --git a/k8s/demo/minio/minio-distributed-localpv-hostpath-default.yaml b/k8s/demo/minio/minio-distributed-localpv-hostpath-default.yaml deleted file mode 100644 index d10dcce088..0000000000 --- a/k8s/demo/minio/minio-distributed-localpv-hostpath-default.yaml +++ /dev/null @@ -1,76 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: minio - labels: - app: minio -spec: - ports: - - port: 9000 - nodePort: 32701 - protocol: TCP - selector: - app: minio - type: NodePort ---- -apiVersion: apps/v1 -kind: StatefulSet -metadata: - name: minio -spec: - selector: - matchLabels: - app: minio - serviceName: minio - replicas: 4 - template: - metadata: - labels: - app: minio - spec: - affinity: - podAntiAffinity: - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 100 - podAffinityTerm: - labelSelector: - matchExpressions: - - key: app - operator: In - values: - - minio - topologyKey: kubernetes.io/hostname - containers: - - name: minio - env: - - name: MINIO_ACCESS_KEY - value: "minio" - - name: MINIO_SECRET_KEY - value: "minio123" - - name: MINIO_PROMETHEUS_AUTH_TYPE - value: "public" - image: minio/minio - args: - - server - - http://minio-{0...3}.minio.default.svc.cluster.local/data - ports: - - containerPort: 9000 - # These volume mounts are persistent. Each pod in the PetSet - # gets a volume mounted based on this field. - volumeMounts: - - name: data - mountPath: /data - # These are converted to volume claims by the controller - # and mounted at the paths mentioned above. - volumeClaimTemplates: - - metadata: - name: data - spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 20Gi - # Uncomment and add storageClass specific to your requirements below. Read more https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1 - storageClassName: openebs-hostpath ---- diff --git a/k8s/demo/minio/minio-standalone-cstor.yaml b/k8s/demo/minio/minio-standalone-cstor.yaml deleted file mode 100644 index 68738c93cd..0000000000 --- a/k8s/demo/minio/minio-standalone-cstor.yaml +++ /dev/null @@ -1,75 +0,0 @@ -# For k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1 -apiVersion: apps/v1 -kind: Deployment -metadata: - # This name uniquely identifies the Deployment - name: minio-deployment -spec: - selector: - matchLabels: - app: minio - strategy: - type: Recreate - template: - metadata: - labels: - # Label is used as selector in the service. 
- app: minio - spec: - # Refer to the PVC - volumes: - - name: storage - persistentVolumeClaim: - # Name of the PVC created earlier - claimName: minio-pv-claim - containers: - - name: minio - # Pulls the default Minio image from Docker Hub - image: minio/minio:latest - args: - - server - - /storage - env: - # Minio access key and secret key - - name: MINIO_ACCESS_KEY - value: "minio" - - name: MINIO_SECRET_KEY - value: "minio123" - - name: MINIO_PROMETHEUS_AUTH_TYPE - value: "public" - ports: - - containerPort: 9000 - hostPort: 9000 - # Mount the volume into the pod - volumeMounts: - - name: storage # must match the volume name, above - mountPath: "/storage" ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: minio-pv-claim - labels: - app: minio-storage-claim -spec: - storageClassName: openebs-sc-rep3 - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 10Gi ---- -apiVersion: v1 -kind: Service -metadata: - name: minio-service -spec: -# type: LoadBalancer - ports: - - port: 9000 - nodePort: 32701 - protocol: TCP - selector: - app: minio - sessionAffinity: None - type: NodePort diff --git a/k8s/demo/minio/minio-standalone-jiva-default.yaml b/k8s/demo/minio/minio-standalone-jiva-default.yaml deleted file mode 100644 index dd964cdf1e..0000000000 --- a/k8s/demo/minio/minio-standalone-jiva-default.yaml +++ /dev/null @@ -1,75 +0,0 @@ -# For k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1 -apiVersion: apps/v1 -kind: Deployment -metadata: - # This name uniquely identifies the Deployment - name: minio-deployment -spec: - selector: - matchLabels: - app: minio - strategy: - type: Recreate - template: - metadata: - labels: - # Label is used as selector in the service. - app: minio - spec: - # Refer to the PVC - volumes: - - name: storage - persistentVolumeClaim: - # Name of the PVC created earlier - claimName: minio-pv-claim - containers: - - name: minio - # Pulls the default Minio image from Docker Hub - image: minio/minio:latest - args: - - server - - /storage - env: - # Minio access key and secret key - - name: MINIO_ACCESS_KEY - value: "minio" - - name: MINIO_SECRET_KEY - value: "minio123" - - name: MINIO_PROMETHEUS_AUTH_TYPE - value: "public" - ports: - - containerPort: 9000 - hostPort: 9000 - # Mount the volume into the pod - volumeMounts: - - name: storage # must match the volume name, above - mountPath: "/storage" ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: minio-pv-claim - labels: - app: minio-storage-claim -spec: - storageClassName: openebs-jiva-default - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 10Gi ---- -apiVersion: v1 -kind: Service -metadata: - name: minio-service -spec: -# type: LoadBalancer - ports: - - port: 9000 - nodePort: 32701 - protocol: TCP - selector: - app: minio - sessionAffinity: None - type: NodePort diff --git a/k8s/demo/minio/minio-standalone-jiva-storagepool.yaml b/k8s/demo/minio/minio-standalone-jiva-storagepool.yaml deleted file mode 100644 index 3a17118082..0000000000 --- a/k8s/demo/minio/minio-standalone-jiva-storagepool.yaml +++ /dev/null @@ -1,75 +0,0 @@ -# For k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1 -apiVersion: apps/v1 -kind: Deployment -metadata: - # This name uniquely identifies the Deployment - name: minio-deployment -spec: - selector: - matchLabels: - app: minio - strategy: - type: Recreate - template: - metadata: - labels: - # Label is used as selector in the service. 
- app: minio - spec: - # Refer to the PVC - volumes: - - name: storage - persistentVolumeClaim: - # Name of the PVC created earlier - claimName: minio-pv-claim - containers: - - name: minio - # Pulls the default Minio image from Docker Hub - image: minio/minio:latest - args: - - server - - /storage - env: - # Minio access key and secret key - - name: MINIO_ACCESS_KEY - value: "minio" - - name: MINIO_SECRET_KEY - value: "minio123" - - name: MINIO_PROMETHEUS_AUTH_TYPE - value: "public" - ports: - - containerPort: 9000 - hostPort: 9000 - # Mount the volume into the pod - volumeMounts: - - name: storage # must match the volume name, above - mountPath: "/storage" ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: minio-pv-claim - labels: - app: minio-storage-claim -spec: - storageClassName: openebs-jiva-3rep - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 10Gi ---- -apiVersion: v1 -kind: Service -metadata: - name: minio-service -spec: -# type: LoadBalancer - ports: - - port: 9000 - nodePort: 32701 - protocol: TCP - selector: - app: minio - sessionAffinity: None - type: NodePort diff --git a/k8s/demo/minio/minio-standalone-localpv-device.yaml b/k8s/demo/minio/minio-standalone-localpv-device.yaml deleted file mode 100644 index 8bb483e5d3..0000000000 --- a/k8s/demo/minio/minio-standalone-localpv-device.yaml +++ /dev/null @@ -1,75 +0,0 @@ -# For k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1 -apiVersion: apps/v1 -kind: Deployment -metadata: - # This name uniquely identifies the Deployment - name: minio-deployment -spec: - selector: - matchLabels: - app: minio - strategy: - type: Recreate - template: - metadata: - labels: - # Label is used as selector in the service. - app: minio - spec: - # Refer to the PVC - volumes: - - name: storage - persistentVolumeClaim: - # Name of the PVC created earlier - claimName: minio-pv-claim - containers: - - name: minio - # Pulls the default Minio image from Docker Hub - image: minio/minio:latest - args: - - server - - /storage - env: - # Minio access key and secret key - - name: MINIO_ACCESS_KEY - value: "minio" - - name: MINIO_SECRET_KEY - value: "minio123" - - name: MINIO_PROMETHEUS_AUTH_TYPE - value: "public" - ports: - - containerPort: 9000 - hostPort: 9000 - # Mount the volume into the pod - volumeMounts: - - name: storage # must match the volume name, above - mountPath: "/storage" ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: minio-pv-claim - labels: - app: minio-storage-claim -spec: - storageClassName: openebs-device - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 10Gi ---- -apiVersion: v1 -kind: Service -metadata: - name: minio-service -spec: -# type: LoadBalancer - ports: - - port: 9000 - nodePort: 32701 - protocol: TCP - selector: - app: minio - sessionAffinity: None - type: NodePort diff --git a/k8s/demo/minio/minio-standalone-localpv-hostpath-default.yaml b/k8s/demo/minio/minio-standalone-localpv-hostpath-default.yaml deleted file mode 100644 index 3b2c111a54..0000000000 --- a/k8s/demo/minio/minio-standalone-localpv-hostpath-default.yaml +++ /dev/null @@ -1,75 +0,0 @@ -# For k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1 -apiVersion: apps/v1 -kind: Deployment -metadata: - # This name uniquely identifies the Deployment - name: minio-deployment -spec: - selector: - matchLabels: - app: minio - strategy: - type: Recreate - template: - metadata: - labels: - # Label is used as selector in the service. 
- app: minio - spec: - # Refer to the PVC - volumes: - - name: storage - persistentVolumeClaim: - # Name of the PVC created earlier - claimName: minio-pv-claim - containers: - - name: minio - # Pulls the default Minio image from Docker Hub - image: minio/minio:latest - args: - - server - - /storage - env: - # Minio access key and secret key - - name: MINIO_ACCESS_KEY - value: "minio" - - name: MINIO_SECRET_KEY - value: "minio123" - - name: MINIO_PROMETHEUS_AUTH_TYPE - value: "public" - ports: - - containerPort: 9000 - hostPort: 9000 - # Mount the volume into the pod - volumeMounts: - - name: storage # must match the volume name, above - mountPath: "/storage" ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: minio-pv-claim - labels: - app: minio-storage-claim -spec: - storageClassName: openebs-hostpath - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 10Gi ---- -apiVersion: v1 -kind: Service -metadata: - name: minio-service -spec: -# type: LoadBalancer - ports: - - port: 9000 - nodePort: 32701 - protocol: TCP - selector: - app: minio - sessionAffinity: None - type: NodePort diff --git a/k8s/demo/minio/minio.yaml b/k8s/demo/minio/minio.yaml deleted file mode 100644 index f2a1f90a0a..0000000000 --- a/k8s/demo/minio/minio.yaml +++ /dev/null @@ -1,73 +0,0 @@ -# For k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1 -apiVersion: apps/v1 -kind: Deployment -metadata: - # This name uniquely identifies the Deployment - name: minio-deployment -spec: - selector: - matchLabels: - app: minio - strategy: - type: Recreate - template: - metadata: - labels: - # Label is used as selector in the service. - app: minio - spec: - # Refer to the PVC - volumes: - - name: storage - persistentVolumeClaim: - # Name of the PVC created earlier - claimName: minio-pv-claim - containers: - - name: minio - # Pulls the default Minio image from Docker Hub - image: minio/minio:latest - args: - - server - - /storage - env: - # Minio access key and secret key - - name: MINIO_ACCESS_KEY - value: "minio" - - name: MINIO_SECRET_KEY - value: "minio123" - ports: - - containerPort: 9000 - hostPort: 9000 - # Mount the volume into the pod - volumeMounts: - - name: storage # must match the volume name, above - mountPath: "/storage" ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: minio-pv-claim - labels: - app: minio-storage-claim -spec: - storageClassName: openebs-jiva-default - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 10G ---- -apiVersion: v1 -kind: Service -metadata: - name: minio-service -spec: -# type: LoadBalancer - ports: - - port: 9000 - nodePort: 32701 - protocol: TCP - selector: - app: minio - sessionAffinity: None - type: NodePort diff --git a/k8s/demo/mongodb/README.md b/k8s/demo/mongodb/README.md deleted file mode 100644 index 434c37a1cf..0000000000 --- a/k8s/demo/mongodb/README.md +++ /dev/null @@ -1,412 +0,0 @@ -# Running mongodb statefulset on OpenEBS - -This tutorial provides detailed instructions to perform the following tasks : - -- Run a mongodb statefulset on OpenEBS storage in a Kubernetes cluster -- Generate standard OLTP load on mongodb using a custom sysbench tool -- Test the data replication across the mongodb instances. 
-
-## Prerequisites
-
-Prerequisites include the following:
-
-- A fully configured Kubernetes cluster (versions 1.9.7+ have been tested)
-
-  Note: _OpenEBS recommends using at least a 3-node cluster_
-
-  ```
-  test@Master:~$ kubectl get nodes
-  NAME STATUS ROLES AGE VERSION
-  gke-kmova-helm-default-pool-6b1e777c-8gdf Ready 17h v1.9.7-gke.6
-  gke-kmova-helm-default-pool-6b1e777c-fwgp Ready 17h v1.9.7-gke.6
-  gke-kmova-helm-default-pool-6b1e777c-m07h Ready 17h v1.9.7-gke.6
-  ```
-
-- Sufficient resources on the nodes to host the OpenEBS storage pods and application pods. This includes sufficient disk space,
-  as in this example the physical storage for the volume containers is carved out of the nodes' local storage
-
-- iSCSI support on the nodes. This is required to consume the iSCSI target exposed by the OpenEBS volume container.
-  On Ubuntu, the iSCSI initiator can be installed as follows:
-
-  ```
-  sudo apt-get update
-  sudo apt-get install open-iscsi
-  sudo service open-iscsi restart
-  ```
-  Verify that iSCSI is configured:
-
-  ```
-  sudo cat /etc/iscsi/initiatorname.iscsi
-  sudo service open-iscsi status
-  ```
-- Install the following packages, required by the mongodb-integrated sysbench I/O tool, on any one of the Kubernetes nodes:
-
-  ```
-  sudo apt-get install make libsasl2-dev libssl-dev libmongoc-dev libbson-dev
-  ```
-
-## Step-1: Run OpenEBS Operator
-
-Download the latest OpenEBS operator files and the sample mongodb statefulset specification yaml on the Kubernetes master
-from the OpenEBS git repository.
-
-```
-git clone https://github.com/openebs/openebs.git
-cd openebs/k8s
-```
-
-Apply openebs-operator.yaml on the Kubernetes cluster. This creates the maya api-server and OpenEBS provisioner deployments.
-
-```
-kubectl apply -f openebs-operator.yaml
-```
-
-Check whether the deployments are running successfully.
-
-```
-test@Master:~$ kubectl get pods -n openebs
-```
-
-## Step-2: Deploy the mongo-statefulset with OpenEBS storage
-
-Use OpenEBS as persistent storage for the mongodb statefulset by selecting an OpenEBS storage class in the persistent volume claim.
-A sample mongodb statefulset yaml (with container attributes and pvc details) is available in the openebs git repository.
-
-The number of replicas in the statefulset can be modified as required; this example uses 3 replicas, matching the verification
-output below. The replica count can be edited in the statefulset specification:
-
-```
----
-apiVersion: apps/v1beta1
-kind: StatefulSet
-metadata:
-  name: mongo
-spec:
-  serviceName: "mongo"
-  replicas: 3
-  template:
-    metadata:
-      labels:
-        role: mongo
-        environment: test
-.
-.
-```
-
-Apply the mongo-statefulset yaml:
-
-```
-test@Master:~$ kubectl apply -f mongo-statefulset.yml
-service "mongo" created
-statefulset "mongo" created
-```
-
-Verify that the mongodb replicas, the mongo headless service, and the openebs persistent volumes (comprising the controller and
-replica pods) are successfully deployed and in the "Running" state.
-
-```
-test@Master:~$ kubectl get pods
-NAME READY STATUS RESTARTS AGE
-mongo-0 2/2 Running 0 2m
-mongo-1 2/2 Running 0 2m
-mongo-2 2/2 Running 0 1m
-openebs-provisioner-1149663462-5pdcq 1/1 Running 0 8m
-pvc-0d39583c-bad7-11e7-869d-000c298ff5fc-ctrl-4109100951-v2ndc 1/1 Running 0 2m
-pvc-0d39583c-bad7-11e7-869d-000c298ff5fc-rep-1655873671-50f8z 1/1 Running 0 2m
-pvc-21da76b6-bad7-11e7-869d-000c298ff5fc-ctrl-2618026111-z5hzt 1/1 Running 0 2m
-pvc-21da76b6-bad7-11e7-869d-000c298ff5fc-rep-187343257-9w46n 1/1 Running 0 2m
-pvc-3a9ca1ec-bad7-11e7-869d-000c298ff5fc-ctrl-2347166037-vsc2t 1/1 Running 0 1m
-pvc-3a9ca1ec-bad7-11e7-869d-000c298ff5fc-rep-849715916-3w1c7 1/1 Running 0 1m
-
-test@Master:~$ kubectl get svc
-NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-mongo None 27017/TCP 3m
-pvc-0d39583c-bad7-11e7-869d-000c298ff5fc-ctrl-svc 10.105.60.71 3260/TCP,9501/TCP 3m
-pvc-21da76b6-bad7-11e7-869d-000c298ff5fc-ctrl-svc 10.105.178.143 3260/TCP,9501/TCP 2m
-pvc-3a9ca1ec-bad7-11e7-869d-000c298ff5fc-ctrl-svc 10.110.104.42 3260/TCP,9501/TCP 1m
-
-```
-
-Note: It may take some time for the pods to start, as the images need to be pulled and instantiated; this also depends
-on the network speed.
-
-## Step-3: Generate load on the mongodb instance
-
-In this example, we will use a custom-built sysbench framework with support for OLTP tests against mongodb via lua scripts.
-Sysbench is a multi-purpose benchmarking tool capable of running DB benchmarks as well as regular raw/file device I/O benchmarks.
-
-### Sysbench Installation Steps
-
-- Download the appropriate branch of Percona-Lab's sysbench fork, with support for mongodb integration, onto the Kubernetes node
-  on which the sysbench dependencies were installed (refer to the prerequisites)
-
-  ```
-  git clone -b dev-mongodb-support-1.0 https://github.com/Percona-Lab/sysbench.git
-  ```
-
-- Enter the local sysbench repository and run the following commands in the given order:
-
-  ```
-  cd sysbench
-
-  ./autogen.sh
-  ./configure
-  make
-  ```
-  Note: If some header files belonging to the libbson/libmongoc packages are not found, update the include
-  path (one workaround is to place all header files inside libbson-1.0 and libmongoc-1.0 into /usr/include)
-
-### Execute the sysbench benchmark
-
-- Identify the primary mongodb instance name or its IP (in the current statefulset specification YAML, "mongo-0" is always
-  configured as the primary instance that takes client I/O)
-
-- Trigger sysbench using the following command to:
-
-  - prepare the database and add the collections
-  - perform the benchmark run
-
-  Note: Replace the mongo-url param with the appropriate IP, which can be obtained by ```kubectl describe pod mongo-0 | grep IP```
-
-  ```
-  test@Host02:~/sysbench$ ./sysbench/sysbench --mongo-write-concern=1 --mongo-url="mongodb://10.44.0.3" --mongo-database-name=sbtest --test=./sysbench/tests/mongodb/oltp.lua --oltp_table_size=100 --oltp_tables_count=10 --num-threads=10 --rand-type=pareto --report-interval=10 --max-requests=0 --max-time=600 --oltp-point-selects=10 --oltp-simple-ranges=1 --oltp-sum-ranges=1 --oltp-order-ranges=1 --oltp-distinct-ranges=1 --oltp-index-updates=1 --oltp-non-index-updates=1 --oltp-inserts=1 run
  ```
-  The sysbench parameters can be modified based on system capability and storage definition to obtain realistic benchmark figures.
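A script-friendly alternative for looking up the primary's IP (plain kubectl; equivalent to the describe/grep above):

```
kubectl get pod mongo-0 -o jsonpath='{.status.podIP}'
```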
-
-  The benchmark output displayed is similar to the following:
-
-  ```
-  sysbench 1.0: multi-threaded system evaluation benchmark
-
-  Running the test with following options:
-  Number of threads: 10
-  Report intermediate results every 10 second(s)
-  Initializing random number generator from current time
-
-
-  Initializing worker threads...
-
-  setting write concern to 1
-  Threads started!
-
-  [ 10s] threads: 10, tps: 56.60, reads: 171.50, writes: 170.40, response time: 316.14ms (95%), errors: 0.00, reconnects: 0.00
-  [ 20s] threads: 10, tps: 74.70, reads: 222.90, writes: 223.50, response time: 196.30ms (95%), errors: 0.00, reconnects: 0.00
-  [ 30s] threads: 10, tps: 76.00, reads: 227.70, writes: 228.00, response time: 196.71ms (95%), errors: 0.00, reconnects: 0.00
-  [ 40s] threads: 10, tps: 79.60, reads: 239.70, writes: 238.80, response time: 329.08ms (95%), errors: 0.00, reconnects: 0.00
-  :
-  :
-  OLTP test statistics:
-  queries performed:
-  read: 154189
-  write: 154122
-  other: 51374
-  total: 359685
-  transactions: 51374 (85.61 per sec.)
-  read/write requests: 308311 (513.79 per sec.)
-  other operations: 51374 (85.61 per sec.)
-  ignored errors: 0 (0.00 per sec.)
-  reconnects: 0 (0.00 per sec.)
-
-  General statistics:
-  total time: 600.0703s
-  total number of events: 51374
-  total time taken by event execution: 6000.1853s
-  response time:
-  min: 26.11ms
-  avg: 116.79ms
-  max: 2388.03ms
-  approx. 95 percentile: 224.00ms
-
-  Threads fairness:
-  events (avg/stddev): 5137.4000/21.50
-  execution time (avg/stddev): 600.0185/0.02
-  ```
-- While the benchmark is in progress, performance and capacity usage statistics on the OpenEBS storage volume can be viewed via mayactl
-  commands executed on the maya-apiserver pod.
-
-  Open an interactive bash session in the maya-apiserver pod container
-
-  ```
-  test@Master:~$ kubectl exec -it maya-apiserver-1089964587-x5q15 /bin/bash
-  root@maya-apiserver-1089964587-x5q15:/#
-  ```
-  Obtain the list of OpenEBS persistent volumes created by the mongodb statefulset application YAML.
-
-  ```
-  root@maya-apiserver-1089964587-x5q15:/# maya volume list
-  Name Status
-  pvc-0d39583c-bad7-11e7-869d-000c298ff5fc Running
-  pvc-21da76b6-bad7-11e7-869d-000c298ff5fc Running
-  ```
-  View usage and I/O metrics for the required volume via the stats command
-
-  ```
-  root@maya-apiserver-1089964587-x5q15:/# maya volume stats pvc-0d39583c-bad7-11e7-869d-000c298ff5fc
-  IQN : iqn.2016-09.com.openebs.jiva:pvc-0d39583c-bad7-11e7-869d-000c298ff5fc
-  Volume : pvc-0d39583c-bad7-11e7-869d-000c298ff5fc
-  Portal : 10.105.60.71:3260
-  Size : 5G
-
-  Replica| Status| DataUpdateIndex|
-  | | |
-  10.44.0.2| Online| 4341|
-  10.36.0.3| Online| 4340|
-
-  ----------- Performance Stats -----------
-
-  r/s| w/s| r(MB/s)| w(MB/s)| rLat(ms)| wLat(ms)|
-  0| 14| 0.000| 14.000| 0.000| 71.325|
-
-  ------------ Capacity Stats -------------
-
-  Logical(GB)| Used(GB)|
-  0.214| 0.205|
-  ```
-
-### Verify mongodb replication
-
-- Log in to the primary instance of the mongodb statefulset via the built-in mongo shell and verify that the
-  "sbtest" database created by sysbench in the previous steps is present.
-
-  ```
-  test@Master:~$ kubectl exec -it mongo-0 /bin/bash
-  root@mongo-0:/# mongo
-
-  MongoDB shell version v3.4.9
-  connecting to: mongodb://127.0.0.1:27017
-  MongoDB server version: 3.4.9
-  :
-  rs0:PRIMARY> show dbs
-  admin 0.000GB
-  local 0.006GB
-  sbtest 0.001GB
-  ```
-- Run the replication status command on the master/primary instance of the statefulset.
In the output, verify that the values - (timestamps) for the "optimeDate" on both members are *almost* the same - - ``` - rs0:PRIMARY> rs.status() - { - "set" : "rs0", - "date" : ISODate("2017-10-23T07:26:36.679Z"), - "myState" : 1, - "term" : NumberLong(1), - "heartbeatIntervalMillis" : NumberLong(2000), - "optimes" : { - "lastCommittedOpTime" : { - "ts" : Timestamp(1508743595, 51), - "t" : NumberLong(1) - }, - "appliedOpTime" : { - "ts" : Timestamp(1508743596, 40), - "t" : NumberLong(1) - }, - "durableOpTime" : { - "ts" : Timestamp(1508743595, 71), - "t" : NumberLong(1) - } - }, - "members" : [ - { - "_id" : 0, - "name" : "10.44.0.3:27017", - "health" : 1, - "state" : 1, - "stateStr" : "PRIMARY", - "uptime" : 243903, - "optime" : { - "ts" : Timestamp(1508743596, 40), - "t" : NumberLong(1) - }, - "optimeDate" : ISODate("2017-10-23T07:26:36Z"), - "electionTime" : Timestamp(1508499738, 2), - "electionDate" : ISODate("2017-10-20T11:42:18Z"), - "configVersion" : 5, - "self" : true - }, - { - "_id" : 1, - "name" : "10.36.0.6:27017", - "health" : 1, - "state" : 2, - "stateStr" : "SECONDARY", - "uptime" : 243756, - "optime" : { - "ts" : Timestamp(1508743595, 51), - "t" : NumberLong(1) - }, - "optimeDurable" : { - "ts" : Timestamp(1508743595, 34), - "t" : NumberLong(1) - }, - "optimeDate" : ISODate("2017-10-23T07:26:35Z"), - "optimeDurableDate" : ISODate("2017-10-23T07:26:35Z"), - "lastHeartbeat" : ISODate("2017-10-23T07:26:35.534Z"), - "lastHeartbeatRecv" : ISODate("2017-10-23T07:26:34.894Z"), - "pingMs" : NumberLong(6), - "syncingTo" : "10.44.0.3:27017", - "configVersion" : 5 - }, - { - "_id" : 2, - "name" : "10.44.0.7:27017", - "health" : 1, - "state" : 2, - "stateStr" : "SECONDARY", - "uptime" : 243700, - "optime" : { - "ts" : Timestamp(1508743595, 104), - "t" : NumberLong(1) - }, - "optimeDurable" : { - "ts" : Timestamp(1508743595, 34), - "t" : NumberLong(1) - }, - "optimeDate" : ISODate("2017-10-23T07:26:35Z"), - "optimeDurableDate" : ISODate("2017-10-23T07:26:35Z"), - "lastHeartbeat" : ISODate("2017-10-23T07:26:35.949Z"), - "lastHeartbeatRecv" : ISODate("2017-10-23T07:26:35.949Z"), - "pingMs" : NumberLong(0), - "syncingTo" : "10.44.0.3:27017", - "configVersion" : 5 - } - ], - "ok" : 1 - } - ``` - - You could further confirm the presence of the DB with the same size on the secondary instances (for example, mongo-1). - - Note : By default, the dbs cannot be viewed on the secondary instance via the show dbs command, unless we set the slave context. - - ``` - rs0:SECONDARY> rs.slaveOk() - - rs0:SECONDARY> show dbs - admin 0.000GB - local 0.005GB - sbtest 0.001GB - ``` - - - The time lag between the mongodb instances can be found via the following command, which can be executed on either instance. - - ``` - rs0:SECONDARY> rs.printSlaveReplicationInfo() - source: 10.36.0.6:27017 - syncedTo: Mon Oct 23 2017 07:28:27 GMT+0000 (UTC) - 0 secs (0 hrs) behind the primary - source: 10.44.0.7:27017 - syncedTo: Mon Oct 23 2017 07:28:27 GMT+0000 (UTC) - 0 secs (0 hrs) behind the primary - ``` - - - - diff --git a/k8s/demo/mongodb/demo-mongo-cstor-taa.yaml b/k8s/demo/mongodb/demo-mongo-cstor-taa.yaml deleted file mode 100644 index 68260a93b2..0000000000 --- a/k8s/demo/mongodb/demo-mongo-cstor-taa.yaml +++ /dev/null @@ -1,69 +0,0 @@ -# Headless service for stable DNS entries of StatefulSet members. 
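# (clusterIP: None below is what makes the Service headless: each StatefulSet pod
# gets a stable DNS record of the form mongo-N.mongo.<namespace>.svc.cluster.local.)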
-apiVersion: v1 -kind: Service -metadata: - name: mongo - labels: - app: mongo -spec: - ports: - - port: 27017 - targetPort: 27017 - clusterIP: None - selector: - role: mongo ---- -apiVersion: apps/v1 -kind: StatefulSet -metadata: - name: mongo - labels: - app: mongo - openebs.io/sts-target-affinity: mongo -spec: - serviceName: "mongo" - replicas: 1 - selector: - matchLabels: - app: mongo - template: - metadata: - labels: - app: mongo - role: mongo - openebs.io/sts-target-affinity: mongo - environment: test - spec: - terminationGracePeriodSeconds: 10 - containers: - - name: mongo - image: mongo - command: - # - mongod - # - "--replSet" - # - rs0 - # - "--smallfiles" - # - "--noprealloc" - # - "--bind_ip_all" - ports: - - containerPort: 27017 - volumeMounts: - - name: mongo-pvc - mountPath: /data/db - - name: mongo-sidecar - image: cvallance/mongo-k8s-sidecar - env: - - name: MONGO_SIDECAR_POD_LABELS - value: "role=mongo,environment=test" - volumeClaimTemplates: - - metadata: - name: mongo-pvc - labels: - openebs.io/sts-target-affinity: mongo - spec: - storageClassName: openebs-cstor-sparse - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 5G diff --git a/k8s/demo/mongodb/demo-mongo-localpvhostdevice.yaml b/k8s/demo/mongodb/demo-mongo-localpvhostdevice.yaml deleted file mode 100644 index 194475cda9..0000000000 --- a/k8s/demo/mongodb/demo-mongo-localpvhostdevice.yaml +++ /dev/null @@ -1,65 +0,0 @@ -# Headless service for stable DNS entries of StatefulSet members. -apiVersion: v1 -kind: Service -metadata: - name: mongo - labels: - app: mongo -spec: - ports: - - port: 27017 - targetPort: 27017 - clusterIP: None - selector: - role: mongo ---- -apiVersion: apps/v1 -kind: StatefulSet -metadata: - name: mongo - labels: - app: mongo -spec: - serviceName: "mongo" - replicas: 3 - selector: - matchLabels: - app: mongo - template: - metadata: - labels: - app: mongo - role: mongo - environment: test - spec: - terminationGracePeriodSeconds: 10 - containers: - - name: mongo - image: mongo - command: - # - mongod - # - "--replSet" - # - rs0 - # - "--smallfiles" - # - "--noprealloc" - # - "--bind_ip_all" - ports: - - containerPort: 27017 - volumeMounts: - - name: mongo-pvc - mountPath: /data/db - - name: mongo-sidecar - image: cvallance/mongo-k8s-sidecar - env: - - name: MONGO_SIDECAR_POD_LABELS - value: "role=mongo,environment=test" - volumeClaimTemplates: - - metadata: - name: mongo-pvc - spec: - storageClassName: openebs-device - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 5G diff --git a/k8s/demo/mongodb/demo-mongo-localpvhostpath.yaml b/k8s/demo/mongodb/demo-mongo-localpvhostpath.yaml deleted file mode 100644 index d19b027d77..0000000000 --- a/k8s/demo/mongodb/demo-mongo-localpvhostpath.yaml +++ /dev/null @@ -1,65 +0,0 @@ -# Headless service for stable DNS entries of StatefulSet members. 
-apiVersion: v1 -kind: Service -metadata: - name: mongo - labels: - app: mongo -spec: - ports: - - port: 27017 - targetPort: 27017 - clusterIP: None - selector: - role: mongo ---- -apiVersion: apps/v1 -kind: StatefulSet -metadata: - name: mongo - labels: - app: mongo -spec: - serviceName: "mongo" - replicas: 3 - selector: - matchLabels: - app: mongo - template: - metadata: - labels: - app: mongo - role: mongo - environment: test - spec: - terminationGracePeriodSeconds: 10 - containers: - - name: mongo - image: mongo - command: - # - mongod - # - "--replSet" - # - rs0 - # - "--smallfiles" - # - "--noprealloc" - # - "--bind_ip_all" - ports: - - containerPort: 27017 - volumeMounts: - - name: mongo-pvc - mountPath: /data/db - - name: mongo-sidecar - image: cvallance/mongo-k8s-sidecar - env: - - name: MONGO_SIDECAR_POD_LABELS - value: "role=mongo,environment=test" - volumeClaimTemplates: - - metadata: - name: mongo-pvc - spec: - storageClassName: openebs-hostpath - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 5G diff --git a/k8s/demo/mongodb/mongo-loadgen.yaml b/k8s/demo/mongodb/mongo-loadgen.yaml deleted file mode 100644 index 384c59a261..0000000000 --- a/k8s/demo/mongodb/mongo-loadgen.yaml +++ /dev/null @@ -1,18 +0,0 @@ ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: mongo-loadgen -spec: - template: - metadata: - name: mongo-loadgen - spec: - restartPolicy: Never - containers: - - name: mongo-loadgen - image: openebs/tests-sysbench-mongo - command: ["/bin/bash"] - args: ["-c", "./sysbench/sysbench --mongo-write-concern=1 --mongo-url='mongodb://mongo-0.mongo' --mongo-database-name=sbtest --test=./sysbench/tests/mongodb/oltp.lua --oltp_table_size=100 --oltp_tables_count=10 --num-threads=10 --rand-type=pareto --report-interval=10 --max-requests=0 --max-time=600 --oltp-point-selects=10 --oltp-simple-ranges=1 --oltp-sum-ranges=1 --oltp-order-ranges=1 --oltp-distinct-ranges=1 --oltp-index-updates=1 --oltp-non-index-updates=1 --oltp-inserts=1 run"] - tty: true - diff --git a/k8s/demo/mongodb/mongo-statefulset.yml b/k8s/demo/mongodb/mongo-statefulset.yml deleted file mode 100644 index fe79c24fc5..0000000000 --- a/k8s/demo/mongodb/mongo-statefulset.yml +++ /dev/null @@ -1,87 +0,0 @@ -# Create a StorageClass suited for Mongo StatefulSet -# Since Mongo takes care of replication, one replica will suffice -# Can be configured with Anti affinity topology key of hostname (default) -# or across zone. ---- -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: mongo-pv-az - annotations: - cas.openebs.io/config: | - - name: ReplicaCount - value: "1" - - name: StoragePool - value: default - - name: FSType - value: "xfs" - #- name: ReplicaAntiAffinityTopoKey - # value: failure-domain.beta.kubernetes.io/zone -provisioner: openebs.io/provisioner-iscsi ---- -# Headless service for stable DNS entries of StatefulSet members. 
-apiVersion: v1 -kind: Service -metadata: - name: mongo - labels: - name: mongo -spec: - ports: - - port: 27017 - targetPort: 27017 - clusterIP: None - selector: - role: mongo ---- -apiVersion: apps/v1 -kind: StatefulSet -metadata: - name: mongo -spec: - serviceName: "mongo" - replicas: 3 - selector: - matchLabels: - role: mongo - template: - metadata: - labels: - role: mongo - environment: test - #This label will be used by openebs to place in replica - # pod anti-affinity to make sure data of different mongo - # instances are not co-located on the same node - openebs.io/replica-anti-affinity: vehicle-db - spec: - terminationGracePeriodSeconds: 10 - containers: - - name: mongo - image: mongo - command: - - mongod - - "--replSet" - - rs0 - - "--smallfiles" - - "--noprealloc" - - "--bind_ip_all" - ports: - - containerPort: 27017 - volumeMounts: - - name: mongo-persistent-storage - mountPath: /data/db - - name: mongo-sidecar - image: cvallance/mongo-k8s-sidecar - env: - - name: MONGO_SIDECAR_POD_LABELS - value: "role=mongo,environment=test" - volumeClaimTemplates: - - metadata: - name: mongo-persistent-storage - spec: - storageClassName: mongo-pv-az - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 5G diff --git a/k8s/demo/mysql-replication-cluster/deployments/mysql-master.yaml b/k8s/demo/mysql-replication-cluster/deployments/mysql-master.yaml deleted file mode 100644 index 6d82a14360..0000000000 --- a/k8s/demo/mysql-replication-cluster/deployments/mysql-master.yaml +++ /dev/null @@ -1,63 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: mysql-master - labels: - name: mysql-master -spec: - replicas: 1 - selector: - matchLabels: - name: mysql-master - template: - metadata: - labels: - name: mysql-master - spec: - containers: - - name: master - image: openebs/tests-mysql-master - args: - - "--ignore-db-dir" - - "lost+found" - ports: - - containerPort: 3306 - env: - - name: MYSQL_ROOT_PASSWORD - value: "test" - - name: MYSQL_REPLICATION_USER - value: 'demo' - - name: MYSQL_REPLICATION_PASSWORD - value: 'demo' - volumeMounts: - - mountPath: /var/lib/mysql - name: demo-vol1 - volumes: - - name: demo-vol1 - persistentVolumeClaim: - claimName: demo-vol1-claim ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: demo-vol1-claim -spec: - storageClassName: openebs-jiva-default - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 5G ---- -apiVersion: v1 -kind: Service -metadata: - name: mysql-master - labels: - name: mysql-master -spec: - ports: - - port: 3306 - targetPort: 3306 - selector: - name: mysql-master diff --git a/k8s/demo/mysql-replication-cluster/deployments/mysql-slave.yaml b/k8s/demo/mysql-replication-cluster/deployments/mysql-slave.yaml deleted file mode 100644 index 8e8437417b..0000000000 --- a/k8s/demo/mysql-replication-cluster/deployments/mysql-slave.yaml +++ /dev/null @@ -1,63 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: mysql-slave - labels: - name: mysql-slave -spec: - replicas: 1 - selector: - matchLabels: - name: mysql-slave - template: - metadata: - labels: - name: mysql-slave - spec: - containers: - - name: slave - image: openebs/tests-mysql-slave - args: - - "--ignore-db-dir" - - "lost+found" - ports: - - containerPort: 3306 - env: - - name: MYSQL_ROOT_PASSWORD - value: "test" - - name: MYSQL_REPLICATION_USER - value: 'demo' - - name: MYSQL_REPLICATION_PASSWORD - value: 'demo' - volumeMounts: - - mountPath: /var/lib/mysql - name: demo-vol2 - volumes: - - name: demo-vol2 - 
persistentVolumeClaim: - claimName: demo-vol2-claim ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: demo-vol2-claim -spec: - storageClassName: openebs-jiva-default - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 5G ---- -apiVersion: v1 -kind: Service -metadata: - name: mysql-slave - labels: - name: mysql-slave -spec: - ports: - - port: 3306 - targetPort: 3306 - selector: - name: mysql-slave diff --git a/k8s/demo/mysql-replication-cluster/statefulset/README.md b/k8s/demo/mysql-replication-cluster/statefulset/README.md deleted file mode 100644 index f7b86ccf14..0000000000 --- a/k8s/demo/mysql-replication-cluster/statefulset/README.md +++ /dev/null @@ -1,46 +0,0 @@ -## STEPS TO INSTALL MYSQL REPLICATION CLUSTER - -- Refer to this blog to understand the statefulset pod initialization process: - https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/ - -- Create the mysql configmap - - ``` - test@Master:~/mysql-statefulset$ kubectl apply -f mysql-configmap.yaml - configmap "mysql" created - ``` - -- Create the mysql service - - ``` - test@Master:~/mysql-statefulset$ kubectl apply -f mysql-svc.yaml - service "mysql" created - service "mysql-read" created - ``` - -- Deploy the mysql statefulset - - ``` - test@Master:~/mysql-statefulset$ kubectl apply -f mysql-statefulset.yaml - statefulset "mysql" created - ``` - -- Confirm that the mysql master & slave pods are running - - ``` - test@Master:~/mysql-statefulset$ kubectl get pods - NAME READY STATUS RESTARTS AGE - maya-apiserver-5d5944b47c-fbnjh 1/1 Running 0 1d - mysql-0 2/2 Running 1 20m - mysql-1 2/2 Running 0 17m - openebs-provisioner-6b8df9746c-gj4gk 1/1 Running 0 1d - pvc-462b36d7-2ea9-11e8-9309-000c298ff5fc-ctrl-5467bcb64f-qvnzn 2/2 Running 0 20m - pvc-462b36d7-2ea9-11e8-9309-000c298ff5fc-rep-745fd6b669-2mpvb 1/1 Running 0 20m - pvc-462b36d7-2ea9-11e8-9309-000c298ff5fc-rep-745fd6b669-zzbdp 1/1 Running 0 20m - pvc-b478afc3-2ea9-11e8-9309-000c298ff5fc-ctrl-5ccd6c5469-kfrsp 2/2 Running 0 17m - pvc-b478afc3-2ea9-11e8-9309-000c298ff5fc-rep-67b4f56bf5-9p97m 1/1 Running 0 17m - pvc-b478afc3-2ea9-11e8-9309-000c298ff5fc-rep-67b4f56bf5-r8c9x 1/1 Running 0 17m - ``` - - - diff --git a/k8s/demo/mysql-replication-cluster/statefulset/mysql-configmap.yaml b/k8s/demo/mysql-replication-cluster/statefulset/mysql-configmap.yaml deleted file mode 100644 index 46d34e422c..0000000000 --- a/k8s/demo/mysql-replication-cluster/statefulset/mysql-configmap.yaml +++ /dev/null @@ -1,16 +0,0 @@ -apiVersion: v1 -kind: ConfigMap -metadata: - name: mysql - labels: - app: mysql -data: - master.cnf: | - # Apply this config only on the master. - [mysqld] - log-bin - slave.cnf: | - # Apply this config only on slaves. - [mysqld] - super-read-only - diff --git a/k8s/demo/mysql-replication-cluster/statefulset/mysql-statefulset.yaml b/k8s/demo/mysql-replication-cluster/statefulset/mysql-statefulset.yaml deleted file mode 100644 index dff65e5209..0000000000 --- a/k8s/demo/mysql-replication-cluster/statefulset/mysql-statefulset.yaml +++ /dev/null @@ -1,176 +0,0 @@ -apiVersion: apps/v1 -kind: StatefulSet -metadata: - name: mysql -spec: - selector: - matchLabels: - app: mysql - serviceName: "mysql" - replicas: 2 - template: - metadata: - labels: - app: mysql - spec: - initContainers: - - name: init-mysql - image: mysql:5.7 - command: - - bash - - "-c" - - | - set -ex - # Generate mysql server-id from pod ordinal index. 
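          # e.g. hostname "mysql-1" -> ordinal 1 -> server-id 101, given the +100 offset below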
- [[ `hostname` =~ -([0-9]+)$ ]] || exit 1 - ordinal=${BASH_REMATCH[1]} - echo [mysqld] > /mnt/conf.d/server-id.cnf - # Add an offset to avoid reserved server-id=0 value. - echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf - # Copy appropriate conf.d files from config-map to emptyDir. - if [[ $ordinal -eq 0 ]]; then - cp /mnt/config-map/master.cnf /mnt/conf.d/ - else - cp /mnt/config-map/slave.cnf /mnt/conf.d/ - fi - volumeMounts: - - name: conf - mountPath: /mnt/conf.d - - name: config-map - mountPath: /mnt/config-map - - name: clone-mysql - image: gcr.io/google-samples/xtrabackup:1.0 - command: - - bash - - "-c" - - | - set -ex - # Skip the clone if data already exists. - [[ -d /var/lib/mysql/mysql ]] && exit 0 - # Skip the clone on master (ordinal index 0). - [[ `hostname` =~ -([0-9]+)$ ]] || exit 1 - ordinal=${BASH_REMATCH[1]} - [[ $ordinal -eq 0 ]] && exit 0 - # Clone data from previous peer. - ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql - # Prepare the backup. - xtrabackup --prepare --target-dir=/var/lib/mysql - volumeMounts: - - name: maya-database - mountPath: /var/lib/mysql - subPath: mysql - - name: conf - mountPath: /etc/mysql/conf.d - containers: - - name: mysql - image: mysql:5.7 - env: - - name: MYSQL_ALLOW_EMPTY_PASSWORD - value: "1" - - name: MYSQL_DATABASE - value: "maya" - - name: MYSQL_USER - value: "maya" - - name: MYSQL_PASSWORD - value: "maya" - ports: - - name: mysql - containerPort: 3306 - volumeMounts: - - name: maya-database - mountPath: /var/lib/mysql - subPath: mysql - - name: conf - mountPath: /etc/mysql/conf.d - resources: - requests: - cpu: "500m" - memory: "1G" - livenessProbe: - exec: - command: ["mysqladmin", "ping"] - initialDelaySeconds: 30 - periodSeconds: 10 - timeoutSeconds: 5 - readinessProbe: - exec: - # Check we can execute queries over TCP (skip-networking is off). - command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"] - initialDelaySeconds: 5 - periodSeconds: 2 - timeoutSeconds: 1 - - name: xtrabackup - image: gcr.io/google-samples/xtrabackup:1.0 - ports: - - name: xtrabackup - containerPort: 3307 - command: - - bash - - "-c" - - | - set -ex - cd /var/lib/mysql - - # Determine binlog position of cloned data, if any. - if [[ -f xtrabackup_slave_info ]]; then - # XtraBackup already generated a partial "CHANGE MASTER TO" query - # because we're cloning from an existing slave. - mv xtrabackup_slave_info change_master_to.sql.in - # Ignore xtrabackup_binlog_info in this case (it's useless). - rm -f xtrabackup_binlog_info - elif [[ -f xtrabackup_binlog_info ]]; then - # We're cloning directly from master. Parse binlog position. - [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1 - rm xtrabackup_binlog_info - echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\ - MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in - fi - - # Check if we need to complete a clone by starting replication. - if [[ -f change_master_to.sql.in ]]; then - echo "Waiting for mysqld to be ready (accepting connections)" - until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done - - echo "Initializing replication from clone position" - # In case of container restart, attempt this at-most-once. 
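          # (the rename below consumes the marker file, so a restarted container will not re-run CHANGE MASTER TO)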
mv change_master_to.sql.in change_master_to.sql.orig - mysql -h 127.0.0.1 < /dev/null 2>&1; exit 0"]
-```
-
-Run the load generation job
-
-```
-kubectl apply -f sql-loadgen.yaml
-```
-
-## Step-4: View performance and storage consumption stats using mayactl
-
-Performance and capacity usage stats on the OpenEBS storage volume can be viewed
-by executing the following mayactl command inside the maya-apiserver pod. Follow
-the sequence of steps below to achieve this:
-
-Start an interactive bash console for the maya-apiserver container
-
-```
-kubectl exec -it maya-apiserver-1633167387-5ss2w /bin/bash
-```
-
-Look up the storage volume name using the ```vsm-list``` command
-
-```
-karthik@MayaMaster:~$ kubectl exec -it maya-apiserver-1633167387-5ss2w /bin/bash
-
-root@maya-apiserver-1633167387-5ss2w:/# maya vsm-list
-Name Status
-pvc-016e9a68-71c1-11e7-9fea-000c298ff5fc Running
-```
-
-Get the performance and capacity usage stats using the ```vsm-stats``` command.
-
-```
-root@maya-apiserver-1633167387-5ss2w:/# maya vsm-stats pvc-016e9a68-71c1-11e7-9fea-000c298ff5fc
-------------------------------------
- IQN : iqn.2016-09.com.openebs.jiva:pvc-016e9a68-71c1-11e7-9fea-000c298ff5fc
- Volume : pvc-016e9a68-71c1-11e7-9fea-000c298ff5fc
- Portal : 10.109.70.220:3260
- Size : 5G
-
- Replica| Status| DataUpdateIndex|
- | | |
- 10.36.0.3| Online| 4341|
- 10.44.0.2| Online| 4340|
-
------------- Performance Stats ----------
-
- r/s| w/s| r(MB/s)| w(MB/s)| rLat(ms)| wLat(ms)| rBlk(KB)| wBlk(KB)|
- 0| 14| 0.000| 14.000| 0.000| 71.325| 0| 1024|
-
------------- Capacity Stats -------------
- Logical(GB)| Used(GB)|
- 0.074219| 0.000000|
-
-```
-The above command can be run under ```watch``` with a desired interval to monitor the stats continuously:
-
-```
-watch -n 1 maya vsm-stats pvc-016e9a68-71c1-11e7-9fea-000c298ff5fc
-```
diff --git a/k8s/demo/percona/demo-percona-mysql-pvc.yaml b/k8s/demo/percona/demo-percona-mysql-pvc.yaml
deleted file mode 100644
index 50e26d7da4..0000000000
--- a/k8s/demo/percona/demo-percona-mysql-pvc.yaml
+++ /dev/null
@@ -1,45 +0,0 @@
----
-apiVersion: v1
-kind: Pod
-metadata:
-  name: percona
-  labels:
-    name: percona
-spec:
-  securityContext:
-    fsGroup: 999
-  containers:
-  - resources:
-      limits:
-        cpu: 0.5
-    name: percona
-    image: percona
-    args:
-    - "--ignore-db-dir"
-    - "lost+found"
-    env:
-    - name: MYSQL_ROOT_PASSWORD
-      value: k8sDem0
-    ports:
-    - containerPort: 3306
-      name: percona
-    volumeMounts:
-    - mountPath: /var/lib/mysql
-      name: demo-vol1
-  volumes:
-  - name: demo-vol1
-    persistentVolumeClaim:
-      claimName: demo-vol1-claim
----
-kind: PersistentVolumeClaim
-apiVersion: v1
-metadata:
-  name: demo-vol1-claim
-spec:
-  storageClassName: openebs-jiva-default
-  accessModes:
-  - ReadWriteOnce
-  resources:
-    requests:
-      storage: 5G
-
diff --git a/k8s/demo/percona/manage-mysql-with-volume-snapshots.md b/k8s/demo/percona/manage-mysql-with-volume-snapshots.md
deleted file mode 100644
index 31f598bd08..0000000000
--- a/k8s/demo/percona/manage-mysql-with-volume-snapshots.md
+++ /dev/null
@@ -1,237 +0,0 @@
-# Manage MySQL With Database Volume Snapshots
-
-This tutorial provides instructions to create point-in-time snapshots of a MySQL instance and restore the database from existing snapshots.
-
-## Prerequisite
-
-- A fully configured Kubernetes cluster running the Percona-MySQL deployment with an OpenEBS storage class (you can use the sample
-  deployment specification *percona-openebs-deployment.yml* available in this directory).
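If the deployment is not yet running, it can be created from that sample spec first (assuming the default namespace, as in the listing below):

```
kubectl apply -f percona-openebs-deployment.yml
```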
- -``` -test@Master:~$ kubectl get pods -NAME READY STATUS RESTARTS AGE -maya-apiserver-3416621614-g6tmq 1/1 Running 1 7d -openebs-provisioner-4230626287-503dv 1/1 Running 1 7d -percona-1869177642-x89sb 1/1 Running 0 2m -pvc-44f45e05-c61f-11e7-a0eb-000c298ff5fc-ctrl-3041181545-q5jzf 1/1 Running 0 2m -pvc-44f45e05-c61f-11e7-a0eb-000c298ff5fc-rep-3963308777-15g3p 1/1 Running 0 2m -pvc-44f45e05-c61f-11e7-a0eb-000c298ff5fc-rep-3963308777-dskm9 1/1 Running 0 2m -``` - -All the steps described should be performed on the Kubernetes master, unless specified otherwise. - -## Step-1: Create a test database - -- Run an interactive shell for the Percona-MySQL pod using the kubectl exec command. - - ``` - test@Master:~$ kubectl exec -it percona-1869177642-x89sb /bin/bash - root@percona-1869177642-x89sb:/# - ``` -- Create a test database with a data record using the mysql client. - - ``` - root@percona-1869177642-x89sb:/# mysql -uroot -pk8sDem0; - mysql: [Warning] Using a password on the command line interface can be insecure. - Welcome to the MySQL monitor. Commands end with ; or \g. - Your MySQL connection id is 3 - Server version: 5.7.19-17 Percona Server (GPL), Release '17', Revision 'e19a6b7b73f' - - Copyright (c) 2009-2017 Percona LLC and/or its affiliates - Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved. - - Oracle is a registered trademark of Oracle Corporation and/or its - affiliates. Other names may be trademarks of their respective - owners. - - Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. - - mysql> create database testdb; - Query OK, 1 row affected (0.00 sec) - - mysql> show databases; - +--------------------+ - | Database | - +--------------------+ - | information_schema | - | mysql | - | performance_schema | - | sys | - | testdb | - +--------------------+ - 5 rows in set (0.00 sec) - - mysql> use testdb; - Database changed - - mysql> CREATE TABLE Hardware (Name VARCHAR(20),HWtype VARCHAR(20),Model VARCHAR(20)); - Query OK, 0 rows affected (0.16 sec) - - mysql> - mysql> INSERT INTO Hardware (Name,HWtype,Model) VALUES ('TestBox','Server','DellR820'); - Query OK, 1 row affected (0.01 sec) - - mysql> - mysql> select * from Hardware; - +---------+--------+----------+ - | Name | HWtype | Model | - +---------+--------+----------+ - | TestBox | Server | DellR820 | - +---------+--------+----------+ - 1 row in set (0.00 sec) - - - mysql> exit - Bye - root@percona-1869177642-x89sb:/# - root@percona-1869177642-x89sb:/# exit - ``` - -## Step-2: Creating MySQL Database Volume Snapshot - -- Identify the name of the MySQL data volume by executing the following mayactl command. Typically, the -OpenEBS pod names are derived from the volume name, with the string before the "ctrl" or "rep" representing the volume name. - - ``` - test@Master:~$ kubectl exec maya-apiserver-3416621614-g6tmq -c maya-apiserver -- maya volume list - Name Status - pvc-44f45e05-c61f-11e7-a0eb-000c298ff5fc Running - ``` -- Create the volume snapshot by executing the following mayactl command - - ``` - test@Master:~$ kubectl exec maya-apiserver-3416621614-g6tmq -c maya-apiserver -- maya snapshot create -volname pvc-44f45e05-c61f-11e7-a0eb-000c298ff5fc -snapname snap1 - - Creating Snapshot of Volume : pvc-44f45e05-c61f-11e7-a0eb-000c298ff5fc - Created Snapshot is: snap1 - ``` - -## Step-3: Make changes to MySQL server - -- Delete the test database created in the previous steps. 
-
-  ```
-  test@Master:~$ kubectl exec -it percona-1869177642-x89sb /bin/bash
-  root@percona-1869177642-x89sb:/#
-
-  root@percona-1869177642-x89sb:/# mysql -uroot -pk8sDem0;
-  :
-  mysql> drop database testdb;
-  Query OK, 1 row affected (0.73 sec)
-
-  mysql> show databases;
-  +--------------------+
-  | Database           |
-  +--------------------+
-  | information_schema |
-  | mysql              |
-  | performance_schema |
-  | sys                |
-  +--------------------+
-  4 rows in set (0.00 sec)
-
-  mysql> exit
-  Bye
-  ```
-
-## Step-4: Restore snapshot on the OpenEBS storage volume
-
-- Revert to the snapshot created earlier by executing the following mayactl command
-
-  ```
-  test@Master:~$ kubectl exec maya-apiserver-3416621614-g6tmq -c maya-apiserver -- maya snapshot revert -volname pvc-44f45e05-c61f-11e7-a0eb-000c298ff5fc -snapname snap1
-  Snapshot reverted: snap1
-  ```
-
-## Step-5: Delete the Percona pod to force reschedule and remount
-
-- The changes caused by the snapshot restore operation on the database can be viewed only when the data volume is remounted.
-This can be achieved if the pod is rescheduled. To force a reschedule, delete the pod (since the Percona application has been
-launched as a Kubernetes Deployment, the pod will be rescheduled/recreated on the same or another available node).
-
-  ```
-  test@Master:~$ kubectl delete pod percona-1869177642-x89sb
-  pod "percona-1869177642-x89sb" deleted
-  ```
-  Verify that the pod is rescheduled and has restarted successfully
-
-  ```
-  test@Master:~$ kubectl get pods
-  NAME                                                             READY     STATUS    RESTARTS   AGE
-  maya-apiserver-3416621614-g6tmq                                  1/1       Running   1          7d
-  openebs-provisioner-4230626287-503dv                             1/1       Running   1          7d
-  percona-1869177642-llgj5                                         1/1       Running   0          2m
-  pvc-44f45e05-c61f-11e7-a0eb-000c298ff5fc-ctrl-3041181545-q5jzf   1/1       Running   0          2m
-  pvc-44f45e05-c61f-11e7-a0eb-000c298ff5fc-rep-3963308777-15g3p    1/1       Running   0          2m
-  pvc-44f45e05-c61f-11e7-a0eb-000c298ff5fc-rep-3963308777-dskm9    1/1       Running   0          2m
-  ```
-
-## Step-6: Verify successful restore of database
-
-- Verify that the database "testdb" created before the snapshot was taken is present. Read the table content to confirm successful
-restore.
-
-  ```
-  test@Master:~$ kubectl exec -it percona-1869177642-llgj5 /bin/bash
-  root@percona-1869177642-llgj5:/#
-  ```
-  ```
-  root@percona-1869177642-llgj5:/# mysql -uroot -pk8sDem0;
-  mysql: [Warning] Using a password on the command line interface can be insecure.
-  Welcome to the MySQL monitor.  Commands end with ; or \g.
-  Your MySQL connection id is 3
-  Server version: 5.7.19-17 Percona Server (GPL), Release '17', Revision 'e19a6b7b73f'
-
-  Copyright (c) 2009-2017 Percona LLC and/or its affiliates
-  Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.
-
-  Oracle is a registered trademark of Oracle Corporation and/or its
-  affiliates. Other names may be trademarks of their respective
-  owners.
-
-  Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
- - mysql> show databases; - +--------------------+ - | Database | - +--------------------+ - | information_schema | - | mysql | - | performance_schema | - | sys | - | testdb | - +--------------------+ - 5 rows in set (0.00 sec) - - mysql> use testdb; - Database changed - - mysql> select * from Hardware; - +---------+--------+----------+ - | Name | HWtype | Model | - +---------+--------+----------+ - | TestBox | Server | DellR820 | - +---------+--------+----------+ - 1 row in set (0.00 sec) - - mysql> exit - Bye - root@percona-1869177642-llgj5:/# - root@percona-1869177642-llgj5:/# exit - -## Notes - -- If the above procedure is repeated with a larger database load, ensure that the ```flush tables with read lock;``` query is executed -before the snapshot is created. This will flush tables to disk and ensure there are no pending/in-flight queries. Subsequent modifications -to the database can be carried out after executing the ```unlock tables``` query. - - - - - - - - - - - diff --git a/k8s/demo/percona/mysql-exporter.json b/k8s/demo/percona/mysql-exporter.json deleted file mode 100644 index e2b94aad5b..0000000000 --- a/k8s/demo/percona/mysql-exporter.json +++ /dev/null @@ -1,1129 +0,0 @@ -{ - "__inputs": [ - { - "name": "DS_PROMETHEUS", - "label": "prometheus", - "description": "", - "type": "datasource", - "pluginId": "prometheus", - "pluginName": "Prometheus" - } - ], - "__requires": [ - { - "type": "grafana", - "id": "grafana", - "name": "Grafana", - "version": "5.1.2" - }, - { - "type": "panel", - "id": "graph", - "name": "Graph", - "version": "5.0.0" - }, - { - "type": "datasource", - "id": "prometheus", - "name": "Prometheus", - "version": "5.0.0" - }, - { - "type": "panel", - "id": "singlestat", - "name": "Singlestat", - "version": "5.0.0" - } - ], - "annotations": { - "list": [ - { - "builtIn": 1, - "datasource": "-- Grafana --", - "enable": true, - "hide": true, - "iconColor": "rgba(0, 211, 255, 1)", - "name": "Annotations & Alerts", - "type": "dashboard" - } - ] - }, - "editable": true, - "gnetId": "", - "graphTooltip": 0, - "id": null, - "iteration": 1527084642291, - "links": [], - "panels": [ - { - "collapsed": false, - "gridPos": { - "h": 1, - "w": 24, - "x": 0, - "y": 0 - }, - "id": 17, - "panels": [], - "title": "Global status", - "type": "row" - }, - { - "cacheTimeout": null, - "colorBackground": true, - "colorValue": false, - "colors": [ - "#bf1b00", - "#508642", - "#ef843c" - ], - "datasource": "${DS_PROMETHEUS}", - "format": "none", - "gauge": { - "maxValue": 1, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "gridPos": { - "h": 7, - "w": 6, - "x": 0, - "y": 1 - }, - "id": 11, - "interval": null, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": true, - "lineColor": "rgb(31, 120, 193)", - "show": true - }, - "tableColumn": "", - "targets": [ - { - "expr": "mysql_up{release=\"$release\"}", - "format": "time_series", - "intervalFactor": 1, - "refId": "A" - } - ], - "thresholds": "1,2", - "title": "Instance Up", - "type": "singlestat", - "valueFontSize": "80%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - 
"value": "null" - } - ], - "valueName": "current" - }, - { - "cacheTimeout": null, - "colorBackground": true, - "colorValue": false, - "colors": [ - "#d44a3a", - "rgba(237, 129, 40, 0.89)", - "#508642" - ], - "datasource": "${DS_PROMETHEUS}", - "format": "s", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "gridPos": { - "h": 7, - "w": 6, - "x": 6, - "y": 1 - }, - "id": 15, - "interval": null, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": true - }, - "tableColumn": "", - "targets": [ - { - "expr": "mysql_global_status_uptime{release=\"$release\"}", - "format": "time_series", - "intervalFactor": 1, - "refId": "A" - } - ], - "thresholds": "25200,32400", - "title": "Uptime", - "type": "singlestat", - "valueFontSize": "80%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "current" - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "${DS_PROMETHEUS}", - "fill": 1, - "gridPos": { - "h": 7, - "w": 12, - "x": 12, - "y": 1 - }, - "id": 29, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": false, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "null", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "mysql_global_status_max_used_connections{release=\"$release\"}", - "format": "time_series", - "intervalFactor": 1, - "legendFormat": "current", - "refId": "A" - }, - { - "expr": "mysql_global_variables_max_connections{release=\"$release\"}", - "format": "time_series", - "intervalFactor": 1, - "legendFormat": "Max", - "refId": "B" - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Mysql Connections", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "collapsed": false, - "gridPos": { - "h": 1, - "w": 24, - "x": 0, - "y": 8 - }, - "id": 19, - "panels": [], - "title": "I/O", - "type": "row" - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "${DS_PROMETHEUS}", - "fill": 1, - "gridPos": { - "h": 9, - "w": 12, - "x": 0, - "y": 9 - }, - "id": 5, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "null", - "percentage": false, - 
"pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [ - { - "alias": "write", - "transform": "negative-Y" - } - ], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "irate(mysql_global_status_innodb_data_reads{release=\"$release\"}[10m])", - "format": "time_series", - "intervalFactor": 1, - "legendFormat": "reads", - "refId": "A" - }, - { - "expr": "irate(mysql_global_status_innodb_data_writes{release=\"$release\"}[10m])", - "format": "time_series", - "intervalFactor": 1, - "legendFormat": "write", - "refId": "B" - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "mysql disk reads vs writes", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "${DS_PROMETHEUS}", - "fill": 1, - "gridPos": { - "h": 9, - "w": 12, - "x": 12, - "y": 9 - }, - "id": 9, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": false, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "null", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [ - { - "alias": "/sent/", - "transform": "negative-Y" - } - ], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "irate(mysql_global_status_bytes_received{release=\"$release\"}[5m])", - "format": "time_series", - "intervalFactor": 1, - "legendFormat": "received", - "refId": "A" - }, - { - "expr": "irate(mysql_global_status_bytes_sent{release=\"$release\"}[5m])", - "format": "time_series", - "intervalFactor": 1, - "legendFormat": "sent", - "refId": "B" - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "mysql network received vs sent", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "${DS_PROMETHEUS}", - "fill": 1, - "gridPos": { - "h": 7, - "w": 12, - "x": 0, - "y": 18 - }, - "id": 2, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": false, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "null", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "irate(mysql_global_status_commands_total{release=\"$release\"}[5m]) > 0", - "format": "time_series", - 
"intervalFactor": 1, - "legendFormat": "{{ command }} - {{ release }}", - "refId": "A" - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Query rates", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "${DS_PROMETHEUS}", - "fill": 1, - "gridPos": { - "h": 7, - "w": 12, - "x": 12, - "y": 18 - }, - "id": 25, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": false, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "null", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "mysql_global_status_threads_running{release=\"$release\"} ", - "format": "time_series", - "intervalFactor": 1, - "refId": "A" - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Running Threads", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "decimals": null, - "format": "short", - "label": null, - "logBase": 1, - "max": "15", - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "collapsed": false, - "gridPos": { - "h": 1, - "w": 24, - "x": 0, - "y": 25 - }, - "id": 21, - "panels": [], - "title": "Errors", - "type": "row" - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "${DS_PROMETHEUS}", - "description": "The number of connections that were aborted because the client died without closing the connection properly. 
See Section B.5.2.10, “Communication Errors and Aborted Connections”.", - "fill": 1, - "gridPos": { - "h": 9, - "w": 12, - "x": 0, - "y": 26 - }, - "id": 13, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": false, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "null", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "mysql_global_status_aborted_clients{release=\"$release\"}", - "format": "time_series", - "intervalFactor": 1, - "refId": "B" - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Aborted clients", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "${DS_PROMETHEUS}", - "description": "The number of failed attempts to connect to the MySQL server. See Section B.5.2.10, “Communication Errors and Aborted Connections”.\n\nFor additional connection-related information, check the Connection_errors_xxx status variables and the host_cache table.", - "fill": 1, - "gridPos": { - "h": 9, - "w": 12, - "x": 12, - "y": 26 - }, - "id": 4, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": false, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "null", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "mysql_global_status_aborted_connects{release=\"$release\"}", - "format": "time_series", - "intervalFactor": 1, - "legendFormat": "", - "refId": "A" - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "mysql aborted Connects", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "collapsed": false, - "gridPos": { - "h": 1, - "w": 24, - "x": 0, - "y": 35 - }, - "id": 23, - "panels": [], - "title": "Disk usage", - "type": "row" - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "${DS_PROMETHEUS}", - "fill": 1, - "gridPos": { - "h": 9, - "w": 12, - "x": 0, - "y": 36 - }, - "id": 27, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "null", - "percentage": false, - 
"pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum(mysql_info_schema_table_size{component=\"data_length\",release=\"$release\"})", - "format": "time_series", - "intervalFactor": 1, - "legendFormat": "Tables", - "refId": "A" - }, - { - "expr": "sum(mysql_info_schema_table_size{component=\"index_length\",release=\"$release\"})", - "format": "time_series", - "intervalFactor": 1, - "legendFormat": "Indexes", - "refId": "B" - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Disk usage tables / indexes", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "decbytes", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "${DS_PROMETHEUS}", - "fill": 1, - "gridPos": { - "h": 9, - "w": 12, - "x": 12, - "y": 36 - }, - "id": 7, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": false, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "null", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum(mysql_info_schema_table_rows{release=\"$release\"})", - "format": "time_series", - "intervalFactor": 1, - "refId": "A" - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Sum of all rows", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "decimals": null, - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - } - ], - "schemaVersion": 16, - "style": "dark", - "tags": [ - "Databases", - "backgroundservices" - ], - "templating": { - "list": [ - { - "allValue": null, - "current": {}, - "datasource": "${DS_PROMETHEUS}", - "hide": 0, - "includeAll": false, - "label": null, - "multi": false, - "name": "release", - "options": [], - "query": "label_values(mysql_up,release)", - "refresh": 1, - "regex": "", - "sort": 0, - "tagValuesQuery": "", - "tags": [], - "tagsQuery": "", - "type": "query", - "useTags": false - } - ] - }, - "time": { - "from": "now-1h", - "to": "now" - }, - "timepicker": { - "refresh_intervals": [ - "5s", - "10s", - "30s", - "1m", - "5m", - "15m", - "30m", - "1h", - "2h", - "1d" - ], - "time_options": [ - "5m", - "15m", - "1h", - "6h", - "12h", - "24h", - "2d", - "7d", - "30d" - ] - }, - "timezone": "", - "title": "Mysql - Prometheus", - "uid": "6-kPlS7ik", - "version": 16, - "description": "Basic Mysql dashboard for the prometheus expo diff --git a/k8s/demo/percona/mysql-exporter.yaml 
b/k8s/demo/percona/mysql-exporter.yaml deleted file mode 100644 index 2659914014..0000000000 --- a/k8s/demo/percona/mysql-exporter.yaml +++ /dev/null @@ -1,60 +0,0 @@ ---- -# Source: prometheus-mysql-exporter/templates/service.yaml -apiVersion: v1 -kind: Service -metadata: - name: prometheus-mysql-exporter - labels: - app: prometheus-mysql-exporter -spec: - type: ClusterIP - ports: - - port: 9104 - targetPort: 9104 - protocol: TCP - name: mysql-exporter - selector: - app: prometheus-mysql-exporter ---- -# Source: prometheus-mysql-exporter/templates/deployment.yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: prometheus-mysql-exporter - labels: - app: prometheus-mysql-exporter -spec: - replicas: 1 - selector: - matchLabels: - app: prometheus-mysql-exporter - template: - metadata: - labels: - app: prometheus-mysql-exporter - annotations: - prometheus.io/path: /metrics - prometheus.io/port: "9104" - prometheus.io/scrape: "true" - - spec: - containers: - - name: prometheus-mysql-exporter - image: "prom/mysqld-exporter:v0.11.0" - imagePullPolicy: IfNotPresent - env: - - name: DATA_SOURCE_NAME - value: "root:k8sDem0@(10.47.249.175:3306)/" - ports: - - containerPort: 9104 - livenessProbe: - httpGet: - path: / - port: 9104 - readinessProbe: - httpGet: - path: / - port: 9104 - resources: - {} ---- diff --git a/k8s/demo/percona/percona-cstor-clone.yaml b/k8s/demo/percona/percona-cstor-clone.yaml deleted file mode 100644 index 78d576b005..0000000000 --- a/k8s/demo/percona/percona-cstor-clone.yaml +++ /dev/null @@ -1,60 +0,0 @@ ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: percona-clone-cstor - labels: - name: percona-clone-cstor -spec: - replicas: 1 - selector: - matchLabels: - name: percona-clone-cstor - template: - metadata: - labels: - name: percona-clone-cstor - spec: - securityContext: - fsGroup: 999 - tolerations: - - key: "ak" - value: "av" - operator: "Equal" - effect: "NoSchedule" - containers: - - resources: - limits: - cpu: 0.5 - name: percona-clone-cstor - image: percona - args: - - "--ignore-db-dir" - - "lost+found" - env: - - name: MYSQL_ROOT_PASSWORD - value: k8sDem0 - ports: - - containerPort: 3306 - name: percona - volumeMounts: - - mountPath: /var/lib/mysql - name: demo-cstor-vol1 - volumes: - - name: demo-cstor-vol1 - persistentVolumeClaim: - claimName: demo-cstor-snap-vol-claim ---- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: demo-cstor-snap-vol-claim - namespace: default - annotations: - snapshot.alpha.kubernetes.io/snapshot: cstor-snapshot-demo -spec: - storageClassName: openebs-snapshot-promoter - accessModes: [ "ReadWriteOnce" ] - resources: - requests: - storage: 5Gi diff --git a/k8s/demo/percona/percona-cstor-snap.yaml b/k8s/demo/percona/percona-cstor-snap.yaml deleted file mode 100644 index bf1b3ea0d4..0000000000 --- a/k8s/demo/percona/percona-cstor-snap.yaml +++ /dev/null @@ -1,7 +0,0 @@ -apiVersion: volumesnapshot.external-storage.k8s.io/v1 -kind: VolumeSnapshot -metadata: - name: cstor-snapshot-demo - namespace: default -spec: - persistentVolumeClaimName: demo-cstor-sparse-vol1-claim diff --git a/k8s/demo/percona/percona-cstor-sparse.yaml b/k8s/demo/percona/percona-cstor-sparse.yaml deleted file mode 100644 index 6e084064cd..0000000000 --- a/k8s/demo/percona/percona-cstor-sparse.yaml +++ /dev/null @@ -1,72 +0,0 @@ ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: percona-cstor - labels: - name: percona-cstor -spec: - replicas: 1 - selector: - matchLabels: - name: percona-cstor - template: - metadata: - 
labels: - name: percona-cstor - spec: - securityContext: - fsGroup: 999 - tolerations: - - key: "ak" - value: "av" - operator: "Equal" - effect: "NoSchedule" - containers: - - resources: - limits: - cpu: 0.5 - name: percona-cstor - image: percona - args: - - "--ignore-db-dir" - - "lost+found" - env: - - name: MYSQL_ROOT_PASSWORD - value: k8sDem0 - ports: - - containerPort: 3306 - name: percona - volumeMounts: - - mountPath: /var/lib/mysql - name: demo-cstor-vol1 - volumes: - - name: demo-cstor-vol1 - persistentVolumeClaim: - claimName: demo-cstor-sparse-vol1-claim ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: demo-cstor-sparse-vol1-claim -spec: - storageClassName: openebs-cstor-sparse - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 5G ---- -apiVersion: v1 -kind: Service -metadata: - name: percona-cstor-mysql - labels: - name: percona-cstor-mysql -spec: - ports: - - port: 3306 - targetPort: 3306 - selector: - name: percona-cstor - diff --git a/k8s/demo/percona/percona-jiva-1r.yaml b/k8s/demo/percona/percona-jiva-1r.yaml deleted file mode 100644 index 6f12d3f3cd..0000000000 --- a/k8s/demo/percona/percona-jiva-1r.yaml +++ /dev/null @@ -1,64 +0,0 @@ ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: percona-j1r - labels: - name: percona-j1r -spec: - replicas: 1 - selector: - matchLabels: - name: percona-j1r - template: - metadata: - labels: - name: percona-j1r - spec: - securityContext: - fsGroup: 999 - containers: - - name: percona - image: percona - args: - - "--ignore-db-dir" - - "lost+found" - env: - - name: MYSQL_ROOT_PASSWORD - value: k8sDem0 - ports: - - containerPort: 3306 - name: percona - volumeMounts: - - mountPath: /var/lib/mysql - name: demo-vol1 - volumes: - - name: demo-vol1 - persistentVolumeClaim: - claimName: percona-j1r-claim ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: percona-j1r-claim -spec: - storageClassName: jiva-1r - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 5G ---- -apiVersion: v1 -kind: Service -metadata: - name: percona-mysql-j1r - labels: - name: percona-mysql-j1r -spec: - ports: - - port: 3306 - targetPort: 3306 - selector: - name: percona-j1r - diff --git a/k8s/demo/percona/percona-jiva-default.yaml b/k8s/demo/percona/percona-jiva-default.yaml deleted file mode 100644 index 716b4bc7ff..0000000000 --- a/k8s/demo/percona/percona-jiva-default.yaml +++ /dev/null @@ -1,64 +0,0 @@ ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: percona-jd - labels: - name: percona-jd -spec: - replicas: 1 - selector: - matchLabels: - name: percona-jd - template: - metadata: - labels: - name: percona-jd - spec: - securityContext: - fsGroup: 999 - containers: - - name: percona - image: percona - args: - - "--ignore-db-dir" - - "lost+found" - env: - - name: MYSQL_ROOT_PASSWORD - value: k8sDem0 - ports: - - containerPort: 3306 - name: percona-jd - volumeMounts: - - mountPath: /var/lib/mysql - name: demo-vol1 - volumes: - - name: demo-vol1 - persistentVolumeClaim: - claimName: percona-jd-claim ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: percona-jd-claim -spec: - storageClassName: openebs-jiva-default - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 5G ---- -apiVersion: v1 -kind: Service -metadata: - name: percona-mysql-jd - labels: - name: percona-mysql-jd -spec: - ports: - - port: 3306 - targetPort: 3306 - selector: - name: percona-jd - diff --git a/k8s/demo/percona/percona-openebs-cstor-sparse-deployment.yaml 
b/k8s/demo/percona/percona-openebs-cstor-sparse-deployment.yaml deleted file mode 100644 index 6e084064cd..0000000000 --- a/k8s/demo/percona/percona-openebs-cstor-sparse-deployment.yaml +++ /dev/null @@ -1,72 +0,0 @@ ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: percona-cstor - labels: - name: percona-cstor -spec: - replicas: 1 - selector: - matchLabels: - name: percona-cstor - template: - metadata: - labels: - name: percona-cstor - spec: - securityContext: - fsGroup: 999 - tolerations: - - key: "ak" - value: "av" - operator: "Equal" - effect: "NoSchedule" - containers: - - resources: - limits: - cpu: 0.5 - name: percona-cstor - image: percona - args: - - "--ignore-db-dir" - - "lost+found" - env: - - name: MYSQL_ROOT_PASSWORD - value: k8sDem0 - ports: - - containerPort: 3306 - name: percona - volumeMounts: - - mountPath: /var/lib/mysql - name: demo-cstor-vol1 - volumes: - - name: demo-cstor-vol1 - persistentVolumeClaim: - claimName: demo-cstor-sparse-vol1-claim ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: demo-cstor-sparse-vol1-claim -spec: - storageClassName: openebs-cstor-sparse - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 5G ---- -apiVersion: v1 -kind: Service -metadata: - name: percona-cstor-mysql - labels: - name: percona-cstor-mysql -spec: - ports: - - port: 3306 - targetPort: 3306 - selector: - name: percona-cstor - diff --git a/k8s/demo/percona/percona-openebs-deployment-create-snap.yaml b/k8s/demo/percona/percona-openebs-deployment-create-snap.yaml deleted file mode 100644 index 09e6e0e750..0000000000 --- a/k8s/demo/percona/percona-openebs-deployment-create-snap.yaml +++ /dev/null @@ -1,7 +0,0 @@ -apiVersion: volumesnapshot.external-storage.k8s.io/v1 -kind: VolumeSnapshot -metadata: - name: snapshot-demo - namespace: default -spec: - persistentVolumeClaimName: demo-vol1-claim diff --git a/k8s/demo/percona/percona-openebs-deployment-promote-snap.yaml b/k8s/demo/percona/percona-openebs-deployment-promote-snap.yaml deleted file mode 100644 index 2a829f3208..0000000000 --- a/k8s/demo/percona/percona-openebs-deployment-promote-snap.yaml +++ /dev/null @@ -1,13 +0,0 @@ -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: demo-snap-vol-claim - namespace: default - annotations: - snapshot.alpha.kubernetes.io/snapshot: snapshot-demo -spec: - storageClassName: openebs-snapshot-promoter - accessModes: [ "ReadWriteOnce" ] - resources: - requests: - storage: 5Gi diff --git a/k8s/demo/percona/percona-openebs-deployment.yaml b/k8s/demo/percona/percona-openebs-deployment.yaml deleted file mode 100644 index 1bbac713cd..0000000000 --- a/k8s/demo/percona/percona-openebs-deployment.yaml +++ /dev/null @@ -1,72 +0,0 @@ ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: percona - labels: - name: percona -spec: - replicas: 1 - selector: - matchLabels: - name: percona - template: - metadata: - labels: - name: percona - spec: - securityContext: - fsGroup: 999 - tolerations: - - key: "ak" - value: "av" - operator: "Equal" - effect: "NoSchedule" - containers: - - resources: - limits: - cpu: 0.5 - name: percona - image: percona - args: - - "--ignore-db-dir" - - "lost+found" - env: - - name: MYSQL_ROOT_PASSWORD - value: k8sDem0 - ports: - - containerPort: 3306 - name: percona - volumeMounts: - - mountPath: /var/lib/mysql - name: demo-vol1 - volumes: - - name: demo-vol1 - persistentVolumeClaim: - claimName: demo-vol1-claim ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: demo-vol1-claim -spec: - 
storageClassName: openebs-jiva-default - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 5G ---- -apiVersion: v1 -kind: Service -metadata: - name: percona-mysql - labels: - name: percona-mysql -spec: - ports: - - port: 3306 - targetPort: 3306 - selector: - name: percona - diff --git a/k8s/demo/percona/sql-loadgen.yaml b/k8s/demo/percona/sql-loadgen.yaml deleted file mode 100644 index 3bbee1f98c..0000000000 --- a/k8s/demo/percona/sql-loadgen.yaml +++ /dev/null @@ -1,18 +0,0 @@ ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: sql-loadgen -spec: - template: - metadata: - name: sql-loadgen - spec: - restartPolicy: Never - containers: - - name: sql-loadgen - image: openebs/tests-mysql-client - command: ["/bin/bash"] - args: ["-c", "timelimit -t 300 sh MySQLLoadGenerate.sh 10.47.250.49 > /dev/null 2>&1; exit 0"] - tty: true - diff --git a/k8s/demo/pvc-single-replica-jiva.yaml b/k8s/demo/pvc-single-replica-jiva.yaml deleted file mode 100644 index ec0f8c1837..0000000000 --- a/k8s/demo/pvc-single-replica-jiva.yaml +++ /dev/null @@ -1,24 +0,0 @@ ---- -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: openebs-standalone - annotations: - openebs.io/cas-type: jiva - cas.openebs.io/config: | - - name: ReplicaCount - value: "1" -provisioner: openebs.io/provisioner-iscsi ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: demo-vol1-claim -spec: - storageClassName: openebs-standalone - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 5Gi - diff --git a/k8s/demo/pvc-standard-cstor-default.yaml b/k8s/demo/pvc-standard-cstor-default.yaml deleted file mode 100644 index f3cb59094c..0000000000 --- a/k8s/demo/pvc-standard-cstor-default.yaml +++ /dev/null @@ -1,11 +0,0 @@ -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: demo-cstor-vol1-claim -spec: - storageClassName: openebs-cstor-sparse - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 4G diff --git a/k8s/demo/pvc-standard-jiva-default.yaml b/k8s/demo/pvc-standard-jiva-default.yaml deleted file mode 100644 index 910c60edf9..0000000000 --- a/k8s/demo/pvc-standard-jiva-default.yaml +++ /dev/null @@ -1,12 +0,0 @@ -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: demo-vol1-claim -spec: - storageClassName: openebs-jiva-default - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 4G - diff --git a/k8s/demo/pvc.yaml b/k8s/demo/pvc.yaml deleted file mode 100644 index 782056ceef..0000000000 --- a/k8s/demo/pvc.yaml +++ /dev/null @@ -1,12 +0,0 @@ -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: demo-vol1-claim -spec: - storageClassName: openebs-standard - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 4G - diff --git a/k8s/demo/rabbitmq/cleanup.sh b/k8s/demo/rabbitmq/cleanup.sh deleted file mode 100755 index b587777d29..0000000000 --- a/k8s/demo/rabbitmq/cleanup.sh +++ /dev/null @@ -1,7 +0,0 @@ -#!/bin/bash - - -kubectl delete statefulset rabbitmq -kubectl delete svc rabbitmq rabbitmq-management -kubectl delete secrets rabbitmq-config -kubectl delete pvc -l app=rabbitmq diff --git a/k8s/demo/rabbitmq/rabbitmq-statefulset.yaml b/k8s/demo/rabbitmq/rabbitmq-statefulset.yaml deleted file mode 100644 index 3d36a81409..0000000000 --- a/k8s/demo/rabbitmq/rabbitmq-statefulset.yaml +++ /dev/null @@ -1,94 +0,0 @@ ---- -apiVersion: v1 -kind: Service -metadata: - # Expose the management HTTP port on each node - name: rabbitmq-management - labels: - app: rabbitmq -spec: - ports: - - port: 15672 - name: http - 
selector: - app: rabbitmq - type: NodePort # Or LoadBalancer in production w/ proper security ---- -apiVersion: v1 -kind: Service -metadata: - # The required headless service for StatefulSets - name: rabbitmq - labels: - app: rabbitmq -spec: - ports: - - port: 5672 - name: amqp - - port: 4369 - name: epmd - - port: 25672 - name: rabbitmq-dist - clusterIP: None - selector: - app: rabbitmq ---- -apiVersion: apps/v1 -kind: StatefulSet -metadata: - name: rabbitmq -spec: - serviceName: "rabbitmq" - replicas: 3 - selector: - matchLabels: - app: rabbitmq - template: - metadata: - labels: - app: rabbitmq - spec: - containers: - - name: rabbitmq - image: rabbitmq:3.6.6-management-alpine - lifecycle: - postStart: - exec: - command: - - /bin/sh - - -c - - > - if [ -z "$(grep rabbitmq /etc/resolv.conf)" ]; then - sed "s/^search \([^ ]\+\)/search rabbitmq.\1 \1/" /etc/resolv.conf > /etc/resolv.conf.new; - cat /etc/resolv.conf.new > /etc/resolv.conf; - rm /etc/resolv.conf.new; - fi; - until rabbitmqctl node_health_check; do sleep 1; done; - if [[ "$HOSTNAME" != "rabbitmq-0" && -z "$(rabbitmqctl cluster_status | grep rabbitmq-0)" ]]; then - rabbitmqctl stop_app; - rabbitmqctl join_cluster rabbit@rabbitmq-0; - rabbitmqctl start_app; - fi; - rabbitmqctl set_policy ha-all "." '{"ha-mode":"exactly","ha-params":3,"ha-sync-mode":"automatic"}' - env: - - name: RABBITMQ_ERLANG_COOKIE - valueFrom: - secretKeyRef: - name: rabbitmq-config - key: erlang-cookie - ports: - - containerPort: 5672 - name: amqp - volumeMounts: - - name: rabbitmq - mountPath: /var/lib/rabbitmq - volumeClaimTemplates: - - metadata: - name: rabbitmq - annotations: - volume.beta.kubernetes.io/storage-class: openebs-jiva-default - spec: - accessModes: [ "ReadWriteOnce" ] - resources: - requests: - storage: 5G diff --git a/k8s/demo/rabbitmq/run.sh b/k8s/demo/rabbitmq/run.sh deleted file mode 100755 index 5cf8b45753..0000000000 --- a/k8s/demo/rabbitmq/run.sh +++ /dev/null @@ -1,8 +0,0 @@ -#!/bin/bash - -# Create Secret Cookie -kubectl create secret generic rabbitmq-config --from-literal=erlang-cookie=rabbitmq-k8s-Dem0 - -# Apply the StatefulSet -kubectl apply -f rabbitmq-statefulset.yaml - diff --git a/k8s/demo/redis/README.md b/k8s/demo/redis/README.md deleted file mode 100644 index 65612e419a..0000000000 --- a/k8s/demo/redis/README.md +++ /dev/null @@ -1,118 +0,0 @@ -# Redis - -This document demonstrates the deployment of Redis as a StatefulSet in a Kubernetes cluster. The user can spawn a Redis StatefulSet that will use OpenEBS as its persistent storage. - -## Deploy as a StatefulSet - -Deploying Redis as a StatefulSet provides the following benefits: - -- Stable unique network identifiers. -- Stable persistent storage. -- Ordered graceful deployment and scaling. -- Ordered graceful deletion and termination. 
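-
-For example, ordered deployment is visible while the StatefulSet comes up: pods are created one at a time in ordinal order (rd-0, rd-1, rd-2, matching the StatefulSet used below), which can be watched with:
-
-```bash
-kubectl get pods -w -l app=redis
-```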
-
-## Deploy Redis with Persistent Storage
-
-Before getting started, check the status of the cluster:
-
-```bash
-ubuntu@kubemaster:~$ kubectl get nodes
-NAME            STATUS    AGE       VERSION
-kubemaster      Ready     3d        v1.8.2
-kubeminion-01   Ready     3d        v1.8.2
-kubeminion-02   Ready     3d        v1.8.2
-
-```
-
-Download and apply the Redis YAML from the OpenEBS repository:
-
-```bash
-ubuntu@kubemaster:~$ wget https://raw.githubusercontent.com/openebs/openebs/master/k8s/demo/redis/redis-statefulset.yml
-ubuntu@kubemaster:~$ kubectl apply -f redis-statefulset.yml
-
-```
-
-Get the status of running pods:
-
-```bash
-ubuntu@kubemaster:~$ kubectl get pods --all-namespaces
-NAMESPACE     NAME                                                             READY     STATUS    RESTARTS   AGE
-default       maya-apiserver-6fc5b4d59c-mg9k2                                  1/1       Running   0          6d
-default       openebs-provisioner-6d9b78696d-h647b                             1/1       Running   0          6d
-default       pvc-1f305192-ca11-11e7-892e-000c29119159-ctrl-777f4dbd8c-znd7k   1/1       Running   0          19h
-default       pvc-1f305192-ca11-11e7-892e-000c29119159-rep-7d9c58bff8-ch6xw    1/1       Running   0          19h
-default       pvc-1f305192-ca11-11e7-892e-000c29119159-rep-7d9c58bff8-jnpzn    1/1       Running   0          19h
-default       pvc-59eea5e9-ca11-11e7-892e-000c29119159-ctrl-66c4878c46-mjlzl   1/1       Running   0          19h
-default       pvc-59eea5e9-ca11-11e7-892e-000c29119159-rep-7c7c5984cd-jb9f6    1/1       Running   0          19h
-default       pvc-59eea5e9-ca11-11e7-892e-000c29119159-rep-7c7c5984cd-jml24    1/1       Running   0          19h
-default       pvc-e7b2a235-ca10-11e7-892e-000c29119159-ctrl-6478bfbff6-95gm5   1/1       Running   0          19h
-default       pvc-e7b2a235-ca10-11e7-892e-000c29119159-rep-f9f46b858-8fmt4     1/1       Running   0          19h
-default       pvc-e7b2a235-ca10-11e7-892e-000c29119159-rep-f9f46b858-jt25r     1/1       Running   0          19h
-default       rd-0                                                             1/1       Running   0          19h
-default       rd-1                                                             1/1       Running   0          19h
-default       rd-2                                                             1/1       Running   0          19h
-kube-system   etcd-o-master01                                                  1/1       Running   0          6d
-kube-system   kube-apiserver-o-master01                                        1/1       Running   0          6d
-kube-system   kube-controller-manager-o-master01                               1/1       Running   0          6d
-kube-system   kube-dns-545bc4bfd4-m4ngc                                        3/3       Running   0          6d
-kube-system   kube-proxy-4ml5l                                                 1/1       Running   0          6d
-kube-system   kube-proxy-7jlpf                                                 1/1       Running   0          6d
-kube-system   kube-proxy-cxkpc                                                 1/1       Running   0          6d
-kube-system   kube-scheduler-o-master01                                        1/1       Running   0          6d
-kube-system   weave-net-ctfk4                                                  2/2       Running   0          6d
-kube-system   weave-net-dwszp                                                  2/2       Running   0          6d
-kube-system   weave-net-pzbb7                                                  2/2       Running   0          6d
-
-```
-
-Get the status of running StatefulSets:
-
-```bash
-ubuntu@kubemaster:~$ kubectl get statefulset
-NAME      DESIRED   CURRENT   AGE
-rd        3         3         19h
-
-```
-
-Get the status of underlying persistent volumes used by the Redis StatefulSet:
-
-```bash
-ubuntu@kubemaster:~$ kubectl get pvc
-NAME           STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
-datadir-rd-0   Bound     pvc-e7b2a235-ca10-11e7-892e-000c29119159   1G         RWO            openebs-redis   19h
-datadir-rd-1   Bound     pvc-1f305192-ca11-11e7-892e-000c29119159   1G         RWO            openebs-redis   19h
-datadir-rd-2   Bound     pvc-59eea5e9-ca11-11e7-892e-000c29119159   1G         RWO            openebs-redis   19h
-
-```
-
-Get the status of the services:
-
-```bash
-ubuntu@kubemaster:~$ kubectl get svc
-NAME                                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
-kubernetes                                          ClusterIP   10.96.0.1        <none>        443/TCP             6d
-maya-apiserver-service                              ClusterIP   10.111.26.252    <none>        5656/TCP            6d
-pvc-1f305192-ca11-11e7-892e-000c29119159-ctrl-svc   ClusterIP   10.105.218.103   <none>        3260/TCP,9501/TCP   19h
-pvc-59eea5e9-ca11-11e7-892e-000c29119159-ctrl-svc   ClusterIP   10.106.116.112   <none>        3260/TCP,9501/TCP   19h
-pvc-e7b2a235-ca10-11e7-892e-000c29119159-ctrl-svc   ClusterIP   10.102.32.23     <none>        3260/TCP,9501/TCP   19h
-redis                                               ClusterIP   None             <none>        6379/TCP            19h
-
-```
-
-## Check Redis Replication
-
-Set a key:value pair in the Redis master.
-
-```bash
-ubuntu@kubemaster:~$ kubectl exec rd-0 -- /opt/redis/redis-cli -h rd-0.redis SET replicated:test true
-OK
-
-```
-
-Retrieve the value of the key from a Redis slave.
-
-```bash
-ubuntu@kubemaster:~$ kubectl exec rd-2 -- /opt/redis/redis-cli -h rd-0.redis GET replicated:test
-true
-
-```
\ No newline at end of file
diff --git a/k8s/demo/redis/redis-cluster.yml b/k8s/demo/redis/redis-cluster.yml
deleted file mode 100644
index c3398669ad..0000000000
--- a/k8s/demo/redis/redis-cluster.yml
+++ /dev/null
@@ -1,103 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
-  name: redis-master
-  labels:
-    app: redis-master
-spec:
-  clusterIP: None
-  ports:
-  - port: 6379
-  selector:
-    app: redis-master
-
----
-apiVersion: v1
-kind: Service
-metadata:
-  name: redis-replica
-  labels:
-    app: redis-replica
-spec:
-  clusterIP: None
-  ports:
-  - port: 6379
-  selector:
-    app: redis-replica
-
----
-apiVersion: apps/v1
-kind: StatefulSet
-metadata:
-  name: redis-master
-spec:
-  serviceName: redis-master
-  replicas: 1
-  selector:
-    matchLabels:
-      app: redis-master
-  template:
-    metadata:
-      name: redis-master
-      labels:
-        app: redis-master
-    spec:
-      containers:
-      - name: redis
-        image: redis:latest
-        args: [ "--appendonly","yes" ]
-        ports:
-        - containerPort: 6379
-        volumeMounts:
-        - name: redis-data
-          mountPath: /data
-  volumeClaimTemplates:
-  - metadata:
-      name: redis-data
-      annotations:
-        volume.beta.kubernetes.io/storage-class: openebs-standard
-    spec:
-      accessModes: [ "ReadWriteOnce" ]
-      resources:
-        requests:
-          storage: 5G
-
----
-apiVersion: apps/v1beta1
-kind: StatefulSet
-metadata:
-  name: redis-replica
-spec:
-  serviceName: redis-replica
-  replicas: 3
-  selector:
-    matchLabels:
-      app: redis-replica
-  template:
-    metadata:
-      name: redis-replica
-      labels:
-        app: redis-replica
-    spec:
-      containers:
-      - name: redis
-        image: redis:latest
-        args: [
-          "--appendonly","yes" ,
-          "--slaveof", "redis-master", "6379"
-        ]
-        ports:
-        - containerPort: 6379
-        volumeMounts:
-        - name: redis-data
-          mountPath: /data
-  volumeClaimTemplates:
-  - metadata:
-      name: redis-data
-      annotations:
-        volume.beta.kubernetes.io/storage-class: openebs-jiva-default
-    spec:
-      accessModes: [ "ReadWriteOnce" ]
-      resources:
-        requests:
-          storage: 5G
diff --git a/k8s/demo/redis/redis-standalone.yml b/k8s/demo/redis/redis-standalone.yml
deleted file mode 100644
index e86d1102fc..0000000000
--- a/k8s/demo/redis/redis-standalone.yml
+++ /dev/null
@@ -1,48 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
-  name: redis-standalone
-  labels:
-    app: redis-standalone
-spec:
-  clusterIP: None
-  ports:
-  - port: 6379
-  selector:
-    app: redis-standalone
----
-apiVersion: apps/v1
-kind: StatefulSet
-metadata:
-  name: redis-standalone
-spec:
-  serviceName: redis-standalone
-  replicas: 1
-  selector:
-    matchLabels:
-      app: redis-standalone
-  template:
-    metadata:
-      name: redis-standalone
-      labels:
-        app: redis-standalone
-    spec:
-      containers:
-      - name: redis
-        image: redis:latest
-        args: [ "--appendonly", "yes" ]
-        ports:
-        - containerPort: 6379
-        volumeMounts:
-        - name: redis-data
-          mountPath: /data
-  volumeClaimTemplates:
-  - metadata:
-      name: redis-data
-      annotations:
-        volume.beta.kubernetes.io/storage-class: openebs-standard
-    spec:
-      accessModes: [ "ReadWriteOnce" ]
-      resources:
-        requests:
-          storage: 5G
diff --git a/k8s/demo/redis/redis-statefulset.yml b/k8s/demo/redis/redis-statefulset.yml
deleted file mode 100644
index aa6e74f392..0000000000
--- a/k8s/demo/redis/redis-statefulset.yml
+++ /dev/null
@@ -1,102 +0,0 @@
-apiVersion: apps/v1
-kind: StatefulSet
-metadata:
-  name: rd
-spec:
-  serviceName: "redis"
-  replicas: 3
-  selector:
-    matchLabels:
-      app: redis
-  template:
-    metadata:
-      labels:
-        app: redis
-    spec:
-      initContainers:
-      - name: install
-        image: gcr.io/google_containers/redis-install-3.2.0:e2e
-        imagePullPolicy: Always
-        args:
-        - "--install-into=/opt"
-        - "--work-dir=/work-dir"
-        volumeMounts:
-        - name: opt
-          mountPath: "/opt"
-        - name: workdir
-          mountPath: "/work-dir"
-      - name: bootstrap
-        image: debian:jessie
-        command:
-        - "/work-dir/peer-finder"
-        args:
-        - -on-start="/work-dir/on-start.sh"
-        - "-service=redis"
-        env:
-        - name: POD_NAMESPACE
-          valueFrom:
-            fieldRef:
-              apiVersion: v1
-              fieldPath: metadata.namespace
-        volumeMounts:
-        - name: opt
-          mountPath: "/opt"
-        - name: workdir
-          mountPath: "/work-dir"
-      containers:
-      - name: redis
-        image: debian:jessie
-        ports:
-        - containerPort: 6379
-          name: peer
-        command:
-        - /opt/redis/redis-server
-        args:
-        - /opt/redis/redis.conf
-        readinessProbe:
-          exec:
-            command:
-            - sh
-            - -c
-            - "/opt/redis/redis-cli -h $(hostname) ping"
-          initialDelaySeconds: 15
-          timeoutSeconds: 5
-        volumeMounts:
-        - name: datadir
-          mountPath: /data
-        - name: opt
-          mountPath: /opt
-      volumes:
-      - name: opt
-        emptyDir: {}
-      - name: workdir
-        emptyDir: {}
-  volumeClaimTemplates:
-  - metadata:
-      name: datadir
-      annotations:
-        volume.beta.kubernetes.io/storage-class: openebs-jiva-default
-    spec:
-      accessModes: [ "ReadWriteOnce" ]
-      resources:
-        requests:
-          storage: 1G
----
-# A headless service to create DNS records
-apiVersion: v1
-kind: Service
-metadata:
-  annotations:
-    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
-  name: redis
-  labels:
-    app: redis
-spec:
-  ports:
-  - port: 6379
-    name: peer
-  # *.redis.default.svc.cluster.local
-  clusterIP: None
-  selector:
-    app: redis
-
diff --git a/k8s/demo/scripts/README.md b/k8s/demo/scripts/README.md
deleted file mode 100644
index 4306cf99d7..0000000000
--- a/k8s/demo/scripts/README.md
+++ /dev/null
@@ -1,63 +0,0 @@
-# Using OpenEBS Storage with Kubernetes
-
-We have made it easy to set up a demo environment for trying OpenEBS Storage with a Kubernetes cluster.
-
-All you need are **four** Ubuntu 16.04 Hosts/VMs with 8+ GB RAM and an 8+ core CPU, installed with:
-- VirtualBox 5.1 or above
-- and of course Git
-
-Set up your local demo directory, say **demo**, on those Hosts/VMs
-
-```
-mkdir demo
-cd demo
-git clone https://github.com/openebs/openebs.git
-```
-
-This may take a few minutes depending on your network speed.
-
-You will need two Hosts/VMs with **passwordless SSH** between them to set up Kubernetes:
-- Kubernetes Master (kubemaster-01)
-- Kubernetes Minion (kubeminion-01)
-
-On the Kubernetes Master, change to the **scripts** folder and run the script to install the Kubernetes Master.
-
-```
-cd openebs/k8s-demo/scripts
-./setup_k8s_master.sh
-```
-
-On the Kubernetes Minion, change to the **scripts** folder and run the script to install the Kubernetes Minion.
-
-```
-cd openebs/k8s-demo/scripts
-./setup_k8s_host.sh
-```
-
-You will need two Hosts/VMs to set up OpenEBS:
-- OpenEBS Maya Master (omm-01)
-- OpenEBS Storage Host (osh-01)
-
-On the OpenEBS Maya Master, change to the **scripts** folder and run the script to install the OpenEBS Maya Master.
-
-```
-cd openebs/k8s-demo/scripts
-./setup_omm.sh
-source ~/.profile
-```
-
-On the OpenEBS Storage Host, change to the **scripts** folder and run the script to install the OpenEBS Storage Host.
-
-```
-cd openebs/k8s-demo/scripts
-./setup_osh.sh
-source ~/.profile
-```
-You will have the following machines ready to use:
-- Kubernetes Master (kubemaster-01)
-- Kubernetes Minion (kubeminion-01)
-- OpenEBS Maya Master (omm-01)
-- OpenEBS Storage Host (osh-01)
-
-## How-To
-- [Setup Passwordless SSH](./setup-passwordless-ssh.md)
diff --git a/k8s/demo/scripts/setup-passwordless-ssh.md b/k8s/demo/scripts/setup-passwordless-ssh.md
deleted file mode 100644
index addeba9ec6..0000000000
--- a/k8s/demo/scripts/setup-passwordless-ssh.md
+++ /dev/null
@@ -1,47 +0,0 @@
-# Setting Up Passwordless SSH between Local and Remote Host
-
-A passwordless SSH setup between a local and a remote host can be completed in three easy steps:
-
-### Step 1:
-
-Create an SSH key for a local user on the local-host using **ssh-keygen**:
-
-```
-localuser@local-host$ [Note: You are on local-host here]
-
-localuser@local-host$ ssh-keygen
-Generating public/private rsa key pair.
-Enter file in which to save the key (/home/localuser/.ssh/id_rsa):[Enter key]
-Enter passphrase (empty for no passphrase): [Press enter key]
-Enter same passphrase again: [Press enter key]
-Your identification has been saved in /home/localuser/.ssh/id_rsa.
-Your public key has been saved in /home/localuser/.ssh/id_rsa.pub.
-The key fingerprint is:
-33:b3:fe:af:95:95:18:11:31:d5:de:96:2f:f2:35:f9 localuser@local-host
-```
-
-### Step 2:
-
-Copy the public key to the remote-host (preferably its IP address) using **ssh-copy-id**:
-
-```
-localuser@local-host$ ssh-copy-id -i ~/.ssh/id_rsa.pub remoteuser@remote-host
-remoteuser@remote-host's password:
-Now try logging into the machine, with "ssh 'remote-host'", and check in:
-
-.ssh/authorized_keys
-
-to make sure we haven't added extra keys that you weren't expecting.
-```
-
-### Step 3:
-
-Log in to the remote-host (preferably its IP address) without entering the password.
-
-```
-localuser@local-host$ ssh remoteuser@remote-host
-Last login: Sun Nov 16 17:22:33 2008 from 192.168.1.2
-[Note: SSH did not ask for password.]
-
-remoteuser@remote-host$ [Note: You are on remote-host here]
-```
\ No newline at end of file
diff --git a/k8s/demo/scripts/setup_k8s_host.sh b/k8s/demo/scripts/setup_k8s_host.sh
deleted file mode 100755
index 820f43037b..0000000000
--- a/k8s/demo/scripts/setup_k8s_host.sh
+++ /dev/null
@@ -1,200 +0,0 @@
-#!/bin/bash
-
-# Note: This script assumes the user has permission to SSH into the master machine.
-# Variables:
-machineip=
-masterip=
-clusterip=
-token=
-hostname=`hostname`
-
-# Functions:
-function install_kubernetes(){
-  echo Running the Kubernetes installer...
-
-  # Update apt and get dependencies
-  sudo apt-get update
-  sudo apt-get install -y unzip curl wget jq
-
-  # Install docker and K8s
-  curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
-  sudo tee /etc/apt/sources.list.d/kubernetes.list <<EOF >/dev/null
-  deb http://apt.kubernetes.io/ kubernetes-xenial main
-EOF
-  sudo apt-get update
-  # Install docker if you don't have it already.
- sudo apt-get install -y docker.io - sudo apt-get install -y kubelet kubeadm kubectl kubernetes-cni -} - -function get_machine_ip(){ - ifconfig | grep -oP "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" | sort | tail -n 1 | head -n 1 -} - -function update_hosts(){ - sudo sed -i "/$hostname/ s/.*/$machineip\t$hostname/g" /etc/hosts -} - -function setup_k8s_minion(){ - sudo kubeadm join --token=$token $masterip -} - -function join_cni_network(){ - sudo route add $clusterip gw $masterip -} - -function setup_openebs_flexvolumes() { - K8S_VOL_PLUGINDIR="/usr/libexec/kubernetes/kubelet-plugins/volume/exec/" - sudo mkdir -p ${K8S_VOL_PLUGINDIR} - - ## Install the plugin for dedicated openebs-iscsi storage - sudo mkdir -p ${K8S_VOL_PLUGINDIR}/openebs~openebs-iscsi - wget https://raw.githubusercontent.com/openebs/openebs/master/k8s/lib/plugin/flexvolume/openebs-iscsi - chmod +x openebs-iscsi - sudo mv openebs-iscsi ${K8S_VOL_PLUGINDIR}/openebs~openebs-iscsi/ - - ## Restart the kubelet for the new volume plugins to take effect - sudo systemctl restart kubelet.service -} - -function show_help() { - cat << EOF - Usage : $(basename "$0") --masterip= --token= --clusterip= - Create a Kubernetes Minion Node and Join the cluster. - - -h|--help Display this help and exit. - -i|--masterip Kubemaster IP IP of kubemaster to join the cluster. - -t|--token Token Token generated by kubeadm init. - -c|--clusterip Cluster IP ClusterIP of kubernetes to join CNI network. -EOF -} - -# Code: -# Check whether we received the User and Master hostnames else Show usage -# Uses the long form of arguments now - -if (($# == 0)); then - show_help - exit 2 -fi - -while :; do - case $1 in - -h|-\?|--help) # Call a "show_help" function to display a synopsis, then exit. - show_help - exit - ;; - - -i|--masterip) # Takes an option argument, ensuring it has been specified. - if [ -n "$2" ]; then - masterip=$2 - shift - else - printf 'ERROR: "--masterip" requires a non-empty option argument.\n' >&2 - exit 1 - fi - ;; - - --masterip=?*) # Delete everything up to "=" and assign the remainder. - masterip=${1#*=} - ;; - - --masterip=) # Handle the case of an empty --masterip= - printf 'ERROR: "--masterip" requires a non-empty option argument.\n' >&2 - exit 1 - ;; - - -t|--token) # Takes an option argument, ensuring it has been specified. - if [ -n "$2" ]; then - token=$2 - shift - else - printf 'ERROR: "--token" requires a non-empty option argument.\n' >&2 - exit 1 - fi - ;; - - --token=?*) # Delete everything up to "=" and assign the remainder. - token=${1#*=} - ;; - - --token=) # Handle the case of an empty --token= - printf 'ERROR: "--token" requires a non-empty option argument.\n' >&2 - exit 1 - ;; - - -c|--clusterip) # Takes an option argument, ensuring it has been specified. - if [ -n "$2" ]; then - clusterip=$2 - shift - else - printf 'ERROR: "--clusterip" requires a non-empty option argument.\n' >&2 - exit 1 - fi - ;; - - --clusterip=?*) # Delete everything up to "=" and assign the remainder. - clusterip=${1#*=} - ;; - - --clusterip=) # Handle the case of an empty --clusterip= - printf 'ERROR: "--clusterip" requires a non-empty option argument.\n' >&2 - exit 1 - ;; - - --) # End of all options. - shift - break - ;; - - -?*) - printf 'WARN: Unknown option (ignored): %s\n' "$1" >&2 - ;; - - *) # Default case: If no more options then break out of the loop. - break - esac -shift -done - -if [ -z "$masterip" ]; then - echo "MasterIP is mandatory." 
-  show_help
-  exit
-fi
-
-if [ -z "$token" ]; then
-  echo "Token is mandatory."
-  show_help
-  exit
-fi
-
-if [ -z "$clusterip" ]; then
-  echo "ClusterIP is mandatory."
-  show_help
-  exit
-fi
-
-#Get the machine ip, master ip, cluster ip and the token from the master
-machineip=`get_machine_ip`
-
-#Install Kubernetes components
-echo Installing Kubernetes on Minion...
-install_kubernetes
-
-#Update the host files of the master and minion.
-echo Updating the host files...
-update_hosts
-
-#Join the cluster
-echo Setting up the Minion using IPAddress: $machineip
-echo Setting up the Minion using Token: $token
-setup_k8s_minion
-
-#Add route to the minion ip to the cluster ip
-echo Joining the CNI Network...
-join_cni_network
-
-#Install the OpenEBS FlexVolume Plugins
-setup_openebs_flexvolumes
-
diff --git a/k8s/demo/scripts/setup_k8s_master.sh b/k8s/demo/scripts/setup_k8s_master.sh
deleted file mode 100755
index 26dbc3bcb4..0000000000
--- a/k8s/demo/scripts/setup_k8s_master.sh
+++ /dev/null
@@ -1,89 +0,0 @@
-#!/bin/bash
-
-#Variables:
-machineip=
-hostname=`hostname`
-
-#Functions:
-function install_kubernetes(){
-  echo Running the Kubernetes installer...
-
-  # Update apt and get dependencies
-  sudo apt-get update
-  sudo apt-get install -y unzip curl wget
-
-  # Install docker and K8s
-  curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
-  sudo tee /etc/apt/sources.list.d/kubernetes.list <<EOF >/dev/null
-deb http://apt.kubernetes.io/ kubernetes-xenial main
-EOF
-  sudo apt-get update
-  # Install docker if you don't have it already.
-  sudo apt-get install -y docker.io
-  sudo apt-get install -y --allow-unauthenticated kubelet kubeadm kubectl kubernetes-cni
-
-  #Install JSON Parser for patching kube-proxy
-  sudo apt-get install -y jq
-}
-
-function get_machine_ip(){
-  ifconfig | grep -oP "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" | sort | tail -n 1 | head -n 1
-}
-
-function setup_k8s_master(){
-  #sudo kubeadm init --apiserver-advertise-address=$machineip
-  sudo kubeadm init --apiserver-advertise-address=$machineip
-  sudo kubectl create --kubeconfig=/etc/kubernetes/admin.conf -f https://git.io/weave-kube
-}
-
-function update_hosts(){
-  sudo sed -i "/$hostname/ s/.*/$machineip\t$hostname/g" /etc/hosts
-}
-
-function patch_kube_proxy(){
-  sudo kubectl --kubeconfig=/etc/kubernetes/admin.conf -n kube-system get ds -l 'component=kube-proxy' -o json | \
-  jq '.items[0].spec.template.spec.containers[0].command |= .+ ["--proxy-mode=userspace"]' | \
-  sudo kubectl apply --kubeconfig=/etc/kubernetes/admin.conf -f - && sudo kubectl --kubeconfig=/etc/kubernetes/admin.conf -n kube-system delete pods -l 'component=kube-proxy'
-}
-
-function download_specs(){
-
-  specurl="https://api.github.com/repos/openebs/openebs/contents/k8s/demo/specs"
-  mapfile -t downloadurls < <(curl -sS $specurl | grep "download_url" | awk '{print $2}' | tr -d '",')
-
-  #Create demo directory and download specs
-  mkdir -p /home/ubuntu/demo/k8s/spec
-  cd /home/ubuntu/demo/k8s/spec
-
-  length=${#downloadurls[@]}
-  for ((i = 0; i != length; i++)); do
-    if [ -z "${downloadurls[i]##*yaml*}" ] ;then
-      wget "${downloadurls[i]}"
-    fi
-  done
-
-}
-
-#Code
-#Get the ip of the machine
-machineip=`get_machine_ip`
-
-#Install Kubernetes components
-echo Installing Kubernetes on Master...
-install_kubernetes
-
-#Update the host file of the master.
-echo Updating the host file...
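-# update_hosts rewrites this machine's /etc/hosts entry so its hostname
-# resolves to the IP detected above before kubeadm init is run.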
-update_hosts
-
-#Create the Cluster
-echo Setting up the Master using IPAddress: $machineip
-setup_k8s_master
-
-#Patching kube-proxy to run with --proxy-mode=userspace
-echo Patching the kube-proxy for CNI Networks...
-patch_kube_proxy
-
-#Download the specs for the demo
-echo Downloading samples for demo...
-download_specs
diff --git a/k8s/demo/scripts/setup_omm.sh b/k8s/demo/scripts/setup_omm.sh
deleted file mode 100755
index da5f2fca77..0000000000
--- a/k8s/demo/scripts/setup_omm.sh
+++ /dev/null
@@ -1,117 +0,0 @@
-#!/bin/bash
-
-# Variables:
-machineip=
-releasetag=
-
-# Functions:
-function get_machine_ip(){
-  ifconfig | grep -oP "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" | sort | tail -n 1 | head -n 1
-}
-
-function install_omm(){
-  releaseurl="https://api.github.com/repos/openebs/maya/releases"
-  # Update apt and get dependencies
-  sudo apt-get update
-  sudo apt-get install -y unzip curl wget
-
-  # Install Maya binaries
-  if [ -z "$releasetag" ]; then
-    wget $(curl -sS $releaseurl | grep "browser_download_url" | awk '{print $2}' | tr -d '"' | head -n 2 | tail -n 1)
-  else
-    wget https://github.com/openebs/maya/releases/download/$releasetag/maya-linux_amd64.zip
-  fi
-  unzip maya-linux_amd64.zip
-  sudo mv maya /usr/bin
-  rm -rf maya-linux_amd64.zip
-}
-
-function setup_omm(){
-  maya setup-omm -self-ip=$machineip
-}
-
-function show_help() {
-  cat << EOF
-  Usage : $(basename "$0") --releasetag=[OpenEBS Maya Release Version]
-  Installs the OpenEBS Maya version.
-
-  -h|--help           Display this help and exit.
-  -r|--releasetag     Maya release version to install.
-EOF
-}
-
-function download_specs(){
-
-  specurl="https://api.github.com/repos/openebs/openebs/contents/k8s/demo/specs"
-  mapfile -t downloadurls < <(curl -sS $specurl | grep "download_url" | awk '{print $2}' | tr -d '",')
-
-  #Create demo directory and download specs
-  mkdir -p /home/ubuntu/demo/maya/spec
-  cd /home/ubuntu/demo/maya/spec
-
-  length=${#downloadurls[@]}
-  for ((i = 0; i != length; i++)); do
-    if [ -z "${downloadurls[i]##*hcl*}" ] ;then
-      wget "${downloadurls[i]}"
-    fi
-  done
-
-}
-
-# Code:
-
-while :; do
-  case $1 in
-    -h|-\?|--help) # Call a "show_help" function to display a synopsis, then exit.
-      show_help
-      exit
-      ;;
-
-    -r|--releasetag) # Takes an option argument, ensuring it has been specified.
-      if [ -n "$2" ]; then
-        releasetag=$2
-        shift
-      else
-        printf 'ERROR: "--releasetag" requires a non-empty option argument.\n' >&2
-        exit 1
-      fi
-      ;;

-    --releasetag=?*) # Delete everything up to "=" and assign the remainder.
-      releasetag=${1#*=}
-      ;;
-
-    --releasetag=) # Handle the case of an empty --releasetag=
-      printf 'ERROR: "--releasetag" requires a non-empty option argument.\n' >&2
-      exit 1
-      ;;
-
-    --) # End of all options.
-      shift
-      break
-      ;;
-
-    -?*)
-      printf 'WARN: Unknown option (ignored): %s\n' "$1" >&2
-      ;;
-
-    *) # Default case: If no more options then break out of the loop.
-      break
-  esac
-  shift
-done
-
-#Get the ip of the machine
-machineip=`get_machine_ip`
-
-#Install OpenEBS Maya Components
-echo Installing OpenEBS on Master...
-install_omm
-
-#Create the Cluster
-echo Setting up the Master using IPAddress: $machineip
-setup_omm
-
-#Download the specs for the demo
-echo Downloading samples for demo...
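-# download_specs pulls only the .hcl job specs from the openebs/openebs
-# k8s/demo/specs directory via the GitHub contents API.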
-download_specs
diff --git a/k8s/demo/scripts/setup_osh.sh b/k8s/demo/scripts/setup_osh.sh
deleted file mode 100755
index 0015d9ddb5..0000000000
--- a/k8s/demo/scripts/setup_osh.sh
+++ /dev/null
@@ -1,133 +0,0 @@
-#!/bin/bash
-
-# Variables:
-machineip=
-masterip=
-releasetag=
-
-# Functions:
-function get_machine_ip(){
-  ifconfig | grep -oP "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" | sort | tail -n 1 | head -n 1
-}
-
-function install_osh(){
-  # Maya releases API, used when no --releasetag is given (same URL as in setup_omm.sh).
-  releaseurl="https://api.github.com/repos/openebs/maya/releases"
-  # Update apt and get dependencies
-  sudo apt-get update
-  sudo apt-get install -y unzip curl wget
-
-  # Install Maya binaries
-  if [ -z "$releasetag" ]; then
-    wget $(curl -sS $releaseurl | grep "browser_download_url" | awk '{print $2}' | tr -d '"' | head -n 2 | tail -n 1)
-  else
-    wget https://github.com/openebs/maya/releases/download/$releasetag/maya-linux_amd64.zip
-  fi
-  unzip maya-linux_amd64.zip
-  sudo mv maya /usr/bin
-  rm -rf maya-linux_amd64.zip
-}
-
-function setup_osh(){
-  maya setup-osh -self-ip=$machineip -omm-ips=$masterip
-}
-
-function prepare_osh() {
-  sudo docker pull openebs/jiva:latest
-}
-
-function show_help() {
-  cat << EOF
-  Usage : $(basename "$0") --masterip=[OpenEBS Maya Master IP] --releasetag=[OpenEBS Maya Release Version]
-  Installs the OpenEBS Maya version.
-
-  -h|--help           Display this help and exit.
-  -i|--masterip       IP of the OpenEBS Maya Master (omm) to join.
-  -r|--releasetag     Maya release version to install.
-EOF
-}
-
-# Code:
-
-if (($# == 0)); then
-  show_help
-  exit 2
-fi
-
-while :; do
-  case $1 in
-    -h|-\?|--help) # Call a "show_help" function to display a synopsis, then exit.
-      show_help
-      exit
-      ;;
-
-    -i|--masterip) # Takes an option argument, ensuring it has been specified.
-      if [ -n "$2" ]; then
-        masterip=$2
-        shift
-      else
-        printf 'ERROR: "--masterip" requires a non-empty option argument.\n' >&2
-        exit 1
-      fi
-      ;;
-
-    --masterip=?*) # Delete everything up to "=" and assign the remainder.
-      masterip=${1#*=}
-      ;;
-
-    --masterip=) # Handle the case of an empty --masterip=
-      printf 'ERROR: "--masterip" requires a non-empty option argument.\n' >&2
-      exit 1
-      ;;
-
-    -r|--releasetag) # Takes an option argument, ensuring it has been specified.
-      if [ -n "$2" ]; then
-        releasetag=$2
-        shift
-      else
-        printf 'ERROR: "--releasetag" requires a non-empty option argument.\n' >&2
-        exit 1
-      fi
-      ;;
-
-    --releasetag=?*) # Delete everything up to "=" and assign the remainder.
-      releasetag=${1#*=}
-      ;;
-
-    --releasetag=) # Handle the case of an empty --releasetag=
-      printf 'ERROR: "--releasetag" requires a non-empty option argument.\n' >&2
-      exit 1
-      ;;
-
-    --) # End of all options.
-      shift
-      break
-      ;;
-
-    -?*)
-      printf 'WARN: Unknown option (ignored): %s\n' "$1" >&2
-      ;;
-
-    *) # Default case: If no more options then break out of the loop.
-      break
-  esac
-  shift
-done
-
-if [ -z "$masterip" ]; then
-  echo "MasterIP is mandatory."
-  show_help
-  exit
-fi
-
-#Get the ip of the machine
-machineip=`get_machine_ip`
-
-#Install OpenEBS Maya Components
-echo Installing OpenEBS on Host...
-install_osh
-
-#Join the Cluster
-echo Setting up the Host using IPAddress: $machineip
-setup_osh
-
-#Prepare for VSMs
-echo Downloading latest VSM image
-prepare_osh
diff --git a/k8s/demo/vdbench/demo-vdbench-openebs.yaml b/k8s/demo/vdbench/demo-vdbench-openebs.yaml
deleted file mode 100644
index 66084af8f9..0000000000
--- a/k8s/demo/vdbench/demo-vdbench-openebs.yaml
+++ /dev/null
@@ -1,35 +0,0 @@
----
-apiVersion: batch/v1
-kind: Job
-metadata:
-  name: vdbench
-spec:
-  template:
-    metadata:
-      name: vdbench
-    spec:
-      restartPolicy: Never
-      containers:
-      - name: perfrunner
-        image: openebs/tests-vdbench
-        volumeMounts:
-        - mountPath: /datadir1
-          name: demo-vol1
-      volumes:
-      - name: demo-vol1
-        persistentVolumeClaim:
-          claimName: demo-vol1-claim
----
-kind: PersistentVolumeClaim
-apiVersion: v1
-metadata:
-  name: demo-vol1-claim
-spec:
-  storageClassName: openebs-standard
-  accessModes:
-    - ReadWriteOnce
-  resources:
-    requests:
-      storage: "5G"
-
-
diff --git a/k8s/jiva/README.md b/k8s/jiva/README.md
deleted file mode 100644
index 088abec16d..0000000000
--- a/k8s/jiva/README.md
+++ /dev/null
@@ -1,102 +0,0 @@
-This document helps you delete the auto-generated snapshots that are created when a Jiva replica restarts or when a new replica is added to the Jiva controller. The steps for deleting the auto-generated snapshots are as follows:
-
-Get the details of the Jiva controller pod using the following command. It shows the Jiva pod details running in the `default` namespace.
-
-```
-kubectl get pod -n default
-```
-Example output:
-```
-NAME                                                            READY   STATUS    RESTARTS   AGE
-percona-7b64956695-kd9q4                                        1/1     Running   0          105s
-pvc-d01e90d9-a921-11e9-93c2-42010a8000ab-ctrl-df9c749cf-jp6mg   2/2     Running   0          104s
-pvc-d01e90d9-a921-11e9-93c2-42010a8000ab-rep-795d8c5cb8-4wb9c   1/1     Running   0          101s
-pvc-d01e90d9-a921-11e9-93c2-42010a8000ab-rep-795d8c5cb8-gfxg9   1/1     Running   0          101s
-pvc-d01e90d9-a921-11e9-93c2-42010a8000ab-rep-795d8c5cb8-n4cfq   1/1     Running   1          101s
-```
-
-List all internal snapshots created inside the corresponding Jiva controller using the following command.
-```
-kubectl exec -it <jiva-controller-pod-name> -n <namespace> jivactl snapshot ls
-```
-
-For Example:
-```
-kubectl exec -it pvc-d01e90d9-a921-11e9-93c2-42010a8000ab-ctrl-df9c749cf-jp6mg -n default jivactl snapshot ls
-```
-
-Example output:
-```
-ID
-f1b68e2a-5a3d-4737-85d0-b34c1452db7c
-1e5441ff-ec75-4618-a5f0-d5de25eca1b2
-4ec87701-6faf-4c72-816b-d81885c67263
-02617eeb-2147-4adf-8e6b-0317c7fad79d
-fb1bac27-bd46-41be-831a-12ebe5421d23
-c4556aff-6da2-4fb3-ba8c-a0d7bfad67bb
-1bb0cf11-1a6c-45d4-8638-daac561baf0d
-b9261581-6713-45cb-a87f-bafefa2fd6ee
-c80150ac-f3c2-4c3a-a289-138a80dc4e0d
-ef478c62-22da-4045-abaf-7f08b68c5696
-bf4e562b-61e4-4bbf-87cc-7026e4b7bb7f
-9f40b8df-2641-4502-b451-979c97e73392
-53cf1bbe-a2a1-430a-bd20-50528fed6a32
-c61e9c1f-64d2-48f0-866e-a9cc1820bf7a
-d969a089-e125-4189-a5e8-922c9f5fd48b
-529ea5b9-2b03-4524-91e0-fda623365e88
-cb8fd2da-c132-487f-85c9-5ac5189a5cda
-cd0a5074-be77-4e3b-b141-e91a6fba94d3
-277aacdb-3f24-4203-8206-d871294a9292
-4231e0aa-65ea-4f86-81e5-db29930b61b7
-```
-
-Now exit the container using the `exit` command.
-
-Download the files for deleting Jiva snapshots from the Jiva repository using the following commands.
-```
-wget https://raw.githubusercontent.com/openebs/openebs/master/k8s/jiva/patch.json
-wget https://raw.githubusercontent.com/openebs/openebs/master/k8s/jiva/snapshot-cleanup.sh
-```
-Ensure that `snapshot-cleanup.sh` has execute permission. If not, make it executable by running `chmod +x snapshot-cleanup.sh` from the downloaded folder.
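-
-For example, a quick check of the permission bits (the listing below is illustrative):
-
-```
-$ ls -l snapshot-cleanup.sh
--rwxr-xr-x 1 user user 4523 Jul 18 10:30 snapshot-cleanup.sh
-```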
-
-Now get the PV name using the following command.
-```
-kubectl get pv | grep <pvc-name>
-```
-Example:
-```
-kubectl get pv | grep demo-vol1-claim
-```
-
-Example Output:
-```
-NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS           REASON   AGE
-pvc-d01e90d9-a921-11e9-93c2-42010a8000ab   5G         RWO            Delete           Bound    default/demo-vol1-claim   openebs-jiva-default            10m
-```
-
-The script runs only when the volume has more than 5 Jiva auto-generated snapshots; otherwise, it exits. This means at least 5 snapshots always remain available. For example, if the volume has 20 auto-generated snapshots and you ask the script to delete 15 of them, it will not work. In that case, re-run the script with 14 or fewer as the number of snapshots to be deleted.
-
-**Note:**
-
-The snapshot cleanup process involves disconnecting the application from its storage. Ensure that the application is not being used and that connectivity to the Kubernetes cluster stays active while performing the snapshot cleanup. To ensure that the application is not connected to the storage, it is recommended to scale down the application.
-
-Delete the auto-generated internal snapshots using the following command.
-
-```
-./snapshot-cleanup.sh <pv-name> <number-of-snapshots-to-delete>
-```
-
-Example:
-
-```
-./snapshot-cleanup.sh pvc-d01e90d9-a921-11e9-93c2-42010a8000ab 12
-```
-
-In the above example, 12 snapshots are deleted from the volume's auto-generated snapshots. After the deletion, the total number of auto-generated snapshots is (old total - number deleted) + (number of replicas - 1). In this example, (21 - 12) + 2 = 11.
-
-**Note:** In case of an unexpected disconnect during the cleanup process, run the following command to restore the volume service.
-```
-./snapshot-cleanup.sh <pv-name> restore_service
-```
-
-You can use the steps described above to list the current snapshots available on Jiva.
diff --git a/k8s/jiva/patch.json b/k8s/jiva/patch.json
deleted file mode 100644
index 352f7c35ab..0000000000
--- a/k8s/jiva/patch.json
+++ /dev/null
@@ -1,24 +0,0 @@
-{
-  "spec": {
-    "ports": [
-      {
-        "name": "iscsi",
-        "port": 50000,
-        "protocol": "TCP",
-        "targetPort": 50000
-      },
-      {
-        "name": "api",
-        "port": 9501,
-        "protocol": "TCP",
-        "targetPort": 9501
-      },
-      {
-        "name": "exporter",
-        "port": 9500,
-        "protocol": "TCP",
-        "targetPort": 9500
-      }
-    ]
-  }
-}
\ No newline at end of file
diff --git a/k8s/jiva/snapshot-cleanup.sh b/k8s/jiva/snapshot-cleanup.sh
deleted file mode 100755
index 50baec951c..0000000000
--- a/k8s/jiva/snapshot-cleanup.sh
+++ /dev/null
@@ -1,143 +0,0 @@
-#!/bin/bash
-usage()
-{
-  echo "Usage: ./snapshot-cleanup.sh <pv-name> <number-of-snapshots-to-delete>"
-  exit 1
-}
-
-warning()
-{
-  echo "WARNING: Snapshot cleanup involves disconnecting the application from the storage. Also, while the snapshot cleanup is in progress, you will need to ensure that the connectivity to the Kubernetes cluster is active. In case of unexpected disconnect, you will have to run the following command to restore the volume service: snapshot-cleanup.sh <pv-name> restore_service."
-  echo "Do you want to continue (Y/N)"
-  read access
-  if [ $access == "N" ] || [ $access == "n" ]; then
-    exit 1;
-  fi
-}
-
-# spin prints ('|','/','-','\','|') in cyclic order while snapshot deletion is in progress.
-spin()
-{
-  while true
-  do
-    stat /proc/$pid > /dev/null
-    if [ $? -ne 0 ]
-    then
-      break
-    fi
-    printf "\b${sp:i++%${#sp}:1}"
-    sleep 1
-  done
-}
-
-# delete_jiva_snapshot deletes Jiva snapshots (as many as requested) using jivactl.
-delete_jiva_snapshot()
-{
-  warning
-  min_required_snapshot=4
-  count=0
-
-  snapshot_list_cmd=$(kubectl exec -it $ctrl_pod_name -n $pvc_namespace -- jivactl snapshot ls)
-  if [ "$?" != "0" ]; then
-    exit 1;
-  fi
-
-  snapshot_number_cmd="kubectl exec -it $ctrl_pod_name -n $pvc_namespace -- jivactl snapshot ls | grep -v ID | wc -l"
-  snapshot_name_cmd="kubectl exec -it $ctrl_pod_name -n $pvc_namespace -- jivactl snapshot ls | grep -v ID | tail -1"
-
-  if [ $(eval $snapshot_number_cmd) -lt $min_required_snapshot ]; then
-    echo "Error: You can initiate snapshot deletion only when the volume has more than $min_required_snapshot snapshots. There are only $(eval $snapshot_number_cmd) snapshots at the moment. You need not clean up any more snapshots."
-    exit 1;
-  fi
-
-  if [ $(eval $snapshot_number_cmd) -le $number_of_snapshot ]; then
-    echo "Error: You have requested to delete $number_of_snapshot. There are only $(($(eval $snapshot_number_cmd) - $min_required_snapshot)) snapshots available that can be deleted on this volume. Please re-run this command specifying $(($(eval $snapshot_number_cmd) - $min_required_snapshot)) or fewer snapshots to be deleted."
-    exit 1 ;
-  else
-    if [ $(eval $snapshot_number_cmd) -ge $(($min_required_snapshot + $number_of_snapshot)) ]; then
-      block_service
-      echo "Deleting snapshots"
-      delete_snap > /dev/null 2>&1 &
-      pid=$!
-      spin
-      cat log.txt
-      unblock_service
-      rm -rf log.txt
-    else
-      echo "Error: You have requested to delete $number_of_snapshot. There are only $(($(eval $snapshot_number_cmd) - $min_required_snapshot)) snapshots available that can be deleted on this volume. Please re-run this command specifying $(($(eval $snapshot_number_cmd) - $min_required_snapshot)) or fewer snapshots to be deleted." ;
-    fi
-  fi
-}
-
-# block_service swaps the iSCSI target port for a dummy value
-# to stop I/O while the cleanup runs.
-block_service()
-{
-  kubectl patch svc $ctrl_service_name -n $pvc_namespace --type merge -p "$(cat tmp.json)"
-}
-
-
-# unblock_service restores the actual target port value to allow I/O again.
-unblock_service()
-{
-  restart_ctrl
-  sed -i 's/50000/3260/g' tmp.json
-  kubectl patch svc $ctrl_service_name -n $pvc_namespace --type merge -p "$(cat tmp.json)"
-  rm tmp.json
-}
-
-# restart_ctrl restarts the controller pod.
-restart_ctrl()
-{
-  kubectl delete pod $ctrl_pod_name -n $pvc_namespace
-}
-
-validate_snap_delete()
-{
-  del_snapshot_name=$1
-  validate_cmd="kubectl exec -it $ctrl_pod_name -n $pvc_namespace -- jivactl snapshot ls | grep $del_snapshot_name"
-  eval $validate_cmd
-  if [ $?
== "0" ]; then - echo "Unable to delete $del_snapshot_name" ; - unblock_service ; - exit 1; - fi -} - -delete_snap() -{ - while [ $count -lt $number_of_snapshot ] - do - snapshot_name=$(eval $snapshot_name_cmd | tr -d '\r') - del_cmd="kubectl exec -it $ctrl_pod_name -n $pvc_namespace -- jivactl snapshot rm $snapshot_name" - eval $($del_cmd >> log.txt) - count=$((count+1)) - validate_snap_delete $snapshot_name - done -} - -i=1 -sp="/-\|" -echo -n ' ' -touch log.txt -rm -rf tmp.json -cat patch.json > tmp.json -pv_name=$1 -number_of_snapshot=$2 -pvc_namespace=$(kubectl get pv $pv_name -o jsonpath='{.spec.claimRef.namespace}') -ctrl_pod_name=$(kubectl get pod -n $pvc_namespace -l openebs.io/persistent-volume=$pv_name,openebs.io/controller=jiva-controller -o jsonpath='{.items[0].metadata.name}') -ctrl_service_name=$(kubectl get service -n $pvc_namespace -l openebs.io/controller-service=jiva-controller-svc,openebs.io/persistent-volume=$pv_name -o jsonpath='{.items[0].metadata.name}') - - -if [ $# -ne 2 ]; then - usage -elif [ $# -eq 2 ] && [ $2 == "restore_service" ]; then - unblock_service ; - exit 1 -elif [ $# -eq 2 ] && [ $2 != "restore_service" ]; then - if [ $2 -ge 0 2>/dev/null ]; then - delete_jiva_snapshot $1 $2 ; - else - usage - fi -fi \ No newline at end of file diff --git a/k8s/lib/scripts/configure_k8s_cni.sh b/k8s/lib/scripts/configure_k8s_cni.sh deleted file mode 100755 index 9a4d172f71..0000000000 --- a/k8s/lib/scripts/configure_k8s_cni.sh +++ /dev/null @@ -1,56 +0,0 @@ -#!/bin/bash -kubeversion=`kubectl version --short | grep 'Server Version' | awk {'print $3'}` -kuberegex='^v1[.][0-8][.][0-9][0-9]?$' - -function patch_kube_proxy(){ - kubectl -n kube-system get ds -l 'k8s-app=kube-proxy' -o json \ - | jq '.items[0].spec.template.spec.containers[0].command |= .+ ["--proxy-mode=userspace"]' \ - | kubectl apply -f - \ - && kubectl -n kube-system delete pods -l 'k8s-app=kube-proxy' -} - -function setup_k8s_weave() { - kubectl apply -f $HOME/setup/cni/weave/weave-daemonset-k8s-1.6.yaml - - if [[ $? -ne 0 ]]; then - - kubectl delete -f $HOME/setup/cni/weave/weave-daemonset-k8s-1.6.yaml - - export kubever=$(kubectl version | base64 | tr -d '\n') - kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever" - - if [[ $? -ne 0 ]]; then - echo "Unable to apply the Pod Network. SSH into the master and apply a Pod Network for your Cluster." - fi - - fi -} - -function setup_k8s_kuberouter(){ - kubectl apply -f $HOME/setup/cni/kuberouter/kubeadm-kuberouter.yaml - - if [[ $? -ne 0 ]]; then - - kubectl delete -f $HOME/setup/cni/kuberouter/kubeadm-kuberouter.yaml - - kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml - - if [[ $? -ne 0 ]]; then - echo "Unable to apply the Pod Network. SSH into the master and apply a Pod Network for your Cluster." - fi - fi -} - -#Patching kube-proxy to run with --proxy-mode=userspace -echo Patching the kube-proxy for CNI Networks... -patch_kube_proxy - -[[ $kubeversion =~ $kuberegex ]] - -if [[ $? 
-eq 1 ]]; then - echo Configure Pod Network with Kuberouter - setup_k8s_kuberouter -else - echo Configure Pod Network with Weave - setup_k8s_weave -fi diff --git a/k8s/lib/scripts/configure_k8s_cred.sh b/k8s/lib/scripts/configure_k8s_cred.sh deleted file mode 100755 index 530865351c..0000000000 --- a/k8s/lib/scripts/configure_k8s_cred.sh +++ /dev/null @@ -1,13 +0,0 @@ -#!/bin/bash - -function setup_k8s_cred() { - mkdir -p $HOME/.kube - sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config - sudo chown $(id -u):$(id -g) $HOME/.kube/config - export KUBECONFIG=$HOME/.kube/config -} - -#Copy the k8s credentials to $HOME -echo Copy the k8s credentials to $HOME -setup_k8s_cred -echo "export KUBECONFIG=$HOME/.kube/config" >> $HOME/.profile diff --git a/k8s/lib/scripts/configure_k8s_dashboard.sh b/k8s/lib/scripts/configure_k8s_dashboard.sh deleted file mode 100755 index b35085babd..0000000000 --- a/k8s/lib/scripts/configure_k8s_dashboard.sh +++ /dev/null @@ -1,10 +0,0 @@ -#!/bin/bash - -function setup_k8s_dashboard() { - kubectl apply -n kube-system -f $HOME/setup/dashboard/kubernetes-dashboard-1.6.3.yaml -} - - -echo Configure Kubernetes Dashboard -setup_k8s_dashboard - diff --git a/k8s/lib/scripts/configure_k8s_host.sh b/k8s/lib/scripts/configure_k8s_host.sh deleted file mode 100755 index b7a70a7c5e..0000000000 --- a/k8s/lib/scripts/configure_k8s_host.sh +++ /dev/null @@ -1,206 +0,0 @@ -#!/bin/bash - -# Note:This script assumes the user has the permission to ssh into the master machine. -# Variables: -machineip= -masterip= -tokensha= -token= -hostname=`hostname` -kubeversion=`sudo kubeadm version -o short` -kuberegex='^v1[.][0-7][.][0-9][0-9]?$' - -function get_machine_ip(){ - ip addr show \ - | grep -oP "inet \\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" | sort \ - | tail -n 1 | head -n 1 -} - -function update_hosts(){ - sudo sed -i "/$hostname/ s/.*/$machineip\t$hostname/g" /etc/hosts -} - -function setup_k8s_minion(){ - - [[ $kubeversion =~ $kuberegex ]] - # For versions 1.8 and above, discovery token SHA is necessary to join the master - if [[ $? -eq 1 ]]; then - echo Setting up the Minion using Discovery Token SHA: $tokensha - - sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X - - sudo kubeadm join --token=$token ${masterip}:6443 --discovery-token-ca-cert-hash sha256:${tokensha} - else - - sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X - - sudo kubeadm join --token=$token ${masterip}:6443 - fi - -} - -function disable_swap() -{ - sudo swapoff -a - - sudo sed -i.bak '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab - - cat < /dev/null - Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false" -EOF - sudo systemctl daemon-reload - sudo systemctl restart kubelet -} - -function show_help() { - cat << EOF - Usage : $(basename "$0") --masterip= --token= --token-sha= - Create a Kubernetes Minion Node and Join the cluster. - - -h|--help Display this help and exit. - -i|--masterip Kubemaster IP IP of kubemaster to join the cluster. - -t|--token Token Token generated by kubeadm init. - -s|--token-sha SHA Discovery Token SHA for the Cluster. -EOF -} - -# Code: -# Check whether we received the User and Master hostnames else Show usage -# Uses the long form of arguments now - -if (($# == 0)); then - show_help - exit 2 -fi - -while :; do - case $1 in - -h|-\?|--help) # Call a "show_help" function to - # display a synopsis, then exit. 
- show_help - exit - ;; - - -i|--masterip) # Takes an option argument, - # ensuring it has been specified. - if [ -n "$2" ]; then - masterip=$2 - shift - else - printf 'ERROR: "--masterip" requires \ - a non-empty option argument.\n' >&2 - exit 1 - fi - ;; - - --masterip=?*) # Delete everything up to "=" - # and assign the remainder. - masterip=${1#*=} - ;; - - --masterip=) # Handle the case of an empty --masterip= - printf 'ERROR: "--masterip" requires \ - a non-empty option argument.\n' >&2 - exit 1 - ;; - - -t|--token) # Takes an option argument, - # ensuring it has been specified. - if [ -n "$2" ]; then - token=$2 - shift - else - printf 'ERROR: "--token" requires \ - a non-empty option argument.\n' >&2 - exit 1 - fi - ;; - - --token=?*) # Delete everything up to "=" - # and assign the remainder. - token=${1#*=} - ;; - - --token=) # Handle the case of an empty --token= - printf 'ERROR: "--token" requires \ - a non-empty option argument.\n' >&2 - exit 1 - ;; - - -s|--token-sha) # Takes an option argument, - # ensuring it has been specified. - if [ -n "$2" ]; then - tokensha=$2 - shift - else - printf 'ERROR: "--clusterip" requires \ - a non-empty option argument.\n' >&2 - exit 1 - fi - ;; - - --token-sha=?*) # Delete everything up to "=" - # and assign the remainder. - tokensha=${1#*=} - ;; - - --token-sha=) # Handle the case of an empty --token-sha= - printf 'ERROR: "--token-sha" requires \ - a non-empty option argument.\n' >&2 - exit 1 - ;; - - --) # End of all options. - shift - break - ;; - - -?*) - printf 'WARN: Unknown option (ignored): %s\n' "$1" >&2 - ;; - - *) # Default case: If no more options - # then break out of the loop. - break - esac -shift -done - -if [ -z "$masterip" ]; then - echo "MasterIP is mandatory." - show_help - exit -fi - -if [ -z "$token" ]; then - echo "Token is mandatory." - show_help - exit -fi - -if [ -z "$tokensha" ]; then - echo "Discovery Token SHA is mandatory." - show_help - exit -fi - -#Get the machine ip, master ip, cluster ip and the token from the master -machineip=`get_machine_ip` - -#Update the host files of the master and minion. -echo Updating the host files... -update_hosts - -[[ $kubeversion =~ $kuberegex ]] -# For versions 1.8 and above, swap needs to be disabled -if [[ $? -eq 1 ]]; then - #Disable swap for Kubernetes 1.8 and above - echo Disable swap - disable_swap -fi - -#Join the cluster -echo Setting up the Minion using IPAddress: $machineip -echo Setting up the Minion using Token: $token -setup_k8s_minion diff --git a/k8s/lib/scripts/configure_k8s_master.sh b/k8s/lib/scripts/configure_k8s_master.sh deleted file mode 100755 index fb03403321..0000000000 --- a/k8s/lib/scripts/configure_k8s_master.sh +++ /dev/null @@ -1,74 +0,0 @@ -#!/bin/bash - -#Variables: -machineip= -hostname=`hostname` -kubeversion="v1.7.5" -kuberegex='^v1.[0-7].[0-9][0-9]?$' -kubecniregex='^v1[.][0-8][.][0-9][0-9]?$' - -function get_machine_ip(){ - ip addr show | \ - grep -oP "inet \\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" | sort |\ - tail -n 1 | head -n 1 -} - -function setup_k8s_master() { - - # HEPTIO Pro Tip - # Flush iptables for any residue left behind by kubeadm reset - sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X - - # Releases the port 10251, which causes the pre-flight checks to fail. - # Kubeadm init, will start the kubelet if it is not running. - sudo systemctl stop kubelet - - [[ $kubeversion =~ $kubecniregex ]] - - if [[ $? 
-eq 1 ]]; then - # Use Kuberouter Pod Network for now for version 1.9.0 and above - sudo kubeadm init --apiserver-advertise-address=$machineip \ - --kubernetes-version=$kubeversion --pod-network-cidr 10.1.0.0/16 - else - sudo kubeadm init --apiserver-advertise-address=$machineip \ - --kubernetes-version=$kubeversion - fi -} - -function update_hosts(){ - sudo sed -i "/$hostname/ s/.*/$machineip\t$hostname/g" /etc/hosts -} - -function disable_swap() -{ - sudo swapoff -a - - sudo sed -i.bak '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab - - cat < /dev/null - Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false" -EOF - sudo systemctl daemon-reload - sudo systemctl restart kubelet -} - -#Code -#Get the ip of the machine -machineip=`get_machine_ip` - -#Update the host file of the master. -echo Updating the host file... -update_hosts - -[[ $kubeversion =~ $kuberegex ]] -# For versions 1.8 and above, swap needs to be disabled -if [[ $? -eq 1 ]]; then - #Disable swap for Kubernetes 1.8 and above - echo Disable swap - disable_swap -fi - -#Create the Cluster -echo Setting up the Master using IPAddress: $machineip -setup_k8s_master diff --git a/k8s/lib/vagrant/README.md b/k8s/lib/vagrant/README.md deleted file mode 100644 index 6c65018929..0000000000 --- a/k8s/lib/vagrant/README.md +++ /dev/null @@ -1,25 +0,0 @@ -This directory hosts the Vagrantfiles and configuration scripts required for creating vagrant boxes (sandboxes) with specific versions of kubernetes, openebs etc., - -The directory/file structure is organized as follows: - -``` -|---Vagrantfile ( Creates the box. Refers to the scripts under boxes ) -| -|---boxes ( configuration scripts for install and pre-packaging required s/w ) -| -|---tests ( contains the Vagrantfiles, that will test the box functionality. ) -| -``` - -The configuration scripts under the boxes will typically perform the following: -- Start with a Base Operating System (like Ubuntu 16.04 box) -- Download the required packages (like kuberentes, docker, kubeadm ) -- Download the required docker images that will be later used by kubeadm for configuring the cluster -- Download the post-vm boot configuration scripts -- Download the demo yaml spec files. - -OpenEBS repository also hosts (in different directory), the scripts required for post-vm initialization tasks like calling "kubeadm join" with required parameters. Similarly, the sample k8s pod specs are also provided. These scripts and specs are pre-packaged into spec files into setup and demo directories respectively. - -- configuration scripts are at [k8s/lib/scripts](https://github.com/openebs/openebs/tree/master/k8s/lib/scripts) -- demo yaml files are at [k8s/demo](https://github.com/openebs/openebs/tree/master/k8s/demo) - diff --git a/k8s/lib/vagrant/Vagrantfile b/k8s/lib/vagrant/Vagrantfile deleted file mode 100644 index 0bcecf03bb..0000000000 --- a/k8s/lib/vagrant/Vagrantfile +++ /dev/null @@ -1,189 +0,0 @@ -# -*- mode: ruby -*- -# vi: set ft=ruby : - -# All Vagrant configuration is done below. The "2" in Vagrant.configure -# configures the configuration version (we support older styles for -# backwards compatibility). Please don't change it unless you know what -# you're doing. - -BOX_MODE_OPENEBS = 1 -BOX_MODE_KUBERNETES = 2 - -box_Mode=ENV['OPENEBS_BUILD_BOX'] || 2 - -kube_version=ENV['KUBE_VERSION'] || "1.7.5" - -distro=ENV['DISTRIBUTION'] || "ubuntu" - -docker=ENV['DOCKER'] || "docker-cs" - -required_plugins = %w(vagrant-vbguest) - -required_plugins.each do |plugin| - need_restart = false - unless Vagrant.has_plugin? 
plugin - system "vagrant plugin install #{plugin}" - need_restart = true - end - exec "vagrant #{ARGV.join(' ')}" if need_restart -end - -Vagrant.configure("2") do |config| - # The most common configuration options are documented and commented below. - # For a complete reference, please see the online documentation at - # https://docs.vagrantup.com. - - if Vagrant.has_plugin?("vagrant-vbguest") then - config.vbguest.auto_update = true - end - - if ((box_Mode.to_i < BOX_MODE_OPENEBS.to_i) || (box_Mode.to_i > BOX_MODE_KUBERNETES.to_i)) - puts "Invalid value set for OPENEBS_BUILD_BOX." - puts "Usage: OPENEBS_BUILD_BOX=1 for OpenEBS." - puts "Usage: OPENEBS_BUILD_BOX=2 for Kubernetes." - puts "Defaulting to OpenEBS..." - puts "Do you want to continue?(y/n):" - input = STDIN.gets.chomp - while 1 do - if(input == "n") - Kernel.exit!(0) - elsif(input == "y") - break - else - puts "Invalid input: type 'y' or 'n'" - input = STDIN.gets.chomp - end - end - - box_Mode = 1 - end - - # Every Vagrant development environment requires a box. You can search for - # boxes at https://atlas.hashicorp.com/search. - if(distro == "ubuntu") - config.vm.box = "ubuntu/xenial64" - else - config.vm.box = "centos/7" - end - # Disable automatic box update checking. If you disable this, then - # boxes will only be checked for updates when the user runs - # `vagrant box outdated`. This is not recommended. - config.vm.box_check_update = false - # Create a forwarded port mapping which allows access to a specific port - # within the machine from a port on the host machine. In the example below, - # accessing "localhost:8080" will access port 80 on the guest machine. - # config.vm.network "forwarded_port", guest: 80, host: 8080 - - # Create a private network, which allows host-only access to the machine - # using a specific IP. - # config.vm.network "private_network", ip: "192.168.33.10" - - # Create a public network, which generally matched to bridged network. - # Bridged networks make the machine appear as another physical device on - # your network. - # config.vm.network "public_network" - - # Share an additional folder to the guest VM. The first argument is - # the path on the host to the actual folder. The second argument is - # the path on the guest to mount the folder. And the optional third - # argument is a set of non-required options. - # config.vm.synced_folder "../data", "/vagrant_data" - - # Provider-specific configuration so you can fine-tune various - # backing providers for Vagrant. These expose provider-specific options. - # Example for VirtualBox: - # - # config.vm.provider "virtualbox" do |vb| - # # Display the VirtualBox GUI when booting the machine - # vb.gui = true - # - # # Customize the amount of memory on the VM: - # vb.memory = "1024" - # end - # - # View the documentation for the provider you are using for more - # information on available options. - - # Define a Vagrant Push strategy for pushing to Atlas. Other push strategies - # such as FTP and Heroku are also available. See the documentation at - # https://docs.vagrantup.com/v2/push/atlas.html for more information. - # config.push.define "atlas" do |push| - # push.app = "YOUR_ATLAS_USERNAME/YOUR_APPLICATION_NAME" - # end - - # Enable provisioning with a shell script. Additional provisioners such as - # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the - # documentation for more information about their specific syntax and use. 
- # config.vm.provision "shell", inline: <<-SHELL - # apt-get update - # apt-get install -y apache2 - # SHELL - if box_Mode.to_i == BOX_MODE_KUBERNETES.to_i - - config.vm.provision :shell, - path: "boxes/k8s/prepare_k8s.sh", - :args => "#{distro}", - privileged: true - - config.vm.provision :shell, - path: "boxes/k8s/fetch_kubeadm.sh", - :args => "#{kube_version} #{distro} #{docker}", - privileged: true - - config.vm.provision :shell, - path: "boxes/k8s/fetch_k8scontainers.sh", - :args => "#{kube_version}", - privileged: true - - config.vm.provision :shell, - path: "boxes/k8s-dashboard/fetch_dashboard.sh", - :args => "#{distro}", - privileged: true - - config.vm.provision :shell, - path: "boxes/k8s-weave/fetch_weave.sh", - :args => "#{distro}", - privileged: true - - config.vm.provision :shell, - path: "boxes/k8s-kuberouter/fetch_kuberouter.sh", - :args => "#{distro}", - privileged: true - - config.vm.provision :shell, - path: "boxes/k8s/cleanup_k8s.sh", - :args => "#{kube_version} #{distro}", - privileged: true - - elsif box_Mode.to_i == BOX_MODE_OPENEBS.to_i - - config.vm.provision :shell, - path: "boxes/openebs/prepare_openebs.sh", - privileged: true - - config.vm.provision :shell, - inline: "/bin/bash /home/ubuntu/demo/maya/scripts/install_bootstrap.sh", - privileged: true - - config.vm.provision :shell, - inline: "/bin/bash /etc/maya.d/scripts/install_docker.sh", - privileged: true - - config.vm.provision :shell, - inline: "/bin/bash /etc/maya.d/scripts/install_consul.sh", - privileged: true - - config.vm.provision :shell, - inline: "/bin/bash /etc/maya.d/scripts/install_nomad.sh", - privileged: true - - config.vm.provision :shell, - inline: "/bin/bash /etc/maya.d/scripts/install_mayaserver.sh", - privileged: true - - config.vm.provision :shell, - path: "boxes/openebs/cleanup_openebs.sh", - privileged: true - - end -end diff --git a/k8s/lib/vagrant/boxes/k8s-dashboard/external/kubernetes-dashboard-1.6.3.yaml b/k8s/lib/vagrant/boxes/k8s-dashboard/external/kubernetes-dashboard-1.6.3.yaml deleted file mode 100644 index bde6d6df70..0000000000 --- a/k8s/lib/vagrant/boxes/k8s-dashboard/external/kubernetes-dashboard-1.6.3.yaml +++ /dev/null @@ -1,96 +0,0 @@ -# Copyright 2015 Google Inc. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# Configuration to deploy release version of the Dashboard UI compatible with -# Kubernetes 1.6 (RBAC enabled). 
-# -# Example usage: kubectl create -f - -apiVersion: v1 -kind: ServiceAccount -metadata: - labels: - k8s-app: kubernetes-dashboard - name: kubernetes-dashboard - namespace: kube-system ---- -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRoleBinding -metadata: - name: kubernetes-dashboard - labels: - k8s-app: kubernetes-dashboard -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: cluster-admin -subjects: -- kind: ServiceAccount - name: kubernetes-dashboard - namespace: kube-system ---- -kind: Deployment -apiVersion: extensions/v1beta1 -metadata: - labels: - k8s-app: kubernetes-dashboard - name: kubernetes-dashboard - namespace: kube-system -spec: - replicas: 1 - revisionHistoryLimit: 10 - selector: - matchLabels: - k8s-app: kubernetes-dashboard - template: - metadata: - labels: - k8s-app: kubernetes-dashboard - spec: - containers: - - name: kubernetes-dashboard - image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3 - ports: - - containerPort: 9090 - protocol: TCP - args: - # Uncomment the following line to manually specify Kubernetes API server Host - # If not specified, Dashboard will attempt to auto discover the API server and connect - # to it. Uncomment only if the default does not work. - # - --apiserver-host=http://my-address:port - livenessProbe: - httpGet: - path: / - port: 9090 - initialDelaySeconds: 30 - timeoutSeconds: 30 - serviceAccountName: kubernetes-dashboard - # Comment the following tolerations if Dashboard must not be deployed on master - tolerations: - - key: node-role.kubernetes.io/master - effect: NoSchedule ---- -kind: Service -apiVersion: v1 -metadata: - labels: - k8s-app: kubernetes-dashboard - name: kubernetes-dashboard - namespace: kube-system -spec: - ports: - - port: 80 - targetPort: 9090 - selector: - k8s-app: kubernetes-dashboard diff --git a/k8s/lib/vagrant/boxes/k8s-dashboard/fetch_dashboard.sh b/k8s/lib/vagrant/boxes/k8s-dashboard/fetch_dashboard.sh deleted file mode 100755 index e2ce2edc10..0000000000 --- a/k8s/lib/vagrant/boxes/k8s-dashboard/fetch_dashboard.sh +++ /dev/null @@ -1,12 +0,0 @@ -#!/bin/bash -distribution=${1:-"ubuntu"} - -sudo docker pull gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3 - -if [ "$distribution" = "ubuntu" ]; then - mkdir -p /home/ubuntu/setup/dashboard - cp /vagrant/boxes/k8s-dashboard/external/kubernetes-dashboard-1.6.3.yaml /home/ubuntu/setup/dashboard/ -else - mkdir -p /home/vagrant/setup/dashboard - cp /vagrant/boxes/k8s-dashboard/external/kubernetes-dashboard-1.6.3.yaml /home/vagrant/setup/dashboard/ -fi \ No newline at end of file diff --git a/k8s/lib/vagrant/boxes/k8s-kuberouter/external/kubeadm-kuberouter.yaml b/k8s/lib/vagrant/boxes/k8s-kuberouter/external/kubeadm-kuberouter.yaml deleted file mode 100644 index 1b64f41020..0000000000 --- a/k8s/lib/vagrant/boxes/k8s-kuberouter/external/kubeadm-kuberouter.yaml +++ /dev/null @@ -1,165 +0,0 @@ -apiVersion: v1 -kind: ConfigMap -metadata: - name: kube-router-cfg - namespace: kube-system - labels: - tier: node - k8s-app: kube-router -data: - cni-conf.json: | - { - "name":"kubernetes", - "type":"bridge", - "bridge":"kube-bridge", - "isDefaultGateway":true, - "ipam": { - "type":"host-local" - } - } ---- -apiVersion: extensions/v1beta1 -kind: DaemonSet -metadata: - labels: - k8s-app: kube-router - tier: node - name: kube-router - namespace: kube-system -spec: - template: - metadata: - labels: - k8s-app: kube-router - tier: node - annotations: - scheduler.alpha.kubernetes.io/critical-pod: '' - spec: - 
serviceAccountName: kube-router - serviceAccount: kube-router - containers: - - name: kube-router - image: cloudnativelabs/kube-router - imagePullPolicy: Always - args: - - --run-router=true - - --run-firewall=true - - --run-service-proxy=false - env: - - name: NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - livenessProbe: - httpGet: - path: /healthz - port: 20244 - initialDelaySeconds: 10 - periodSeconds: 3 - resources: - requests: - cpu: 250m - memory: 250Mi - securityContext: - privileged: true - volumeMounts: - - name: lib-modules - mountPath: /lib/modules - readOnly: true - - name: cni-conf-dir - mountPath: /etc/cni/net.d - - name: kubeconfig - mountPath: /var/lib/kube-router/kubeconfig - readOnly: true - initContainers: - - name: install-cni - image: busybox - imagePullPolicy: Always - command: - - /bin/sh - - -c - - set -e -x; - if [ ! -f /etc/cni/net.d/10-kuberouter.conf ]; then - TMP=/etc/cni/net.d/.tmp-kuberouter-cfg; - cp /etc/kube-router/cni-conf.json ${TMP}; - mv ${TMP} /etc/cni/net.d/10-kuberouter.conf; - fi - volumeMounts: - - mountPath: /etc/cni/net.d - name: cni-conf-dir - - mountPath: /etc/kube-router - name: kube-router-cfg - hostNetwork: true - tolerations: - - key: CriticalAddonsOnly - operator: Exists - - effect: NoSchedule - key: node-role.kubernetes.io/master - operator: Exists - volumes: - - name: lib-modules - hostPath: - path: /lib/modules - - name: cni-conf-dir - hostPath: - path: /etc/cni/net.d - - name: kube-router-cfg - configMap: - name: kube-router-cfg - - name: kubeconfig - hostPath: - path: /var/lib/kube-router/kubeconfig ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: kube-router - namespace: kube-system ---- -kind: ClusterRole -apiVersion: rbac.authorization.k8s.io/v1beta1 -metadata: - name: kube-router - namespace: kube-system -rules: - - apiGroups: - - "" - resources: - - namespaces - - pods - - services - - nodes - - endpoints - verbs: - - list - - get - - watch - - apiGroups: - - "networking.k8s.io" - resources: - - networkpolicies - verbs: - - list - - get - - watch - - apiGroups: - - extensions - resources: - - networkpolicies - verbs: - - get - - list - - watch ---- -kind: ClusterRoleBinding -apiVersion: rbac.authorization.k8s.io/v1beta1 -metadata: - name: kube-router -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: kube-router -subjects: -- kind: ServiceAccount - name: kube-router - namespace: kube-system diff --git a/k8s/lib/vagrant/boxes/k8s-kuberouter/fetch_kuberouter.sh b/k8s/lib/vagrant/boxes/k8s-kuberouter/fetch_kuberouter.sh deleted file mode 100644 index ed1b354716..0000000000 --- a/k8s/lib/vagrant/boxes/k8s-kuberouter/fetch_kuberouter.sh +++ /dev/null @@ -1,14 +0,0 @@ -#!/bin/bash -set -e -distribution=${1:-"ubuntu"} - -sudo docker pull busybox:latest -sudo docker pull cloudnativelabs/kube-router:latest - -if [ "$distribution" = "ubuntu" ]; then - mkdir -p /home/ubuntu/setup/cni/kuberouter - cp /vagrant/boxes/k8s-kuberouter/external/kubeadm-kuberouter.yaml /home/ubuntu/setup/cni/kuberouter/ -else - mkdir -p /home/vagrant/setup/cni/kuberouter - cp /vagrant/boxes/k8s-kuberouter/external/kubeadm-kuberouter.yaml /home/vagrant/setup/cni/kuberouter/ -fi diff --git a/k8s/lib/vagrant/boxes/k8s-weave/external/weave-daemonset-k8s-1.6.yaml b/k8s/lib/vagrant/boxes/k8s-weave/external/weave-daemonset-k8s-1.6.yaml deleted file mode 100644 index ede3fad2fe..0000000000 --- a/k8s/lib/vagrant/boxes/k8s-weave/external/weave-daemonset-k8s-1.6.yaml +++ /dev/null @@ -1,182 +0,0 @@ -apiVersion: v1 -kind: 
List -items: - - apiVersion: v1 - kind: ServiceAccount - metadata: - name: weave-net - annotations: - cloud.weave.works/launcher-info: |- - { - "server-version": "master-3e85166", - "original-request": { - "url": "/k8s/v1.6/net", - "date": "Sat Sep 09 2017 12:33:59 GMT+0000 (UTC)" - }, - "email-address": "support@weave.works" - } - labels: - name: weave-net - namespace: kube-system - - apiVersion: rbac.authorization.k8s.io/v1beta1 - kind: ClusterRole - metadata: - name: weave-net - annotations: - cloud.weave.works/launcher-info: |- - { - "server-version": "master-3e85166", - "original-request": { - "url": "/k8s/v1.6/net", - "date": "Sat Sep 09 2017 12:33:59 GMT+0000 (UTC)" - }, - "email-address": "support@weave.works" - } - labels: - name: weave-net - rules: - - apiGroups: - - '' - resources: - - pods - - namespaces - - nodes - verbs: - - get - - list - - watch - - apiGroups: - - extensions - resources: - - networkpolicies - verbs: - - get - - list - - watch - - apiVersion: rbac.authorization.k8s.io/v1beta1 - kind: ClusterRoleBinding - metadata: - name: weave-net - annotations: - cloud.weave.works/launcher-info: |- - { - "server-version": "master-3e85166", - "original-request": { - "url": "/k8s/v1.6/net", - "date": "Sat Sep 09 2017 12:33:59 GMT+0000 (UTC)" - }, - "email-address": "support@weave.works" - } - labels: - name: weave-net - roleRef: - kind: ClusterRole - name: weave-net - apiGroup: rbac.authorization.k8s.io - subjects: - - kind: ServiceAccount - name: weave-net - namespace: kube-system - - apiVersion: extensions/v1beta1 - kind: DaemonSet - metadata: - name: weave-net - annotations: - cloud.weave.works/launcher-info: |- - { - "server-version": "master-3e85166", - "original-request": { - "url": "/k8s/v1.6/net", - "date": "Sat Sep 09 2017 12:33:59 GMT+0000 (UTC)" - }, - "email-address": "support@weave.works" - } - labels: - name: weave-net - namespace: kube-system - spec: - template: - metadata: - labels: - name: weave-net - spec: - containers: - - name: weave - command: - - /home/weave/launch.sh - env: - - name: HOSTNAME - valueFrom: - fieldRef: - apiVersion: v1 - fieldPath: spec.nodeName - image: 'weaveworks/weave-kube:2.0.4' - imagePullPolicy: Always - livenessProbe: - httpGet: - host: 127.0.0.1 - path: /status - port: 6784 - initialDelaySeconds: 30 - resources: - requests: - cpu: 10m - securityContext: - privileged: true - volumeMounts: - - name: weavedb - mountPath: /weavedb - - name: cni-bin - mountPath: /host/opt - - name: cni-bin2 - mountPath: /host/home - - name: cni-conf - mountPath: /host/etc - - name: dbus - mountPath: /host/var/lib/dbus - - name: lib-modules - mountPath: /lib/modules - - name: weave-npc - env: - - name: HOSTNAME - valueFrom: - fieldRef: - apiVersion: v1 - fieldPath: spec.nodeName - image: 'weaveworks/weave-npc:2.0.4' - imagePullPolicy: Always - resources: - requests: - cpu: 10m - securityContext: - privileged: true - hostNetwork: true - hostPID: true - restartPolicy: Always - securityContext: - seLinuxOptions: {} - serviceAccountName: weave-net - tolerations: - - effect: NoSchedule - operator: Exists - volumes: - - name: weavedb - hostPath: - path: /var/lib/weave - - name: cni-bin - hostPath: - path: /opt - - name: cni-bin2 - hostPath: - path: /home - - name: cni-conf - hostPath: - path: /etc - - name: dbus - hostPath: - path: /var/lib/dbus - - name: lib-modules - hostPath: - path: /lib/modules - updateStrategy: - type: RollingUpdate diff --git a/k8s/lib/vagrant/boxes/k8s-weave/external/weave-daemonset-k8s-1.7.yaml 
b/k8s/lib/vagrant/boxes/k8s-weave/external/weave-daemonset-k8s-1.7.yaml deleted file mode 100644 index 185288450f..0000000000 --- a/k8s/lib/vagrant/boxes/k8s-weave/external/weave-daemonset-k8s-1.7.yaml +++ /dev/null @@ -1,121 +0,0 @@ -apiVersion: v1 -kind: List -items: - - apiVersion: v1 - kind: ServiceAccount - metadata: - name: weave-net - annotations: - cloud.weave.works/launcher-info: |- - { - "server-version": "master-3e85166", - "original-request": { - "url": "/k8s/v1.5/net.yaml", - "date": "Sat Sep 09 2017 06:48:20 GMT+0000 (UTC)" - }, - "email-address": "support@weave.works" - } - labels: - name: weave-net - namespace: kube-system - - apiVersion: extensions/v1beta1 - kind: DaemonSet - metadata: - name: weave-net - annotations: - cloud.weave.works/launcher-info: |- - { - "server-version": "master-3e85166", - "original-request": { - "url": "/k8s/v1.5/net.yaml", - "date": "Sat Sep 09 2017 06:48:20 GMT+0000 (UTC)" - }, - "email-address": "support@weave.works" - } - labels: - name: weave-net - namespace: kube-system - spec: - template: - metadata: - annotations: - scheduler.alpha.kubernetes.io/tolerations: >- - [{"key":"dedicated","operator":"Equal","value":"master","effect":"NoSchedule"}] - labels: - name: weave-net - spec: - containers: - - name: weave - command: - - /home/weave/launch.sh - env: - - name: HOSTNAME - valueFrom: - fieldRef: - apiVersion: v1 - fieldPath: spec.nodeName - image: 'weaveworks/weave-kube:2.0.4' - imagePullPolicy: Always - livenessProbe: - httpGet: - host: 127.0.0.1 - path: /status - port: 6784 - initialDelaySeconds: 30 - resources: - requests: - cpu: 10m - securityContext: - privileged: true - volumeMounts: - - name: weavedb - mountPath: /weavedb - - name: cni-bin - mountPath: /host/opt - - name: cni-bin2 - mountPath: /host/home - - name: cni-conf - mountPath: /host/etc - - name: dbus - mountPath: /host/var/lib/dbus - - name: lib-modules - mountPath: /lib/modules - - name: weave-npc - env: - - name: HOSTNAME - valueFrom: - fieldRef: - apiVersion: v1 - fieldPath: spec.nodeName - image: 'weaveworks/weave-npc:2.0.4' - imagePullPolicy: Always - resources: - requests: - cpu: 10m - securityContext: - privileged: true - hostNetwork: true - hostPID: true - restartPolicy: Always - securityContext: - seLinuxOptions: {} - serviceAccountName: weave-net - volumes: - - name: weavedb - hostPath: - path: /var/lib/weave - - name: cni-bin - hostPath: - path: /opt - - name: cni-bin2 - hostPath: - path: /home - - name: cni-conf - hostPath: - path: /etc - - name: dbus - hostPath: - path: /var/lib/dbus - - name: lib-modules - hostPath: - path: /lib/modules diff --git a/k8s/lib/vagrant/boxes/k8s-weave/fetch_weave.sh b/k8s/lib/vagrant/boxes/k8s-weave/fetch_weave.sh deleted file mode 100755 index aec0abb00e..0000000000 --- a/k8s/lib/vagrant/boxes/k8s-weave/fetch_weave.sh +++ /dev/null @@ -1,13 +0,0 @@ -#!/bin/bash -distribution=${1:-"ubuntu"} - -sudo docker pull weaveworks/weave-kube:2.0.4 -sudo docker pull weaveworks/weave-npc:2.0.4 - -if [ "$distribution" = "ubuntu" ]; then - mkdir -p /home/ubuntu/setup/cni/weave - cp /vagrant/boxes/k8s-weave/external/weave-daemonset-k8s-1.6.yaml /home/ubuntu/setup/cni/weave/ -else - mkdir -p /home/vagrant/setup/cni/weave - cp /vagrant/boxes/k8s-weave/external/weave-daemonset-k8s-1.6.yaml /home/vagrant/setup/cni/weave/ -fi \ No newline at end of file diff --git a/k8s/lib/vagrant/boxes/k8s/cleanup_k8s.sh b/k8s/lib/vagrant/boxes/k8s/cleanup_k8s.sh deleted file mode 100755 index 39c3d8f178..0000000000 --- a/k8s/lib/vagrant/boxes/k8s/cleanup_k8s.sh +++ 
/dev/null @@ -1,35 +0,0 @@ -#!/bin/bash - -kubeversion=${1:-"1.7.5"} -distribution=${2:-"ubuntu"} - -# To fix the '/var/lib/kubelet not empty' error -sudo kubeadm reset - -if [ "$distribution" = "ubuntu" ]; then - -# Remove the deb packages from the /vagrant/ folder. -rm -rf /vagrant/workdir/debpkgs -# Cleaning up apt and bash history before packaging the box. -sudo mkdir -p /etc/systemd/system/apt-daily.timer.d/ -cat <<EOF | sudo tee /etc/systemd/system/apt-daily.timer.d/apt-daily.timer.conf > /dev/null -[Timer] -Persistent=false -EOF - -sudo systemctl disable apt-daily.service -sudo systemctl disable apt-daily.timer - -cat <<EOF | sudo tee /etc/apt/apt.conf.d/10periodic > /dev/null -APT::Periodic::Enable "0"; -EOF - -sudo apt-get clean -cat /dev/null > ~/.bash_history && history -c && exit -else -# Remove the rpm packages from the /vagrant/ folder. -rm -rf /vagrant/workdir/rpmpkgs - -cat /dev/null > ~/.bash_history && history -c && exit - -fi \ No newline at end of file diff --git a/k8s/lib/vagrant/boxes/k8s/fetch_k8scontainers.sh b/k8s/lib/vagrant/boxes/k8s/fetch_k8scontainers.sh deleted file mode 100755 index 2cb8933123..0000000000 --- a/k8s/lib/vagrant/boxes/k8s/fetch_k8scontainers.sh +++ /dev/null @@ -1,49 +0,0 @@ -#!/bin/bash - -kubeversion=${1:-"1.7.5"} -kuberegex_cni='^1[.][6-8][.][0-9][0-9]?$' - -# Get kubernetes containers from gcr.io -gcrUrlKube="gcr.io/google_containers/" -KUBEPACKAGES=(\ -kube-scheduler-amd64:v${kubeversion} \ -kube-apiserver-amd64:v${kubeversion} \ -kube-controller-manager-amd64:v${kubeversion} \ -kube-proxy-amd64:v${kubeversion} \ -) - -[[ $kubeversion =~ $kuberegex_cni ]] - -if [[ $? -eq 1 ]]; then - -gcrUrlExtra="gcr.io/google_containers/" -EXTRAPACKAGES=(\ -pause-amd64:3.0 \ -etcd-amd64:3.1.11 \ -k8s-dns-kube-dns-amd64:1.14.7 \ -k8s-dns-sidecar-amd64:1.14.7 \ -k8s-dns-dnsmasq-nanny-amd64:1.14.7 \ -) - -else - -gcrUrlExtra="gcr.io/google_containers/" -EXTRAPACKAGES=(\ -pause-amd64:3.0 \ -etcd-amd64:3.0.17 \ -k8s-dns-kube-dns-amd64:1.14.4 \ -k8s-dns-sidecar-amd64:1.14.4 \ -k8s-dns-dnsmasq-nanny-amd64:1.14.4 \ -) -fi - -# Pull kubernetes container images. -for i in "${!KUBEPACKAGES[@]}"; do - sudo docker pull $gcrUrlKube${KUBEPACKAGES[i]} -done - -# Pull kubernetes container images. -for i in "${!EXTRAPACKAGES[@]}"; do - sudo docker pull $gcrUrlExtra${EXTRAPACKAGES[i]} -done - diff --git a/k8s/lib/vagrant/boxes/k8s/fetch_kubeadm.sh b/k8s/lib/vagrant/boxes/k8s/fetch_kubeadm.sh deleted file mode 100755 index 3069846015..0000000000 --- a/k8s/lib/vagrant/boxes/k8s/fetch_kubeadm.sh +++ /dev/null @@ -1,80 +0,0 @@ -#!/bin/bash -kubeversion=${1:-"1.7.5"} -distribution=${2:-"ubuntu"} -docker=${3:-"docker-cs"} - -if [ "$distribution" = "ubuntu" ]; then - #Update the repository index - apt-get update && apt-get install -y apt-transport-https ca-certificates software-properties-common \ - socat ebtables - - curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - - - if [ "$docker" = "docker-ce" ]; then - - echo "Installing Docker CE..." - - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - - add-apt-repository \ - "deb https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \ - $(lsb_release -cs) \ - stable" - - apt-get update && apt-get install -y docker-ce=$(apt-cache madison docker-ce \ - | grep 17.03 | head -1 | awk '{print $3}') - else - apt-get update - # Install docker if you don't have it already. 
- apt-get install -y docker.io - fi - -systemctl enable docker && systemctl start docker - -cd /vagrant/workdir/debpkgs - -sudo dpkg -i {kubernetes-cni*.deb,kubelet_${kubeversion}-00*.deb,kubectl_${kubeversion}-00*.deb,kubeadm_${kubeversion}-00*.deb} - -else - -sudo setenforce 0 - -sudo tee -a /etc/yum.repos.d/kubernetes.repo <<EOF >/dev/null -[kubernetes] -name=Kubernetes -baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 -enabled=1 -gpgcheck=1 -repo_gpgcheck=1 -gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg -EOF - -sudo tee -a /etc/sysctl.d/k8s.conf <<EOF > /dev/null -net.bridge.bridge-nf-call-ip6tables = 1 -net.bridge.bridge-nf-call-iptables = 1 -EOF - -sudo sysctl --system - -echo "Installing Docker CE..." - -sudo yum install -y yum-utils \ -device-mapper-persistent-data \ -lvm2 socat ebtables - -sudo yum-config-manager \ ---add-repo \ -https://download.docker.com/linux/centos/docker-ce.repo - -sudo yum install -y --setopt=obsoletes=0 docker-ce-17.03.0.ce-1.el7.centos docker-ce-selinux-17.03.0.ce-1.el7.centos - -sudo systemctl enable docker && sudo systemctl start docker - - -cd /vagrant/workdir/rpmpkgs - -sudo rpm -i {*kubernetes-cni*.rpm,*kubelet-${kubeversion}-0*.rpm,*kubectl-${kubeversion}-0*.rpm,*kubeadm-${kubeversion}-0*.rpm} - -sudo sed -i -E 's/--cgroup-driver=systemd/--cgroup-driver=cgroupfs/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf - -fi -sudo systemctl enable kubelet && sudo systemctl start kubelet \ No newline at end of file diff --git a/k8s/lib/vagrant/boxes/k8s/prepare_k8s.sh b/k8s/lib/vagrant/boxes/k8s/prepare_k8s.sh deleted file mode 100755 index 6b5ced4594..0000000000 --- a/k8s/lib/vagrant/boxes/k8s/prepare_k8s.sh +++ /dev/null @@ -1,56 +0,0 @@ -#!/bin/bash -distribution=${1:-"ubuntu"} - -# Location of the k8s configure scripts -scriptloc="/vagrant/workdir/scripts/k8s" - -# Location of the sample k8s spec files -specloc="/vagrant/workdir/specs" - -# Download and install needed packages. -# Install JSON Parser (jq) for patching kube-proxy -if [ "$distribution" = "ubuntu" ]; then - - apt-get update - apt-get install -y unzip curl wget jq - - mkdir -p /home/ubuntu/setup/k8s - cd /home/ubuntu/setup/k8s - - cp ${scriptloc}/prepare_network.sh . - cp ${scriptloc}/configure_k8s_master.sh . - cp ${scriptloc}/configure_k8s_cred.sh . - cp ${scriptloc}/configure_k8s_cni.sh . - cp ${scriptloc}/configure_k8s_host.sh . - cp ${scriptloc}/configure_k8s_dashboard.sh . - - mkdir -p /home/ubuntu/demo/ - cd /home/ubuntu/demo/ - cp ${specloc}/demo-vdbench-openebs.yaml . - cp ${specloc}/demo-fio-openebs.yaml . -else - - yum install -y unzip curl wget - - wget -O jq https://github.com/stedolan/jq/releases/download/jq-1.5/jq-linux64 - chmod +x ./jq - sudo mv jq /usr/bin - - yum install -y iscsi-initiator-utils - systemctl enable iscsid && systemctl start iscsid - - mkdir -p /home/vagrant/setup/k8s - cd /home/vagrant/setup/k8s - - cp ${scriptloc}/prepare_network.sh . - cp ${scriptloc}/configure_k8s_master.sh . - cp ${scriptloc}/configure_k8s_cred.sh . - cp ${scriptloc}/configure_k8s_cni.sh . - cp ${scriptloc}/configure_k8s_host.sh . - cp ${scriptloc}/configure_k8s_dashboard.sh . - - mkdir -p /home/vagrant/demo/ - cd /home/vagrant/demo/ - cp ${specloc}/demo-vdbench-openebs.yaml . - cp ${specloc}/demo-fio-openebs.yaml . 
-fi \ No newline at end of file diff --git a/k8s/lib/vagrant/boxes/openebs/cleanup_openebs.sh b/k8s/lib/vagrant/boxes/openebs/cleanup_openebs.sh deleted file mode 100755 index 8713f11495..0000000000 --- a/k8s/lib/vagrant/boxes/openebs/cleanup_openebs.sh +++ /dev/null @@ -1,11 +0,0 @@ -#!/bin/bash - -# Cleaning up apt and bash history before packaging the box. -sudo mkdir -p /etc/systemd/system/apt-daily.timer.d/ -cat <<EOF | sudo tee /etc/systemd/system/apt-daily.timer.d/apt-daily.timer.conf > /dev/null -[Timer] -Persistent=false -EOF - -sudo apt-get clean -cat /dev/null > ~/.bash_history && history -c && exit \ No newline at end of file diff --git a/k8s/lib/vagrant/boxes/openebs/prepare_openebs.sh b/k8s/lib/vagrant/boxes/openebs/prepare_openebs.sh deleted file mode 100755 index 6c414082e8..0000000000 --- a/k8s/lib/vagrant/boxes/openebs/prepare_openebs.sh +++ /dev/null @@ -1,82 +0,0 @@ -#!/bin/bash - -# Get the mode for installing - Master or Host. -releasetag=$1 - -# Get the releases from github. -releaseurl="https://api.github.com/repos/openebs/maya/releases" - -# Get the specs from github. -specurl="https://api.github.com/repos/openebs/openebs/contents\ -/k8s/demo/specs" - -# Get the scripts from github. -scripturl="https://api.github.com/repos/openebs/openebs/contents\ -/k8s/lib/scripts" - -# Get the bootstrap scripts from github. -bootstrapurl="https://raw.githubusercontent.com/openebs/maya/master\ -/scripts/install_bootstrap.sh" - -# For ubuntu/xenial64 only -useradd vagrant --password vagrant --home /home/vagrant --create-home -s /bin/bash -echo "vagrant ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/vagrant -mkdir -p /home/vagrant/.ssh -wget --no-check-certificate https://raw.github.com/mitchellh/vagrant/master/keys/vagrant.pub -O /home/vagrant/.ssh/authorized_keys -chmod 0700 /home/vagrant/.ssh -chmod 0600 /home/vagrant/.ssh/authorized_keys -chown -R vagrant /home/vagrant/.ssh - -# Update apt and get dependencies -sudo apt-get update -sudo apt-get install -y unzip curl wget - -# Install Maya binaries -if [ -z "$releasetag" ]; then - -wget $(curl -sS $releaseurl | grep "browser_download_url" \ -| awk '{print $2}' | tr -d '"' | head -n 2 | tail -n 1) - -else - -wget "https://github.com/openebs/maya/releases/download/\ -$releasetag/maya-linux_amd64.zip" - -fi - -unzip maya-linux_amd64.zip -sudo mv maya /usr/bin -rm -rf maya-linux_amd64.zip - -mapfile -t scriptdownloadurls < <(curl -sS $scripturl \ -| grep "download_url" | awk '{print $2}' \ -| tr -d '",') - -mkdir -p /home/ubuntu/demo/maya/scripts -cd /home/ubuntu/demo/maya/scripts - -scriptlength=${#scriptdownloadurls[@]} -for ((i = 0; i != scriptlength; i++)); do - if [ -z "${scriptdownloadurls[i]##*configure_omm.sh*}" -o \ - -z "${scriptdownloadurls[i]##*configure_osh.sh*}" ] ;then - wget "${scriptdownloadurls[i]}" - fi -done - -wget $bootstrapurl - -mapfile -t specdownloadurls < <(curl -sS $specurl \ -| grep "download_url" | awk '{print $2}' \ -| tr -d '",') - -#Create demo directory and download specs -mkdir -p /home/ubuntu/demo/maya/spec -cd /home/ubuntu/demo/maya/spec - -speclength=${#specdownloadurls[@]} -for ((i = 0; i != speclength; i++)); do - if [ -z "${specdownloadurls[i]##*hcl*}" ] ;then - wget "${specdownloadurls[i]}" - fi -done - diff --git a/k8s/lib/vagrant/boxes/ubuntu-xenial/prepare_network.sh b/k8s/lib/vagrant/boxes/ubuntu-xenial/prepare_network.sh deleted file mode 100755 index 1909e890b5..0000000000 --- a/k8s/lib/vagrant/boxes/ubuntu-xenial/prepare_network.sh +++ /dev/null @@ -1,9 +0,0 @@ -#!/bin/bash - -# For ubuntu/xenial64 boxes created via vagrant -# /etc/resolv.conf is set with 
nameserver as 10.0.2.3 -# Change this to 127.0.0.1 - -sudo sed -i "s/10\.0\.2\.3/8\.8\.8\.8/g" /etc/resolv.conf -sudo sed -i "s/cbblr\.com/domain\.name/g" /etc/resolv.conf - diff --git a/k8s/lib/vagrant/boxes/ubuntu-xenial/prepare_vagrant_user.sh b/k8s/lib/vagrant/boxes/ubuntu-xenial/prepare_vagrant_user.sh deleted file mode 100755 index 186956c6f0..0000000000 --- a/k8s/lib/vagrant/boxes/ubuntu-xenial/prepare_vagrant_user.sh +++ /dev/null @@ -1,19 +0,0 @@ -#!/bin/bash - -# For ubuntu/xenial64 only -# TODO: why is this only for ubuntu/xenial64 - -useradd vagrant --password vagrant \ - --home /home/vagrant \ - --create-home -s /bin/bash - -echo "vagrant ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/vagrant -mkdir -p /home/vagrant/.ssh -wget --no-check-certificate \ - https://raw.github.com/mitchellh/vagrant/master/keys/vagrant.pub \ - -O /home/vagrant/.ssh/authorized_keys - -chmod 0700 /home/vagrant/.ssh -chmod 0600 /home/vagrant/.ssh/authorized_keys -chown -R vagrant /home/vagrant/.ssh - diff --git a/k8s/lib/vagrant/create_vagrantbox.sh b/k8s/lib/vagrant/create_vagrantbox.sh deleted file mode 100755 index a40233fd5a..0000000000 --- a/k8s/lib/vagrant/create_vagrantbox.sh +++ /dev/null @@ -1,279 +0,0 @@ -#!/bin/bash -#set -x -kubeversion= -distribution= -docker_version= -kuberegex='^[1-9][.][0-9][0-9]?[.][0-9][0-9]?$' -kuberegex_cni='^1[.][6-8][.][0-9][0-9]?$' - -debpackageurl="https://packages.cloud.google.com/apt/dists/kubernetes-xenial/main/binary-amd64/Packages" -rpmpackageurl="https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64/repodata/primary.xml" - -function show_help() { - cat << EOF -Usage: $(basename "$0") --kube-version=<version> [--base-os=<os>] -Creates a vagrant box with the provided Kubernetes version. - -Options and arguments for the tool. ---help Display this help and exit. ---kube-version Kubemaster Version to be used for the cluster. Example:- 1.8.0, 1.8.5, 1.9.0. ---base-os Linux Distribution to be used for creating the vagrant box. Supported:- ubuntu & centos. - Defaults to "ubuntu" -EOF -} - -if (($# == 0)); then - show_help - exit 2 -fi - -while :; do - case $1 in - -h|-\?|--help) # Call a "show_help" function to - # display a synopsis, then exit. - show_help - exit - ;; - - --kube-version) # Takes an option argument, - # ensuring it has been specified. - if [ -n "$2" ]; then - - if [[ "$2" =~ $kuberegex ]]; then - kubeversion=$2 - else - printf 'ERROR: Invalid Kubernetes Version.\n' >&2 - show_help - exit 1 - fi - - shift - else - printf 'ERROR: "--kube-version" requires a non-empty option argument.\n\n' >&2 - show_help - exit 1 - fi - ;; - - --kube-version=?*) # Delete everything up to "=" - # and assign the remainder. - if [[ "${1#*=}" =~ $kuberegex ]]; then - kubeversion=${1#*=} - else - printf 'ERROR: Invalid Kubernetes Version.\n\n' >&2 - show_help - exit 1 - fi - ;; - - --kube-version=) # Handle the case of an empty --masterip= - printf 'ERROR: "--kube-version" requires a non-empty option argument.\n\n' >&2 - show_help - exit 1 - ;; - - --base-os) # Takes an option argument, - # ensuring it has been specified. - if [ -n "$2" ]; then - distribution="$(echo $2 | tr '[:upper:]' '[:lower:]')" - shift - else - printf 'ERROR: "--base-os" requires a non-empty option argument.\n' >&2 - show_help - exit 1 - fi - ;; - - --base-os=?*) # Delete everything up to "=" - # and assign the remainder. 
- if [ -n "${1#*=}" ]; then - distribution="$(echo ${1#*=} | tr '[:upper:]' '[:lower:]')" - shift - else - printf 'ERROR: "--base-os" requires a non-empty option argument.\n' >&2 - show_help - exit 1 - fi - ;; - - --base-os=) # Handle the case of an empty --base-os= - distribution="ubuntu" - ;; - - --) # End of all options. - shift - break - ;; - - -?*) - printf 'WARN: Unknown option (ignored): %s\n' "$1" >&2 - ;; - - *) # Default case: If no more options - # then break out of the loop. - break - esac -shift -done - -if [ -z "$kubeversion" ]; then - echo "Kubernetes version is mandatory." - show_help - exit -fi - -if [ -z "$distribution" ]; then - echo Defaulting to Ubuntu Distro. - distribution="ubuntu" -fi - -function fetch_k8s_scripts(){ - mkdir -p workdir/scripts/k8s/ - cp ../scripts/configure_k8s_master.sh workdir/scripts/k8s/ - sed -i "s/.*kubeversion=.*/kubeversion=v${kubeversion}/g" workdir/scripts/k8s/configure_k8s_master.sh - - - cp boxes/ubuntu-xenial/prepare_network.sh workdir/scripts/k8s - cp ../scripts/configure_k8s_host.sh workdir/scripts/k8s/ - cp ../scripts/configure_k8s_cred.sh workdir/scripts/k8s/ - cp ../scripts/configure_k8s_dashboard.sh workdir/scripts/k8s/ - cp ../scripts/configure_k8s_cni.sh workdir/scripts/k8s/ - -} - -function fetch_specs(){ - mkdir -p workdir/specs - cp ../../demo/specs/demo-vdbench-openebs.yaml workdir/specs/ - cp ../../demo/specs/demo-fio-openebs.yaml workdir/specs/ -} - -function fetch_k8s_debpkgs(){ - mkdir -p workdir/debpkgs - - mapfile -t packagedownloadurls < <(curl -sS $debpackageurl \ - | grep _$kubeversion | awk '{print $2}' \ - | cut -d '/' -f2) - - length=${#packagedownloadurls[@]} - - if [ "$length" -eq 0 ]; then - echo "Unable to download packages for the specified Version." - echo "Run the script again. If the problem persists try with a different Version." - cleanup - exit - fi - - for ((i = 0; i != length; i++)); do - wget "https://packages.cloud.google.com/apt/pool/${packagedownloadurls[i]}" -P workdir/debpkgs - done - - [[ $kubeversion =~ $kuberegex_cni ]] - - if [[ $? -eq 1 ]]; then - wget https://packages.cloud.google.com/apt/pool/kubernetes-cni_0.6.0-00_amd64_43460dd3c97073851f84b32f5e8eebdc84fadedb5d5a00d1fc6872f30a4dd42c.deb \ - -P workdir/debpkgs - else - wget https://packages.cloud.google.com/apt/pool/kubernetes-cni_0.5.1-00_amd64_08cbe5c42366ec3184cc91a4353f6e066f2d7325b4db1cb4f87c7dcc8c0eb620.deb \ - -P workdir/debpkgs - fi -} - -function fetch_k8s_rpmpkgs(){ - mkdir -p workdir/rpmpkgs - - mapfile -t packagedownloadurls < <(curl -sS $rpmpackageurl \ - | grep -- -$kubeversion | grep "location href" \ - | awk '{print $2}' | cut -d '/' -f4 | cut -d '"' -f1) - - - length=${#packagedownloadurls[@]} - - if [ "$length" -eq 0 ]; then - echo "Unable to download packages for the specified Version." - echo "Run the script again. If the problem persists try with a different Version." - cleanup - exit - fi - - for ((i = 0; i != length; i++)); do - wget "https://packages.cloud.google.com/yum/pool/${packagedownloadurls[i]}" -P workdir/rpmpkgs - done - - [[ $kubeversion =~ $kuberegex_cni ]] - - if [[ $? 
-eq 1 ]]; then - - wget https://packages.cloud.google.com/yum/pool/fe33057ffe95bfae65e2f269e1b05e99308853176e24a4d027bc082b471a07c0-kubernetes-cni-0.6.0-0.x86_64.rpm \ - -P workdir/rpmpkgs - else - - wget https://packages.cloud.google.com/yum/pool/e7a4403227dd24036f3b0615663a371c4e07a95be5fee53505e647fd8ae58aa6-kubernetes-cni-0.5.1-0.x86_64.rpm \ - -P workdir/rpmpkgs - - fi -} - -function cleanup(){ - rm -rf workdir -} - -echo Download Kubernetes Packages -if [ "$distribution" = "ubuntu" ]; then - echo Choose the Docker installation: - select docker in "Docker CE" "Docker Engine" - do - case $docker in - "Docker CE"|"Docker Engine") - break - ;; - *) - echo "Invalid area" - ;; - esac - done - - if [ "$docker" = "Docker CE" ]; then - docker_version="docker-ce" - else - docker_version="docker-cs" - fi - fetch_k8s_debpkgs -else - docker_version="docker-ce" - fetch_k8s_rpmpkgs -fi - -echo Gathering all the K8s configure scripts to be package -fetch_k8s_scripts - -echo Gathering sample k8s specs -fetch_specs - -echo Launch VM - -KUBE_VERSION=${kubeversion} DISTRIBUTION=${distribution} DOCKER=${docker_version} vagrant up -vagrant package --output workdir/kubernetes-${kubeversion}-${distribution}.box - -echo Test the new box -vagrant box add --name openebs/k8s-test-box --force workdir/kubernetes-${kubeversion}-${distribution}.box -mkdir workdir/test -currdir=`pwd` -cp test/k8s/Vagrantfile workdir/test/ -cd workdir/test; - -if [ "$distribution" = "centos" ]; then - sudo sed -i 's/vmCfg.ssh.username = "ubuntu"/vmCfg.ssh.username = "vagrant"/g' Vagrantfile - sudo sed -i 's/echo "ubuntu:ubuntu"/echo "vagrant:vagrant"/g' Vagrantfile -fi -vagrant up -#vagrant destroy -f -#vagrant box remove openebs/k8s-test-box -#cd $currdir - -echo Destroy the default vm -#vagrant destroy default - -echo Clear working directory -#cleanup - - diff --git a/k8s/lib/vagrant/patch/Vagrantfile b/k8s/lib/vagrant/patch/Vagrantfile deleted file mode 100644 index 508d1c5c48..0000000000 --- a/k8s/lib/vagrant/patch/Vagrantfile +++ /dev/null @@ -1,117 +0,0 @@ -# -*- mode: ruby -*- -# vi: set ft=ruby : - -# All Vagrant configuration is done below. The "2" in Vagrant.configure -# configures the configuration version (we support older styles for -# backwards compatibility). Please don't change it unless you know what -# you're doing. - -BOX_MODE_OPENEBS = 1 -BOX_MODE_KUBERNETES = 2 - -box_Mode=ENV['OPENEBS_BUILD_BOX'] || 2 - -Vagrant.configure("2") do |config| - # The most common configuration options are documented and commented below. - # For a complete reference, please see the online documentation at - # https://docs.vagrantup.com. - - if ((box_Mode.to_i < BOX_MODE_OPENEBS.to_i) || \ - (box_Mode.to_i > BOX_MODE_KUBERNETES.to_i)) - - puts "Invalid value set for OPENEBS_BUILD_BOX." - puts "Usage: OPENEBS_BUILD_BOX=1 for OpenEBS." - puts "Usage: OPENEBS_BUILD_BOX=2 for Kubernetes." - puts "Defaulting to OpenEBS..." - puts "Do you want to continue?(y/n):" - - input = STDIN.gets.chomp - while 1 do - if(input == "n") - Kernel.exit!(0) - elsif(input == "y") - break - else - puts "Invalid input: type 'y' or 'n'" - input = STDIN.gets.chomp - end - end - - box_Mode = 1 - end - - # Every Vagrant development environment requires a box. You can search for - # boxes at https://atlas.hashicorp.com/search. 
- config.vm.box = "openebs/k8s-1.7.5" - - # Needed for ubuntu/xenial64 packaged boxes - # The default user is ubuntu for those boxes - config.ssh.username = "ubuntu" - config.vm.provision "shell", inline: <<-SHELL - echo "ubuntu:ubuntu" | sudo chpasswd - SHELL - - # Disable automatic box update checking. If you disable this, then - # boxes will only be checked for updates when the user runs - # `vagrant box outdated`. This is not recommended. - config.vm.box_check_update = false - - # Create a forwarded port mapping which allows access to a specific port - # within the machine from a port on the host machine. In the example below, - # accessing "localhost:8080" will access port 80 on the guest machine. - # config.vm.network "forwarded_port", guest: 80, host: 8080 - - # Create a private network, which allows host-only access to the machine - # using a specific IP. - # config.vm.network "private_network", ip: "192.168.33.10" - - # Create a public network, which generally matched to bridged network. - # Bridged networks make the machine appear as another physical device on - # your network. - # config.vm.network "public_network" - - # Share an additional folder to the guest VM. The first argument is - # the path on the host to the actual folder. The second argument is - # the path on the guest to mount the folder. And the optional third - # argument is a set of non-required options. - # config.vm.synced_folder "../data", "/vagrant_data" - - # Provider-specific configuration so you can fine-tune various - # backing providers for Vagrant. These expose provider-specific options. - # Example for VirtualBox: - # - # config.vm.provider "virtualbox" do |vb| - # # Display the VirtualBox GUI when booting the machine - # vb.gui = true - # - # # Customize the amount of memory on the VM: - # vb.memory = "1024" - # end - # - # View the documentation for the provider you are using for more - # information on available options. - - # Define a Vagrant Push strategy for pushing to Atlas. Other push strategies - # such as FTP and Heroku are also available. See the documentation at - # https://docs.vagrantup.com/v2/push/atlas.html for more information. - # config.push.define "atlas" do |push| - # push.app = "YOUR_ATLAS_USERNAME/YOUR_APPLICATION_NAME" - # end - - # Enable provisioning with a shell script. Additional provisioners such as - # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the - # documentation for more information about their specific syntax and use. - # config.vm.provision "shell", inline: <<-SHELL - # apt-get update - # apt-get install -y apache2 - # SHELL - if box_Mode.to_i == BOX_MODE_KUBERNETES.to_i - config.vm.provision :shell, - path: "patch_k8s.sh", - privileged: true - elsif box_Mode.to_i == BOX_MODE_OPENEBS.to_i - config.vm.provision :shell, - path: "patch_openebs.sh", - privileged: true - end -end diff --git a/k8s/lib/vagrant/patch/patch_k8s.sh b/k8s/lib/vagrant/patch/patch_k8s.sh deleted file mode 100755 index 92afab612b..0000000000 --- a/k8s/lib/vagrant/patch/patch_k8s.sh +++ /dev/null @@ -1,22 +0,0 @@ -#!/bin/bash - -# Insert the patch instructions here - -# Patch 01 - Copy the latest configure_k8s_weave.sh -# Location of the k8s configure scripts -scriptloc="/vagrant/workdir/scripts/k8s" -cd /home/ubuntu/setup/k8s -cp ${scriptloc}/configure_k8s_weave.sh . -# END Patch 01 - - -# DONOT MODIFY BELOW THIS LINE -# Cleaning up apt and bash history before packaging the box. 
-sudo mkdir -p /etc/systemd/system/apt-daily.timer.d/ -cat <<EOF | sudo tee /etc/systemd/system/apt-daily.timer.d/apt-daily.timer.conf > /dev/null -[Timer] -Persistent=false -EOF - -sudo apt-get clean -cat /dev/null > ~/.bash_history && history -c && exit diff --git a/k8s/lib/vagrant/patch/patch_openebs.sh b/k8s/lib/vagrant/patch/patch_openebs.sh deleted file mode 100755 index 19340bc89e..0000000000 --- a/k8s/lib/vagrant/patch/patch_openebs.sh +++ /dev/null @@ -1,16 +0,0 @@ -#!/bin/bash - -# Insert the patch instructions here - - -# DONOT MODIFY BELOW THIS LINE -# Cleaning up apt and bash history before packaging the box. -# Cleaning up apt and bash history before packaging the box. -sudo mkdir -p /etc/systemd/system/apt-daily.timer.d/ -cat <<EOF | sudo tee /etc/systemd/system/apt-daily.timer.d/apt-daily.timer.conf > /dev/null -[Timer] -Persistent=false -EOF - -sudo apt-get clean -cat /dev/null > ~/.bash_history && history -c && exit diff --git a/k8s/lib/vagrant/patch/patch_vagrantbox.sh b/k8s/lib/vagrant/patch/patch_vagrantbox.sh deleted file mode 100755 index 71de9ce22d..0000000000 --- a/k8s/lib/vagrant/patch/patch_vagrantbox.sh +++ /dev/null @@ -1,52 +0,0 @@ -#!/bin/bash - -kubeversion="1.7.5" - -function fetch_k8s_scripts(){ - mkdir -p workdir/scripts/k8s/ - cp ../../scripts/configure_k8s_master.sh workdir/scripts/k8s/ - cp ../../scripts/configure_k8s_host.sh workdir/scripts/k8s/ - cp ../../scripts/configure_k8s_weave.sh workdir/scripts/k8s/ - cp ../../scripts/configure_k8s_cred.sh workdir/scripts/k8s/ - cp ../../scripts/configure_k8s_dashboard.sh workdir/scripts/k8s/ -} - -function fetch_specs(){ - mkdir -p workdir/specs - cp ../../../demo/specs/demo-vdbench-openebs.yaml workdir/specs/ - cp ../../../demo/specs/demo-fio-openebs.yaml workdir/specs/ -} - -function cleanup(){ - rm -rf workdir -} - -#echo Gathering all the K8s configure scripts to be package -fetch_k8s_scripts - -#echo Gathering sample k8s specs -fetch_specs - -#echo Launch VM -vagrant up -vagrant package --output workdir/kubernetes-${kubeversion}.box - -#echo Test the new box -mkdir -p workdir/test -vagrant box add --name openebs/k8s-test-box --force workdir/kubernetes-${kubeversion}.box -currdir=`pwd` -echo Launch Test VM -cp ../test/k8s/Vagrantfile workdir/test/ -cd workdir/test; -vagrant up -#vagrant destroy -f -#vagrant box remove openebs/k8s-test-box -#cd $currdir - -echo Destroy the default vm -#vagrant destroy default - -echo Clear working directory -#cleanup - - diff --git a/k8s/lib/vagrant/test/Vagrantfile b/k8s/lib/vagrant/test/Vagrantfile deleted file mode 100644 index 1ece96e80e..0000000000 --- a/k8s/lib/vagrant/test/Vagrantfile +++ /dev/null @@ -1,64 +0,0 @@ -# -*- mode: ruby -*- -# vi: set ft=ruby : - -# All Vagrant configuration is done below. The "2" in Vagrant.configure -# configures the configuration version (we support older styles for -# backwards compatibility). Please don't change it unless you know what -# you're doing. - -BOX_MODE_OPENEBS = 1 -BOX_MODE_KUBERNETES = 2 - -box_Mode=ENV['OPENEBS_BUILD_BOX'] || 2 - -Vagrant.configure("2") do |config| - # The most common configuration options are documented and commented below. - # For a complete reference, please see the online documentation at - # https://docs.vagrantup.com. - - if ((box_Mode.to_i < BOX_MODE_OPENEBS.to_i) || (box_Mode.to_i > BOX_MODE_KUBERNETES.to_i)) - puts "Invalid value set for OPENEBS_BUILD_BOX." - puts "Usage: OPENEBS_BUILD_BOX=1 for OpenEBS." - puts "Usage: OPENEBS_BUILD_BOX=2 for Kubernetes." - puts "Defaulting to OpenEBS..." 
- puts "Do you want to continue?(y/n):" - - input = STDIN.gets.chomp - while 1 do - if(input == "n") - Kernel.exit!(0) - elsif(input == "y") - break - else - puts "Invalid input: type 'y' or 'n'" - input = STDIN.gets.chomp - end - end - - box_Mode = 1 - end - - # Disable automatic box update checking. If you disable this, then - # boxes will only be checked for updates when the user runs - # `vagrant box outdated`. This is not recommended. - config.vm.box_check_update = false - - # Needed for ubuntu/xenial64 packaged boxes - # The default user is ubuntu for those boxes - config.ssh.username = "ubuntu" - config.vm.provision "shell", inline: <<-SHELL - echo "ubuntu:ubuntu" | sudo chpasswd - SHELL - - if box_Mode.to_i == BOX_MODE_KUBERNETES.to_i - config.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "k8s-console.log")] - end - config.vm.box = "openebs/k8s-1.7" - elsif box_Mode.to_i == BOX_MODE_OPENEBS.to_i - config.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "openebs-console.log")] - end - config.vm.box = "ubuntu/xenial64" - end -end diff --git a/k8s/lib/vagrant/test/k8s/1.6/README.md b/k8s/lib/vagrant/test/k8s/1.6/README.md deleted file mode 100644 index 61c90ff14d..0000000000 --- a/k8s/lib/vagrant/test/k8s/1.6/README.md +++ /dev/null @@ -1,54 +0,0 @@ -# Installing Kubernetes 1.6 and OpenEBS Clusters on Ubuntu 16.04 - -This Vagrantfile helps in setting up VirtualBox VMs with the following configuration: -- Kubernetes 1.6 Cluster with Master and Minion nodes using kubeadm -- OpenEBS Cluster ( on dedicated VMs) - -This Vagrantfile can be used on laptop or Baremetal server installed with Ubuntu 16.04 and Virtualization Enabled - - -## Prerequisites - -Verify that you have the following required software installed on your Ubuntu 16.04 machine: -``` -1.Vagrant (>=1.9.1) -2.VirtualBox 5.1 -3.curl or wget or git, etc., to download the Vagrant file. -``` - -## Download and Verify - -Setup your local directory, where the demo code will be downloaded. Let us call this as $demo-folder - -``` -mkdir k8s-demo -cd k8s-demo -wget https://raw.githubusercontent.com/openebs/openebs/master/k8s/lib/vagrant/test/k8s/1.6/Vagrantfile -vagrant status -``` - -### Verify - -You should see output similar to this: -``` -ubuntu-host:~/k8s-demo$ vagrant status -Current machine states: - -kubemaster-01 not created (virtualbox) -kubeminion-01 not created (virtualbox) -omm-01 not created (virtualbox) -osh-01 not created (virtualbox) -osh-02 not created (virtualbox) - -This environment represents multiple VMs. The VMs are all listed -above with their current state. For more information about a specific -VM, run `vagrant status NAME`. -``` - -## Bringing up K8s Cluster - -Just use *vagrant up* to bring up the cluster. 
- -``` -ubuntu-host:~/k8s-demo$ vagrant up -``` diff --git a/k8s/lib/vagrant/test/k8s/1.6/Vagrantfile b/k8s/lib/vagrant/test/k8s/1.6/Vagrantfile deleted file mode 100644 index db0dce50e1..0000000000 --- a/k8s/lib/vagrant/test/k8s/1.6/Vagrantfile +++ /dev/null @@ -1,408 +0,0 @@ -# -*- mode: ruby -*- -# vi: set ft=ruby : - -# This is parameterized Vagrantfile, that can used for any of the following: -# - Launch VMs auto-configured with kubernetes cluster with dedicated openebs -# - Launch VMs auto-configured with kubernetes cluster with hyperconverged openebs -# - Launch VMs for manual installation of kubernetes or maya clusters or both -# -# The configurable options include: -# - Specify the number of VMs / node types -# - Specify the CPU/RAM for each type of node -# - Specify the desired kubernetes cluster installation option - kubeadm, kargo, manual -# - Specify the base operating system - Ubuntu, CentOS, etc., -# - Specify the kubernetes pod network - flannel, weave, calico, etc,. -# - In case of dedicated, specify the storage network - host, etc., - -# Specify the OpenEBS Deployment Mode - dedicated=1 (default) or hyperconverged=2 -DEPLOY_MODE_NONE = 0 -DEPLOY_MODE_DEDICATED = 1 -DEPLOY_MODE_HC = 2 - -# Changed DEPLOY_MODE from Constant to Local variable deploy_Mode to avoid rewrite warnings on constants. -deploy_Mode=ENV['OPENEBS_DEPLOY_MODE'] || 2 - -# Specify the release versions to be installed -MAYA_RELEASE_TAG = ENV['MAYA_RELEASE_TAG'] || "0.2" - -# TODO - Verify -# LC_ALL is not set on OsX, it will inherit it from SSH on Linux though, -# so might want it to be a conditional -ENV['LC_ALL']="en_US.UTF-8" - -# Specify the number of Kubernetes Master nodes, CPU/RAM per node. -KM_NODES = ENV['KM_NODES'] || 1 -KM_MEM = ENV['KM_MEM'] || 2048 -KM_CPUS = ENV['KM_CPUS'] || 2 - -# Specify the number of Kubernetes Minion nodes, CPU/RAM per node. -KH_NODES = ENV['KH_NODES'] || 2 -KH_MEM = ENV['KH_MEM'] || 1024 -KH_CPUS = ENV['KH_CPUS'] || 2 - -# Specify the number of OpenEBS Master nodes, CPU/RAM per node. (Applicable only for dedicated mode) -MM_NODES = ENV['MM_NODES'] || 1 -MM_MEM = ENV['MM_MEM'] || 1024 -MM_CPUS = ENV['MM_CPUS'] || 1 - -# Specify the number of OpenEBS Storage Host nodes, CPU/RAM per node. (Applicable only for dedicated mode) -MH_NODES = ENV['MH_NODES'] || 2 -MH_MEM = ENV['MH_MEM'] || 1024 -MH_CPUS = ENV['MH_CPUS'] || 1 - -# Don't touch below this line, unless you know what you're doing! - -# Vagrantfile API/syntax version. -VAGRANTFILE_API_VERSION = "2" -Vagrant.require_version ">= 1.9.1" - -# Local Variables -machine_ip_address = %Q(ifconfig | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1) -master_ip_address = "" -host_ip_address = "" -get_token_name = "" -token_name = "" -get_token = "" -token = "" - -required_plugins = %w(vagrant-cachier vagrant-triggers) - -required_plugins.each do |plugin| - need_restart = false - unless Vagrant.has_plugin? plugin - system "vagrant plugin install #{plugin}" - need_restart = true - end - exec "vagrant #{ARGV.join(' ')}" if need_restart -end - - -def configureVM(vmCfg, hostname, cpus, mem) - - # Default timeout is 300 sec for boot. - # Uncomment the following line and set the desired timeout value. 
- # vmCfg.vm.boot_timeout = 300 - - vmCfg.vm.hostname = hostname - vmCfg.vm.network "private_network", type: "dhcp" - - # Needed for ubuntu/xenial64 packaged boxes - # The default user is ubuntu for those boxes - vmCfg.ssh.username = "ubuntu" - vmCfg.vm.provision "shell", inline: <<-SHELL - echo "ubuntu:ubuntu" | sudo chpasswd - SHELL - - # Adding Vagrant-cachier - if Vagrant.has_plugin?("vagrant-cachier") - vmCfg.cache.scope = :machine - vmCfg.cache.enable :apt - vmCfg.cache.enable :gem - end - - # Set resources w.r.t Virtualbox provider - vmCfg.vm.provider "virtualbox" do |vb| - # Uncomment the following line, to launch the Virtual Box console. - # Useful for debugging cases, where the VM doesn't allow login into console - # vb.gui = true - vb.memory = mem - vb.cpus = cpus - vb.customize ["modifyvm", :id, "--cableconnected1", "on"] - end - - return vmCfg -end - -# Entry point of this Vagrantfile -Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| - - # Don't look for updates - if Vagrant.has_plugin?("vagrant-vbguest") then - config.vbguest.auto_update = false - end - - # Check for invalid deployment modes and default to DEDICATED MODE. - if ((deploy_Mode.to_i < DEPLOY_MODE_NONE.to_i) || \ - (deploy_Mode.to_i > DEPLOY_MODE_HC.to_i)) - - puts "Invalid value set for OPENEBS_DEPLOY_MODE" - puts "Usage: OPENEBS_DEPLOY_MODE=0 for NONE" - puts "Usage: OPENEBS_DEPLOY_MODE=1 for DEDICATED" - puts "Usage: OPENEBS_DEPLOY_MODE=2 for HYPERCONVERGED" - puts "Defaulting to DEDICATED MODE..." - puts "Do you want to continue?(y/n):" - input = STDIN.gets.chomp - while 1 do - if(input == "n") - Kernel.exit!(0) - elsif(input == "y") - break - else - puts "Invalid input: type 'y' or 'n'" - input = STDIN.gets.chomp - end - end - deploy_Mode = 1 - end - - # K8s Master related only !! - 1.upto(KM_NODES.to_i) do |i| - hostname = "kubemaster-%02d" % [i] - cpus = KM_CPUS - mem = KM_MEM - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/k8s-1.6" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "kubemaster-%02d-console.log" % [i] )] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem) - - # Run in dedicated deployment mode or hyperconverged mode - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_HC.to_i)) - - # Install Kubernetes Master. - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/ubuntu/setup/k8s/configure_k8s_master.sh", - privileged: true - - # Setup K8s Credentials - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/ubuntu/setup/k8s/configure_k8s_cred.sh", - privileged: false - - # Setup Weave - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/ubuntu/setup/k8s/configure_k8s_weave.sh", - privileged: false - end - end - end - - # K8s Minions related only !! - 1.upto(KH_NODES.to_i) do |i| - hostname = "kubeminion-%02d" % [i] - cpus = KH_CPUS - mem = KH_MEM - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/k8s-1.6" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "kubeminion-%02d-console.log" % [i] )] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem) - - # Run in dedicated deployment mode or hyperconverged mode - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_HC.to_i)) - - # This runs only when the VM is first provisioned. - # Get the Master IP to join the cluster. 
- vmCfg.vm.provision :trigger, - :force => true, - :stdout => true, - :stderr => true do |trigger| - trigger.fire do - info"Getting the Master IP and Token to join the cluster..." - master_hostname = "kubemaster-01" - - get_ip_address = %Q(vagrant ssh \ - #{master_hostname} \ - -c 'ifconfig | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP \"\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1') - - master_ip_address = `#{get_ip_address}` - - if master_ip_address == "" - info"The Kubernetes Master is down, \ - bring it up and manually run: \ - configure_k8s_host.sh script on Kubernetes Minion." - else - get_token_name = %Q(vagrant ssh \ - #{master_hostname} -c \ - 'kubectl -n kube-system get secrets \ - | grep bootstrap-token | cut -d " " -f1\ - | cut -d "-" -f3\ - | sed "s|\r||g" \ - ') - - token_name = `#{get_token_name}` - - get_token = %Q(vagrant ssh \ - #{master_hostname} -c \ - 'kubectl -n kube-system get secrets bootstrap-token-#{token_name.strip} \ - -o yaml | grep token-secret | cut -d ":" -f2 \ - | cut -d " " -f2 | base64 -d \ - | sed "s|{||g;s|}||g" \ - | sed "s|:|.|g" | xargs echo') - - token = `#{get_token}` - - if token != "" - get_cluster_ip = %Q(vagrant ssh \ - #{master_hostname} \ - -c 'kubectl get svc -o yaml | grep clusterIP \ - | cut -d ":" -f2 | cut -d " " -f2') - - cluster_ip = `#{get_cluster_ip}` - info"Using Master IP - #{master_ip_address.strip}" - info"Using Token - #{token_name.strip}.#{token.strip}" - - @machine.communicate.sudo("bash \ - /home/ubuntu/setup/k8s/configure_k8s_host.sh \ - --masterip=#{master_ip_address.strip} \ - --token=#{token_name.strip}.#{token.strip} \ - --clusterip=#{cluster_ip.strip}") - - @machine.communicate.sudo("sudo systemctl daemon-reload") - @machine.communicate.sudo("sudo systemctl restart kubelet") - else - info"Invalid Token. Check your Kubernetes setup." - end - end - end - end - end - end - end - - # Maya Master related only !! 
- 1.upto(MM_NODES.to_i) do |i| - hostname = "omm-%02d" % [i] - cpus = MM_CPUS - mem = MM_MEM - - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_NONE.to_i)) - - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/openebs-0.2" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "openebs-openebs-0.2-cloudimg-console.log")] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem) - - # Run in dedicated deployment mode - if deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i - - # Install OpenEBS Maya Master - if MAYA_RELEASE_TAG == "" - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/ubuntu/demo/maya/scripts/configure_omm.sh", - privileged: true - else - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/ubuntu/demo/maya/scripts/configure_omm.sh", - :args => "--releasetag=#{MAYA_RELEASE_TAG}", - privileged: true - end - - vmCfg.vm.provision :trigger, - :force => true, - :stdout => true, - :stderr => true do |trigger| - trigger.fire do - get_ip_address = %Q(vagrant ssh \ - #{hostname} -c 'ifconfig | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1') - - host_ip_address = `#{get_ip_address}` - - @machine.communicate.sudo("echo \ - 'export NOMAD_ADDR=http://#{host_ip_address.strip}:4646' >> \ - /home/ubuntu/.profile") - @machine.communicate.sudo("echo \ - 'export MAPI_ADDR=http://#{host_ip_address.strip}:5656' >> \ - /home/ubuntu/.profile") - end - end - end - end - end - end - - # Maya Host related only !! - 1.upto(MH_NODES.to_i) do |i| - hostname = "osh-%02d" % [i] - cpus = MH_CPUS - mem = MH_MEM - - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_NONE.to_i)) - - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/openebs-0.2" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "openebs-openebs-0.2-cloudimg-console.log")] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem) - - # Run in dedicated deployment mode - if deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i - - # Get the Master IP to join the cluster. - vmCfg.vm.provision :trigger, - :force => true, - :stdout => true, - :stderr => true do |trigger| - trigger.fire do - info"Getting the Master IP to join the cluster..." - master_hostname = "omm-01" - - get_ip_address = %Q(vagrant ssh \ - #{master_hostname} -c 'ifconfig | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1') - - master_ip_address = `#{get_ip_address}` - - if master_ip_address == "" - info"The OpenEBS Maya Master is down, \ - bring it up and manually run: \ - configure_osh.sh script on OpenEBS Storage Host." 
- else - get_ip_address = %Q(vagrant ssh \ - #{hostname} -c 'ifconfig | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1') - - host_ip_address = `#{get_ip_address}` - - if MAYA_RELEASE_TAG == "" - @machine.communicate.sudo("bash \ - /home/ubuntu/demo/maya/scripts/configure_osh.sh \ - --masterip=#{master_ip_address.strip}") - else - @machine.communicate.sudo("bash \ - /home/ubuntu/demo/maya/scripts/configure_osh.sh \ - --masterip=#{master_ip_address.strip} \ - --releasetag=#{MAYA_RELEASE_TAG}") - end - - @machine.communicate.sudo("echo \ - 'export NOMAD_ADDR=http://#{host_ip_address.strip}:4646' >> \ - /home/ubuntu/.profile") - @machine.communicate.sudo("echo \ - 'export MAPI_ADDR=http://#{host_ip_address.strip}:5656' >> \ - /home/ubuntu/.profile") - - info"Fetching the latest jiva image" - - @machine.communicate.sudo("docker pull \ - openebs/jiva") - end - end - end - end - end - end - end -end diff --git a/k8s/lib/vagrant/test/k8s/Vagrantfile b/k8s/lib/vagrant/test/k8s/Vagrantfile deleted file mode 100644 index b2493b2c1e..0000000000 --- a/k8s/lib/vagrant/test/k8s/Vagrantfile +++ /dev/null @@ -1,288 +0,0 @@ -# -*- mode: ruby -*- -# vi: set ft=ruby : - -# This is parameterized Vagrantfile, that can used for any of the following: -# - Launch VMs auto-configured with kubernetes cluster with dedicated openebs -# - Launch VMs auto-configured with kubernetes cluster with hyperconverged openebs -# - Launch VMs for manual installation of kubernetes or maya clusters or both -# -# The configurable options include: -# - Specify the number of VMs / node types -# - Specify the CPU/RAM for each type of node -# - Specify the desired kubernetes cluster installation option - kubeadm, kargo, manual -# - Specify the base operating system - Ubuntu, CentOS, etc., -# - Specify the kubernetes pod network - flannel, weave, calico, etc,. -# - In case of dedicated, specify the storage network - host, etc., - -# Specify the OpenEBS Deployment Mode - dedicated=1 (default) or hyperconverged=2 -DEPLOY_MODE_NONE = 0 -DEPLOY_MODE_DEDICATED = 1 -DEPLOY_MODE_HC = 2 - -distro=ENV['DISTRIBUTION'] || "ubuntu" - -# Changed DEPLOY_MODE from Constant to Local variable deploy_Mode to avoid rewrite warnings on constants. -deploy_Mode=ENV['OPENEBS_DEPLOY_MODE'] || 2 - -# Specify the release versions to be installed -MAYA_RELEASE_TAG = ENV['MAYA_RELEASE_TAG'] || "0.2" - -# TODO - Verify -# LC_ALL is not set on OsX, it will inherit it from SSH on Linux though, -# so might want it to be a conditional -ENV['LC_ALL']="en_US.UTF-8" - -# Specify the number of Kubernetes Master nodes, CPU/RAM per node. -KM_NODES = ENV['KM_NODES'] || 1 -KM_MEM = ENV['KM_MEM'] || 2048 -KM_CPUS = ENV['KM_CPUS'] || 2 - -# Specify the number of Kubernetes Minion nodes, CPU/RAM per node. -KH_NODES = ENV['KH_NODES'] || 2 -KH_MEM = ENV['KH_MEM'] || 2048 -KH_CPUS = ENV['KH_CPUS'] || 2 - -# Specify the number of OpenEBS Master nodes, CPU/RAM per node. (Applicable only for dedicated mode) -MM_NODES = ENV['MM_NODES'] || 1 -MM_MEM = ENV['MM_MEM'] || 1024 -MM_CPUS = ENV['MM_CPUS'] || 1 - -# Specify the number of OpenEBS Storage Host nodes, CPU/RAM per node. (Applicable only for dedicated mode) -MH_NODES = ENV['MH_NODES'] || 2 -MH_MEM = ENV['MH_MEM'] || 1024 -MH_CPUS = ENV['MH_CPUS'] || 1 - -# Don't touch below this line, unless you know what you're doing! - -# Vagrantfile API/syntax version. 
-VAGRANTFILE_API_VERSION = "2" -Vagrant.require_version ">= 1.9.1" - -# Local Variables -machine_ip_address = %Q(ip addr show | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1) - -master_ip_address = "" -host_ip_address = "" -get_token_name = "" -token_name = "" -get_token = "" -token = "" - -required_plugins = %w(vagrant-cachier vagrant-triggers) - -required_plugins.each do |plugin| - need_restart = false - unless Vagrant.has_plugin? plugin - system "vagrant plugin install #{plugin}" - need_restart = true - end - exec "vagrant #{ARGV.join(' ')}" if need_restart -end - - -def configureVM(vmCfg, hostname, cpus, mem, distro) - - # Default timeout is 300 sec for boot. - # Uncomment the following line and set the desired timeout value. - # vmCfg.vm.boot_timeout = 300 - vmCfg.vm.hostname = hostname - vmCfg.vm.network "private_network", type: "dhcp" - - # Needed for ubuntu/xenial64 packaged boxes - # The default user is ubuntu for those boxes - vmCfg.ssh.username = "ubuntu" - vmCfg.vm.provision "shell", inline: <<-SHELL - echo "ubuntu:ubuntu" | sudo chpasswd - SHELL - - # Adding Vagrant-cachier - if Vagrant.has_plugin?("vagrant-cachier") - vmCfg.cache.scope = :machine - vmCfg.cache.enable :apt - vmCfg.cache.enable :gem - end - - # Set resources w.r.t Virtualbox provider - vmCfg.vm.provider "virtualbox" do |vb| - # Uncomment the following line, to launch the Virtual Box console. - # Useful for debugging cases, where the VM doesn't allow login into console - # vb.gui = true - vb.memory = mem - vb.cpus = cpus - vb.customize ["modifyvm", :id, "--cableconnected1", "on"] - end - - return vmCfg -end - -# Entry point of this Vagrantfile -Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| - # Don't look for updates - if Vagrant.has_plugin?("vagrant-vbguest") then - config.vbguest.auto_update = false - end - - # Check for invalid deployment modes and default to DEDICATED MODE. - if ((deploy_Mode.to_i < DEPLOY_MODE_NONE.to_i) || \ - (deploy_Mode.to_i > DEPLOY_MODE_HC.to_i)) - - puts "Invalid value set for OPENEBS_DEPLOY_MODE" - puts "Usage: OPENEBS_DEPLOY_MODE=0 for NONE" - puts "Usage: OPENEBS_DEPLOY_MODE=1 for DEDICATED" - puts "Usage: OPENEBS_DEPLOY_MODE=2 for HYPERCONVERGED" - puts "Defaulting to DEDICATED MODE..." - puts "Do you want to continue?(y/n):" - input = STDIN.gets.chomp - while 1 do - if(input == "n") - Kernel.exit!(0) - elsif(input == "y") - break - else - puts "Invalid input: type 'y' or 'n'" - input = STDIN.gets.chomp - end - end - - deploy_Mode = 1 - end - - # K8s Master related only !! - 1.upto(KM_NODES.to_i) do |i| - hostname = "kubemaster-%02d" % [i] - cpus = KM_CPUS - mem = KM_MEM - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/k8s-test-box" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "kubemaster-%02d-console.log" % [i] )] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem, distro) - - # Run in dedicated deployment mode or hyperconverged mode - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_HC.to_i)) - - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/prepare_network.sh", - run: "always", - privileged: true - - # Install Kubernetes Master. 
- vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_master.sh", - privileged: true - - # Setup K8s Credentials - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_cred.sh", - privileged: false - - # Setup Weave - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_cni.sh", - privileged: false - - # Setup Dashboard - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_dashboard.sh", - privileged: false - end - end - end - - # K8s Minions related only !! - 1.upto(KH_NODES.to_i) do |i| - hostname = "kubeminion-%02d" % [i] - cpus = KH_CPUS - mem = KH_MEM - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/k8s-test-box" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "kubeminion-%02d-console.log" % [i] )] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem, distro) - - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/prepare_network.sh", - run: "always", - privileged: true - - # Run in dedicated deployment mode or hyperconverged mode - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_HC.to_i)) - - # This runs only when the VM is first provisioned. - # Get the Master IP to join the cluster. - vmCfg.vm.provision :trigger, - :force => true, - :stdout => true, - :stderr => true do |trigger| - trigger.fire do - info"Getting the Master IP and Token to join the cluster..." - master_hostname = "kubemaster-01" - - get_ip_address = %Q(vagrant ssh \ - #{master_hostname} \ - -c 'ip addr show | grep -oP \ - "inet \\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP \"\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1') - - master_ip_address = `#{get_ip_address}` - if master_ip_address == "" - info"The Kubernetes Master is down, \ - bring it up and manually run: \ - configure_k8s_host.sh script on Kubernetes Minion." - else - get_token_name = %Q(vagrant ssh \ - #{master_hostname} -c \ - 'kubectl -n kube-system get secrets \ - | grep bootstrap-token | cut -d " " -f1\ - | cut -d "-" -f3\ - | sed "s|\r||g" \ - ') - - token_name = `#{get_token_name}` - - get_token = %Q(vagrant ssh \ - #{master_hostname} -c \ - 'kubectl -n kube-system get secrets bootstrap-token-#{token_name.strip} \ - -o yaml | grep token-secret | cut -d ":" -f2 \ - | cut -d " " -f2 | base64 -d \ - | sed "s|{||g;s|}||g" \ - | sed "s|:|.|g" | xargs echo') - - token = `#{get_token}` - - if token != "" - get_token_sha = %Q(vagrant ssh \ - #{master_hostname} \ - -c 'openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \ - | openssl rsa -pubin -outform der 2>/dev/null \ - | openssl dgst -sha256 -hex \ - | sed "s/^.* //"') - - token_sha = `#{get_token_sha}` - info"Using Master IP - #{master_ip_address.strip}" - info"Using Token - #{token_name.strip}.#{token.strip}" - info"Using Discovery Token SHA - #{token_sha.strip}" - - @machine.communicate.sudo("bash \ - /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_host.sh \ - --masterip=#{master_ip_address.strip} \ - --token=#{token_name.strip}.#{token.strip} \ - --token-sha=#{token_sha.strip}") - else - info"Invalid Token. Check your Kubernetes setup." 
- end - end - end - end - end - end - end -end diff --git a/k8s/openebs-alertmanager.yaml b/k8s/openebs-alertmanager.yaml deleted file mode 100644 index b40d61aa52..0000000000 --- a/k8s/openebs-alertmanager.yaml +++ /dev/null @@ -1,99 +0,0 @@ ---- -kind: ConfigMap -apiVersion: v1 -metadata: - name: openebs-alertmanager-config - namespace: openebs -data: - config.yml: |- - global: - smtp_smarthost: 'localhost:25' - smtp_from: 'alertmanager@openebs.io' - smtp_auth_username: 'alertmanager' - smtp_auth_password: 'password' - - templates: - - '/etc/alertmanager-templates/*.tmpl' - - route: - group_by: ['alertname', 'cluster', 'service'] - group_wait: 10s - group_interval: 1m - repeat_interval: 5m - receiver: default - - routes: - - receiver: devops - continue: true - match: - team: devops - - inhibit_rules: - - source_match: - severity: 'critical' - target_match: - severity: 'warning' - equal: ['alertname', 'cluster', 'service'] - - receivers: - - name: 'default' - - - name: 'devops' - email_configs: - - to: devops@testemail.io ---- -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: alertmanager - namespace: openebs -spec: - replicas: 1 - selector: - matchLabels: - app: alertmanager - template: - metadata: - name: alertmanager - labels: - app: alertmanager - spec: - containers: - - name: alertmanager - image: prom/alertmanager:v0.18.0 - args: - - '--config.file=/etc/alertmanager/config.yml' - - '--storage.path=/alertmanager' - ports: - - name: alertmanager - containerPort: 9093 - volumeMounts: - - name: config-volume - mountPath: /etc/alertmanager - - name: alertmanager - mountPath: /alertmanager - volumes: - - name: config-volume - configMap: - name: openebs-alertmanager-config - - name: alertmanager - emptyDir: {} ---- -apiVersion: v1 -kind: Service -metadata: - annotations: - prometheus.io/scrape: 'true' - prometheus.io/path: '/metrics' - labels: - name: alertmanager - name: alertmanager - namespace: openebs -spec: - selector: - app: alertmanager - ports: - - name: alertmanager - protocol: TCP - port: 9093 - targetPort: 9093 diff --git a/k8s/openebs-kube-state-metrics.json b/k8s/openebs-kube-state-metrics.json deleted file mode 100644 index ca15b26713..0000000000 --- a/k8s/openebs-kube-state-metrics.json +++ /dev/null @@ -1,1400 +0,0 @@ -{ - "__inputs": [ - { - "name": "DS_OPENEBS", - "label": "prometheus", - "description": "", - "type": "datasource", - "pluginId": "prometheus", - "pluginName": "Prometheus" - } - ], - "__requires": [ - { - "type": "grafana", - "id": "grafana", - "name": "Grafana", - "version": "4.1.1" - }, - { - "type": "panel", - "id": "graph", - "name": "Graph", - "version": "" - }, - { - "type": "datasource", - "id": "prometheus", - "name": "Prometheus", - "version": "1.0.0" - } - ], - "annotations": { - "list": [] - }, - "editable": true, - "gnetId": 1471, - "graphTooltip": 1, - "hideControls": false, - "id": null, - "links": [], - "refresh": "30s", - "rows": [ - { - "collapse": false, - "height": "250px", - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_OPENEBS}", - "editable": true, - "error": false, - "fill": 1, - "grid": {}, - "id": 3, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": false, - "rightSide": true, - "show": true, - "sort": "avg", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - 
"seriesOverrides": [], - "span": 6, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum(irate(http_requests_total{app=\"$container\", handler!=\"prometheus\", kubernetes_namespace=\"$namespace\"}[30s])) by (kubernetes_namespace,app,code)", - "interval": "", - "intervalFactor": 1, - "legendFormat": "native | {{code}}", - "refId": "A", - "step": 10 - }, - { - "expr": "sum(irate(nginx_http_requests_total{app=\"$container\", kubernetes_namespace=\"$namespace\"}[30s])) by (kubernetes_namespace,app,status)", - "interval": "", - "intervalFactor": 1, - "legendFormat": "nginx | {{status}}", - "refId": "B", - "step": 10 - }, - { - "expr": "sum(irate(haproxy_backend_http_responses_total{app=\"$container\", kubernetes_namespace=\"$namespace\"}[30s])) by (app,kubernetes_namespace,code)", - "interval": "", - "intervalFactor": 1, - "legendFormat": "haproxy | {{code}}", - "refId": "C", - "step": 10 - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Request rate", - "tooltip": { - "msResolution": true, - "shared": false, - "sort": 0, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "ops", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ] - }, - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_OPENEBS}", - "fill": 1, - "id": 15, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": false, - "rightSide": true, - "show": true, - "sort": "avg", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "null", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 6, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum(irate(haproxy_backend_http_responses_total{app=\"$container\", kubernetes_namespace=\"$namespace\",code=\"5xx\"}[30s])) by (app,kubernetes_namespace) / sum(irate(haproxy_backend_http_responses_total{app=\"$container\", kubernetes_namespace=\"$namespace\"}[30s])) by (app,kubernetes_namespace)", - "interval": "", - "intervalFactor": 2, - "legendFormat": "haproxy", - "refId": "A", - "step": 20 - }, - { - "expr": "sum(irate(http_requests_total{app=\"$container\", handler!=\"prometheus\", kubernetes_namespace=\"$namespace\", code=~\"5[0-9]+\"}[30s])) by (kubernetes_namespace,app) / sum(irate(http_requests_total{app=\"$container\", handler!=\"prometheus\", kubernetes_namespace=\"$namespace\"}[30s])) by (kubernetes_namespace,app)", - "intervalFactor": 2, - "legendFormat": "native", - "refId": "B", - "step": 20 - }, - { - "expr": "sum(irate(nginx_http_requests_total{app=\"$container\", kubernetes_namespace=\"$namespace\", status=~\"5[0-9]+\"}[30s])) by (kubernetes_namespace,app) / sum(irate(nginx_http_requests_total{app=\"$container\", kubernetes_namespace=\"$namespace\"}[30s])) by (kubernetes_namespace,app)", - "intervalFactor": 2, - "legendFormat": "nginx", - "refId": "C", - "step": 20 - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Error rate", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - 
"format": "percentunit", - "label": null, - "logBase": 1, - "max": null, - "min": "0", - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ] - } - ], - "repeat": null, - "repeatIteration": null, - "repeatRowId": null, - "showTitle": false, - "title": "Request rate", - "titleSize": "h6" - }, - { - "collapse": true, - "height": 224, - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_OPENEBS}", - "editable": true, - "error": false, - "fill": 1, - "grid": {}, - "id": 5, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": false, - "rightSide": true, - "show": true, - "sort": "max", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 12, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "histogram_quantile(0.99, sum(rate(http_request_duration_seconds_bucket{app=\"$container\", kubernetes_namespace=\"$namespace\"}[30s])) by (app,kubernetes_namespace,le))", - "intervalFactor": 1, - "legendFormat": "native | 0.99", - "refId": "A", - "step": 1 - }, - { - "expr": "histogram_quantile(0.90, sum(rate(http_request_duration_seconds_bucket{app=\"$container\", kubernetes_namespace=\"$namespace\"}[30s])) by (app,kubernetes_namespace,le))", - "intervalFactor": 1, - "legendFormat": "native | 0.90", - "refId": "B", - "step": 1 - }, - { - "expr": "histogram_quantile(0.5, sum(rate(http_request_duration_seconds_bucket{app=\"$container\", kubernetes_namespace=\"$namespace\"}[30s])) by (app,kubernetes_namespace,le))", - "interval": "", - "intervalFactor": 1, - "legendFormat": "native | 0.50", - "refId": "C", - "step": 1 - }, - { - "expr": "histogram_quantile(0.99, sum(rate(nginx_http_request_duration_seconds_bucket{app=\"$container\", kubernetes_namespace=\"$namespace\"}[30s])) by (app,kubernetes_namespace,le))", - "intervalFactor": 1, - "legendFormat": "nginx | 0.99", - "refId": "D", - "step": 1 - }, - { - "expr": "histogram_quantile(0.9, sum(rate(nginx_http_request_duration_seconds_bucket{app=\"$container\", kubernetes_namespace=\"$namespace\"}[30s])) by (app,kubernetes_namespace,le))", - "intervalFactor": 1, - "legendFormat": "nginx | 0.90", - "refId": "E", - "step": 1 - }, - { - "expr": "histogram_quantile(0.5, sum(rate(nginx_http_request_duration_seconds_bucket{app=\"$container\", kubernetes_namespace=\"$namespace\"}[30s])) by (app,kubernetes_namespace,le))", - "intervalFactor": 1, - "legendFormat": "nginx | 0.50", - "refId": "F", - "step": 1 - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Response time percentiles", - "tooltip": { - "msResolution": true, - "shared": true, - "sort": 0, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "s", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ] - } - ], - "repeat": null, - "repeatIteration": null, - "repeatRowId": null, - "showTitle": false, - "title": "Response time", - "titleSize": "h6" - }, - { - "collapse": false, - "height": 250, - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": 
"${DS_OPENEBS}", - "editable": true, - "error": false, - "fill": 1, - "id": 7, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": false, - "rightSide": true, - "show": true, - "sort": "avg", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 12, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "count(count(container_memory_usage_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (pod_name))", - "interval": "", - "intervalFactor": 1, - "legendFormat": "pods", - "refId": "A", - "step": 5 - }, - { - "expr": "count(count(container_memory_usage_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (kubernetes_io_hostname))", - "interval": "", - "intervalFactor": 2, - "legendFormat": "hosts", - "refId": "B", - "step": 10 - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Number of pods", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": "0", - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ] - } - ], - "repeat": null, - "repeatIteration": null, - "repeatRowId": null, - "showTitle": false, - "title": "Pod count", - "titleSize": "h6" - }, - { - "collapse": false, - "height": 250, - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_OPENEBS}", - "editable": true, - "error": false, - "fill": 1, - "id": 12, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": false, - "rightSide": true, - "show": true, - "sort": "avg", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [ - { - "alias": "elasticsearch-logging-data-20170207a (logging) - system", - "color": "#BF1B00" - }, - { - "alias": "elasticsearch-logging-data-20170207a (logging) - user", - "color": "#508642" - } - ], - "span": 12, - "stack": true, - "steppedLine": false, - "targets": [ - { - "expr": "sum(irate(container_cpu_system_seconds_total{container_name=\"$container\", namespace=\"$namespace\"}[30s])) by (namespace,container_name) / sum(container_spec_cpu_shares{container_name=\"$container\", namespace=\"$namespace\"} / 1024) by (namespace,container_name)", - "intervalFactor": 2, - "legendFormat": "system", - "refId": "C", - "step": 10 - }, - { - "expr": "sum(irate(container_cpu_user_seconds_total{container_name=\"$container\", namespace=\"$namespace\"}[30s])) by (namespace,container_name) / sum(container_spec_cpu_shares{container_name=\"$container\", namespace=\"$namespace\"} / 1024) by (namespace,container_name)", - "interval": "", - "intervalFactor": 2, - "legendFormat": "user", - "refId": "B", - "step": 10 - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Cpu usage (relative to request)", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 0, - "value_type": "individual" - 
}, - "type": "graph", - "xaxis": { - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "percentunit", - "label": "", - "logBase": 1, - "max": "1", - "min": "0", - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ] - } - ], - "repeat": null, - "repeatIteration": null, - "repeatRowId": null, - "showTitle": false, - "title": "Usage relative to request", - "titleSize": "h6" - }, - { - "collapse": true, - "height": 250, - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_OPENEBS}", - "editable": true, - "error": false, - "fill": 1, - "id": 10, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": false, - "rightSide": true, - "show": true, - "sort": "avg", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 6, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum(irate(container_cpu_usage_seconds_total{container_name=\"$container\", namespace=\"$namespace\"}[30s])) by (namespace,container_name) / sum(container_spec_cpu_quota{container_name=\"$container\", namespace=\"$namespace\"} / container_spec_cpu_period{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name)", - "interval": "", - "intervalFactor": 1, - "legendFormat": "actual", - "metric": "", - "refId": "A", - "step": 1 - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Cpu usage (relative to limit)", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "percentunit", - "label": "", - "logBase": 1, - "max": "1", - "min": "0", - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ] - }, - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_OPENEBS}", - "editable": true, - "error": false, - "fill": 1, - "id": 11, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": false, - "rightSide": true, - "show": true, - "sort": "avg", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 6, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum(container_memory_usage_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name) / sum(container_spec_memory_limit_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name)", - "interval": "", - "intervalFactor": 1, - "legendFormat": "actual", - "refId": "A", - "step": 1 - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Memory usage (relative to limit)", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "percentunit", 
- "label": null, - "logBase": 1, - "max": "1", - "min": "0", - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ] - } - ], - "repeat": null, - "repeatIteration": null, - "repeatRowId": null, - "showTitle": false, - "title": "Usage relative to limit", - "titleSize": "h6" - }, - { - "collapse": true, - "height": 250, - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_OPENEBS}", - "fill": 1, - "id": 13, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": false, - "rightSide": true, - "show": true, - "sort": "avg", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "null", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 6, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum(irate(container_cpu_usage_seconds_total{container_name=\"$container\", namespace=\"$namespace\"}[30s])) by (id,pod_name)", - "interval": "", - "intervalFactor": 2, - "legendFormat": "{{pod_name}}", - "refId": "A", - "step": 2 - }, - { - "expr": "sum(container_spec_cpu_quota{container_name=\"$container\", namespace=\"$namespace\"} / container_spec_cpu_period{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name) / count(container_memory_usage_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name) ", - "intervalFactor": 2, - "legendFormat": "limit", - "refId": "B", - "step": 2 - }, - { - "expr": "sum(container_spec_cpu_shares{container_name=\"$container\", namespace=\"$namespace\"} / 1024) by (namespace,container_name) / count(container_spec_cpu_shares{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name) ", - "intervalFactor": 2, - "legendFormat": "request", - "refId": "C", - "step": 2 - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Cpu usage (per pod)", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "short", - "label": "cores", - "logBase": 1, - "max": null, - "min": "0", - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ] - }, - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_OPENEBS}", - "fill": 1, - "id": 14, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": false, - "rightSide": true, - "show": true, - "sort": "avg", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "null", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 6, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum(container_memory_usage_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (id,pod_name)", - "interval": "", - "intervalFactor": 2, - "legendFormat": "{{pod_name}}", - "refId": "A", - "step": 2 - }, - { - "expr": "sum(container_spec_memory_limit_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name) / count(container_memory_usage_bytes{container_name=\"$container\", 
namespace=\"$namespace\"}) by (container_name,namespace)", - "intervalFactor": 2, - "legendFormat": "limit", - "refId": "B", - "step": 2 - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Memory usage (per pod)", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "bytes", - "label": null, - "logBase": 1, - "max": null, - "min": "0", - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ] - } - ], - "repeat": null, - "repeatIteration": null, - "repeatRowId": null, - "showTitle": false, - "title": "Usage per pod", - "titleSize": "h6" - }, - { - "collapse": true, - "height": 250, - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_OPENEBS}", - "editable": true, - "error": false, - "fill": 1, - "id": 8, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": false, - "rightSide": true, - "show": true, - "sort": "avg", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 6, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum(irate(container_cpu_usage_seconds_total{container_name=\"$container\", namespace=\"$namespace\"}[30s])) by (namespace,container_name) / count(container_memory_usage_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name) ", - "interval": "", - "intervalFactor": 1, - "legendFormat": "actual", - "refId": "A", - "step": 1 - }, - { - "expr": "sum(container_spec_cpu_quota{container_name=\"$container\", namespace=\"$namespace\"} / container_spec_cpu_period{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name) / count(container_memory_usage_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name) ", - "interval": "", - "intervalFactor": 1, - "legendFormat": "limit", - "refId": "B", - "step": 1 - }, - { - "expr": "sum(container_spec_cpu_shares{container_name=\"$container\", namespace=\"$namespace\"} / 1024) by (namespace,container_name) / count(container_spec_cpu_shares{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name) ", - "interval": "", - "intervalFactor": 1, - "legendFormat": "request", - "refId": "C", - "step": 1 - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Cpu usage (avg per pod)", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "none", - "label": "cores", - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ] - }, - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_OPENEBS}", - "editable": true, - "error": false, - "fill": 1, - "id": 9, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": false, - "rightSide": true, - "show": true, - "sort": "avg", - "sortDesc": true, - "total": 
false, - "values": true - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 6, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum(container_memory_usage_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name) / count(container_memory_usage_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name) ", - "intervalFactor": 1, - "legendFormat": "actual", - "metric": "", - "refId": "A", - "step": 1 - }, - { - "expr": "sum(container_spec_memory_limit_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name) / count(container_memory_usage_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name) ", - "interval": "", - "intervalFactor": 1, - "legendFormat": "limit", - "refId": "B", - "step": 1 - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Memory usage (avg per pod)", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "bytes", - "label": null, - "logBase": 1, - "max": null, - "min": "0", - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ] - } - ], - "repeat": null, - "repeatIteration": null, - "repeatRowId": null, - "showTitle": false, - "title": "Usage per pod (average)", - "titleSize": "h6" - }, - { - "collapse": true, - "height": 259.4375, - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_OPENEBS}", - "editable": true, - "error": false, - "fill": 1, - "grid": {}, - "id": 1, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": false, - "rightSide": true, - "show": true, - "sort": "avg", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 6, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum(irate(container_cpu_usage_seconds_total{container_name=\"$container\", namespace=\"$namespace\"}[30s])) by (namespace,container_name)", - "hide": false, - "interval": "", - "intervalFactor": 1, - "legendFormat": "actual", - "metric": "", - "refId": "A", - "step": 1 - }, - { - "expr": "sum(container_spec_cpu_quota{container_name=\"$container\", namespace=\"$namespace\"} / container_spec_cpu_period{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name)", - "intervalFactor": 1, - "legendFormat": "limit", - "refId": "B", - "step": 1 - }, - { - "expr": "sum(container_spec_cpu_shares{container_name=\"$container\", namespace=\"$namespace\"} / 1024) by (namespace,container_name) ", - "intervalFactor": 1, - "legendFormat": "request", - "refId": "C", - "step": 1 - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Cpu usage (total)", - "tooltip": { - "msResolution": true, - "shared": false, - "sort": 0, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": 
[ - { - "format": "none", - "label": "cores", - "logBase": 1, - "max": null, - "min": 0, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ] - }, - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_OPENEBS}", - "editable": true, - "error": false, - "fill": 1, - "grid": {}, - "id": 2, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": false, - "rightSide": true, - "show": true, - "sort": "avg", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 6, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum(container_memory_usage_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name)", - "interval": "", - "intervalFactor": 1, - "legendFormat": "actual", - "refId": "A", - "step": 1 - }, - { - "expr": "sum(container_spec_memory_limit_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name)", - "intervalFactor": 1, - "legendFormat": "limit", - "refId": "B", - "step": 1 - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Memory usage (total)", - "tooltip": { - "msResolution": true, - "shared": false, - "sort": 0, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "bytes", - "label": null, - "logBase": 1, - "max": null, - "min": "0", - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ] - } - ], - "repeat": null, - "repeatIteration": null, - "repeatRowId": null, - "showTitle": false, - "title": "Usage total", - "titleSize": "h6" - } - ], - "schemaVersion": 14, - "style": "dark", - "tags": [], - "templating": { - "list": [ - { - "allValue": ".+", - "current": {}, - "datasource": "${DS_OPENEBS}", - "hide": 0, - "includeAll": false, - "label": null, - "multi": false, - "name": "namespace", - "options": [], - "query": "label_values(container_memory_usage_bytes{namespace=~\".+\",container_name!=\"POD\"},namespace)", - "refresh": 1, - "regex": "", - "sort": 1, - "tagValuesQuery": null, - "tags": [], - "tagsQuery": null, - "type": "query", - "useTags": false - }, - { - "allValue": ".+", - "current": {}, - "datasource": "${DS_OPENEBS}", - "hide": 0, - "includeAll": false, - "label": null, - "multi": false, - "name": "container", - "options": [], - "query": "label_values(container_memory_usage_bytes{namespace=~\"$namespace\",container_name!=\"POD\"},container_name)", - "refresh": 1, - "regex": "", - "sort": 1, - "tagValuesQuery": null, - "tags": [], - "tagsQuery": null, - "type": "query", - "useTags": false - } - ] - }, - "time": { - "from": "now-3h", - "to": "now" - }, - "timepicker": { - "refresh_intervals": [ - "5s", - "10s", - "30s", - "1m", - "5m", - "15m", - "30m", - "1h", - "2h", - "1d" - ], - "time_options": [ - "5m", - "15m", - "1h", - "6h", - "12h", - "24h", - "2d", - "7d", - "30d" - ] - }, - "timezone": "browser", - "title": "Kubernetes App Metrics", - "version": 37, - "description": "After selecting your namespace and container you get a wealth of metrics like request rate, error rate, response times, pod count, cpu and memory usage. 
You can view cpu and memory usage in a variety of ways, compared to the limit, compared to the request, per pod, average per pod, etc."
-}
diff --git a/k8s/openebs-kube-state-metrics.yaml b/k8s/openebs-kube-state-metrics.yaml
deleted file mode 100644
index a9578d63b1..0000000000
--- a/k8s/openebs-kube-state-metrics.yaml
+++ /dev/null
@@ -1,60 +0,0 @@
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  labels:
-    k8s-app: kube-state-metrics
-  name: kube-state-metrics
-  namespace: openebs
-spec:
-  selector:
-    matchLabels:
-      k8s-app: kube-state-metrics
-  replicas: 1
-  template:
-    metadata:
-      labels:
-        k8s-app: kube-state-metrics
-    spec:
-      serviceAccountName: openebs-maya-operator
-      containers:
-      - name: kube-state-metrics
-        image: quay.io/coreos/kube-state-metrics:v1.7.2
-        ports:
-        - name: http-metrics
-          containerPort: 8080
-        - name: telemetry
-          containerPort: 8081
-        livenessProbe:
-          httpGet:
-            path: /healthz
-            port: 8080
-          initialDelaySeconds: 5
-          timeoutSeconds: 5
-        readinessProbe:
-          httpGet:
-            path: /
-            port: 8080
-          initialDelaySeconds: 5
-          timeoutSeconds: 5
----
-apiVersion: v1
-kind: Service
-metadata:
-  name: kube-state-metrics
-  namespace: openebs
-  labels:
-    k8s-app: kube-state-metrics
-  annotations:
-    prometheus.io/scrape: 'true'
-spec:
-  ports:
-  - name: http-metrics
-    port: 8080
-    targetPort: http-metrics
-    protocol: TCP
-  - name: telemetry
-    port: 8081
-    targetPort: telemetry
-    protocol: TCP
-  selector:
-    k8s-app: kube-state-metrics
diff --git a/k8s/openebs-kubelet-cAdvisor.json b/k8s/openebs-kubelet-cAdvisor.json
deleted file mode 100644
index 1bd341403e..0000000000
--- a/k8s/openebs-kubelet-cAdvisor.json
+++ /dev/null
@@ -1,2079 +0,0 @@
-{
-  "__inputs": [
-    {
-      "name": "DS_OPENEBS",
-      "label": "Prometheus",
-      "description": "",
-      "type": "datasource",
-      "pluginId": "prometheus",
-      "pluginName": "Prometheus"
-    }
-  ],
-  "__requires": [
-    {
-      "type": "panel",
-      "id": "graph",
-      "name": "Graph",
-      "version": ""
-    },
-    {
-      "type": "panel",
-      "id": "singlestat",
-      "name": "Singlestat",
-      "version": ""
-    },
-    {
-      "type": "grafana",
-      "id": "grafana",
-      "name": "Grafana",
-      "version": "3.1.1"
-    },
-    {
-      "type": "datasource",
-      "id": "prometheus",
-      "name": "Prometheus",
-      "version": "1.3.0"
-    }
-  ],
-  "id": null,
-  "title": "Kubernetes cAdvisor metrics",
-  "description": "Shows overall cluster CPU / Memory / Filesystem usage as well as individual pod, containers, systemd services statistics. 
Uses cAdvisor metrics only.", - "tags": [ - "kubernetes" - ], - "style": "dark", - "timezone": "browser", - "editable": true, - "hideControls": false, - "sharedCrosshair": false, - "rows": [ - { - "collapse": false, - "editable": true, - "height": "200px", - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_OPENEBS}", - "decimals": 2, - "editable": true, - "error": false, - "fill": 1, - "grid": { - "threshold1": null, - "threshold1Color": "rgba(216, 200, 27, 0.27)", - "threshold2": null, - "threshold2Color": "rgba(234, 112, 112, 0.22)", - "thresholdLine": false - }, - "height": "200px", - "id": 32, - "isNew": true, - "legend": { - "alignAsTable": false, - "avg": true, - "current": true, - "max": false, - "min": false, - "rightSide": false, - "show": false, - "sideWidth": 200, - "sort": "current", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 12, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum (rate (container_network_receive_bytes_total{kubernetes_io_hostname=~\"^$Node$\"}[1m]))", - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "Received", - "metric": "network", - "refId": "A", - "step": 10 - }, - { - "expr": "- sum (rate (container_network_transmit_bytes_total{kubernetes_io_hostname=~\"^$Node$\"}[1m]))", - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "Sent", - "metric": "network", - "refId": "B", - "step": 10 - } - ], - "timeFrom": null, - "timeShift": null, - "title": "Network I/O pressure", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 0, - "value_type": "cumulative" - }, - "transparent": false, - "type": "graph", - "xaxis": { - "show": true - }, - "yaxes": [ - { - "format": "Bps", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "Bps", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ] - } - ], - "title": "Network I/O pressure" - }, - { - "collapse": false, - "editable": true, - "height": "250px", - "panels": [ - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": true, - "colors": [ - "rgba(50, 172, 45, 0.97)", - "rgba(237, 129, 40, 0.89)", - "rgba(245, 54, 54, 0.9)" - ], - "datasource": "${DS_OPENEBS}", - "editable": true, - "error": false, - "format": "percent", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": true, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "height": "180px", - "id": 4, - "interval": null, - "isNew": true, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "span": 4, - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": false - }, - "targets": [ - { - "expr": "sum (container_memory_working_set_bytes{id=\"/\",kubernetes_io_hostname=~\"^$Node$\"}) / sum (machine_memory_bytes{kubernetes_io_hostname=~\"^$Node$\"}) * 100", - "interval": "10s", - "intervalFactor": 1, - "refId": "A", - "step": 10 - } - ], - 
"thresholds": "65, 90", - "title": "Cluster memory usage", - "transparent": false, - "type": "singlestat", - "valueFontSize": "80%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "current" - }, - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": true, - "colors": [ - "rgba(50, 172, 45, 0.97)", - "rgba(237, 129, 40, 0.89)", - "rgba(245, 54, 54, 0.9)" - ], - "datasource": "${DS_OPENEBS}", - "decimals": 2, - "editable": true, - "error": false, - "format": "percent", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": true, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "height": "180px", - "id": 6, - "interval": null, - "isNew": true, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "span": 4, - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": false - }, - "targets": [ - { - "expr": "sum (rate (container_cpu_usage_seconds_total{id=\"/\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) / sum (machine_cpu_cores{kubernetes_io_hostname=~\"^$Node$\"}) * 100", - "interval": "10s", - "intervalFactor": 1, - "refId": "A", - "step": 10 - } - ], - "thresholds": "65, 90", - "title": "Cluster CPU usage (1m avg)", - "type": "singlestat", - "valueFontSize": "80%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "current" - }, - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": true, - "colors": [ - "rgba(50, 172, 45, 0.97)", - "rgba(237, 129, 40, 0.89)", - "rgba(245, 54, 54, 0.9)" - ], - "datasource": "${DS_OPENEBS}", - "decimals": 2, - "editable": true, - "error": false, - "format": "percent", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": true, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "height": "180px", - "id": 7, - "interval": null, - "isNew": true, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "span": 4, - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": false - }, - "targets": [ - { - "expr": "sum (container_fs_usage_bytes{device=~\"^/dev/[sv]d[a-z][1-9]$\",id=\"/\",kubernetes_io_hostname=~\"^$Node$\"}) / sum (container_fs_limit_bytes{device=~\"^/dev/[sv]d[a-z][1-9]$\",id=\"/\",kubernetes_io_hostname=~\"^$Node$\"}) * 100", - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "", - "metric": "", - "refId": "A", - "step": 10 - } - ], - "thresholds": "65, 90", - "title": "Cluster filesystem usage", - "type": "singlestat", - "valueFontSize": "80%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "current" - }, - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": false, - "colors": [ - "rgba(50, 172, 45, 0.97)", - 
"rgba(237, 129, 40, 0.89)", - "rgba(245, 54, 54, 0.9)" - ], - "datasource": "${DS_OPENEBS}", - "decimals": 2, - "editable": true, - "error": false, - "format": "bytes", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "height": "1px", - "id": 9, - "interval": null, - "isNew": true, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": "", - "postfixFontSize": "20%", - "prefix": "", - "prefixFontSize": "20%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "span": 2, - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": false - }, - "targets": [ - { - "expr": "sum (container_memory_working_set_bytes{id=\"/\",kubernetes_io_hostname=~\"^$Node$\"})", - "interval": "10s", - "intervalFactor": 1, - "refId": "A", - "step": 10 - } - ], - "thresholds": "", - "title": "Used", - "type": "singlestat", - "valueFontSize": "50%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "current" - }, - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": false, - "colors": [ - "rgba(50, 172, 45, 0.97)", - "rgba(237, 129, 40, 0.89)", - "rgba(245, 54, 54, 0.9)" - ], - "datasource": "${DS_OPENEBS}", - "decimals": 2, - "editable": true, - "error": false, - "format": "bytes", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "height": "1px", - "id": 10, - "interval": null, - "isNew": true, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "span": 2, - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": false - }, - "targets": [ - { - "expr": "sum (machine_memory_bytes{kubernetes_io_hostname=~\"^$Node$\"})", - "interval": "10s", - "intervalFactor": 1, - "refId": "A", - "step": 10 - } - ], - "thresholds": "", - "title": "Total", - "type": "singlestat", - "valueFontSize": "50%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "current" - }, - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": false, - "colors": [ - "rgba(50, 172, 45, 0.97)", - "rgba(237, 129, 40, 0.89)", - "rgba(245, 54, 54, 0.9)" - ], - "datasource": "${DS_OPENEBS}", - "decimals": 2, - "editable": true, - "error": false, - "format": "none", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "height": "1px", - "id": 11, - "interval": null, - "isNew": true, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": " cores", - "postfixFontSize": "30%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - 
"from": "null", - "text": "N/A", - "to": "null" - } - ], - "span": 2, - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": false - }, - "targets": [ - { - "expr": "sum (rate (container_cpu_usage_seconds_total{id=\"/\",kubernetes_io_hostname=~\"^$Node$\"}[1m]))", - "interval": "10s", - "intervalFactor": 1, - "refId": "A", - "step": 10 - } - ], - "thresholds": "", - "title": "Used", - "type": "singlestat", - "valueFontSize": "50%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "current" - }, - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": false, - "colors": [ - "rgba(50, 172, 45, 0.97)", - "rgba(237, 129, 40, 0.89)", - "rgba(245, 54, 54, 0.9)" - ], - "datasource": "${DS_OPENEBS}", - "decimals": 2, - "editable": true, - "error": false, - "format": "none", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "height": "1px", - "id": 12, - "interval": null, - "isNew": true, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": " cores", - "postfixFontSize": "30%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "span": 2, - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": false - }, - "targets": [ - { - "expr": "sum (machine_cpu_cores{kubernetes_io_hostname=~\"^$Node$\"})", - "interval": "10s", - "intervalFactor": 1, - "refId": "A", - "step": 10 - } - ], - "thresholds": "", - "title": "Total", - "type": "singlestat", - "valueFontSize": "50%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "current" - }, - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": false, - "colors": [ - "rgba(50, 172, 45, 0.97)", - "rgba(237, 129, 40, 0.89)", - "rgba(245, 54, 54, 0.9)" - ], - "datasource": "${DS_OPENEBS}", - "decimals": 2, - "editable": true, - "error": false, - "format": "bytes", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "height": "1px", - "id": 13, - "interval": null, - "isNew": true, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "span": 2, - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": false - }, - "targets": [ - { - "expr": "sum (container_fs_usage_bytes{device=~\"^/dev/[sv]d[a-z][1-9]$\",id=\"/\",kubernetes_io_hostname=~\"^$Node$\"})", - "interval": "10s", - "intervalFactor": 1, - "refId": "A", - "step": 10 - } - ], - "thresholds": "", - "title": "Used", - "type": "singlestat", - "valueFontSize": "50%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "current" - }, - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": 
false, - "colors": [ - "rgba(50, 172, 45, 0.97)", - "rgba(237, 129, 40, 0.89)", - "rgba(245, 54, 54, 0.9)" - ], - "datasource": "${DS_OPENEBS}", - "decimals": 2, - "editable": true, - "error": false, - "format": "bytes", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "height": "1px", - "id": 14, - "interval": null, - "isNew": true, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "span": 2, - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": false - }, - "targets": [ - { - "expr": "sum (container_fs_limit_bytes{device=~\"^/dev/[sv]d[a-z][1-9]$\",id=\"/\",kubernetes_io_hostname=~\"^$Node$\"})", - "interval": "10s", - "intervalFactor": 1, - "refId": "A", - "step": 10 - } - ], - "thresholds": "", - "title": "Total", - "type": "singlestat", - "valueFontSize": "50%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "current" - } - ], - "showTitle": false, - "title": "Total usage" - }, - { - "collapse": false, - "editable": true, - "height": "250px", - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_OPENEBS}", - "decimals": 3, - "editable": true, - "error": false, - "fill": 0, - "grid": { - "threshold1": null, - "threshold1Color": "rgba(216, 200, 27, 0.27)", - "threshold2": null, - "threshold2Color": "rgba(234, 112, 112, 0.22)" - }, - "height": "", - "id": 17, - "isNew": true, - "legend": { - "alignAsTable": true, - "avg": true, - "current": true, - "max": false, - "min": false, - "rightSide": true, - "show": true, - "sort": "current", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 12, - "stack": false, - "steppedLine": true, - "targets": [ - { - "expr": "sum (rate (container_cpu_usage_seconds_total{image!=\"\",name=~\"^k8s_.*\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (pod_name)", - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "{{ pod_name }}", - "metric": "container_cpu", - "refId": "A", - "step": 10 - } - ], - "timeFrom": null, - "timeShift": null, - "title": "Pods CPU usage (1m avg)", - "tooltip": { - "msResolution": true, - "shared": true, - "sort": 2, - "value_type": "cumulative" - }, - "transparent": false, - "type": "graph", - "xaxis": { - "show": true - }, - "yaxes": [ - { - "format": "none", - "label": "cores", - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ] - } - ], - "showTitle": false, - "title": "Pods CPU usage" - }, - { - "collapse": true, - "editable": true, - "height": "250px", - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_OPENEBS}", - "decimals": 3, - "editable": true, - "error": false, - "fill": 0, - "grid": { - "threshold1": null, - "threshold1Color": "rgba(216, 200, 27, 0.27)", - "threshold2": null, - "threshold2Color": 
"rgba(234, 112, 112, 0.22)" - }, - "height": "", - "id": 23, - "isNew": true, - "legend": { - "alignAsTable": true, - "avg": true, - "current": true, - "max": false, - "min": false, - "rightSide": true, - "show": true, - "sort": "current", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 12, - "stack": false, - "steppedLine": true, - "targets": [ - { - "expr": "sum (rate (container_cpu_usage_seconds_total{systemd_service_name!=\"\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (systemd_service_name)", - "hide": false, - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "{{ systemd_service_name }}", - "metric": "container_cpu", - "refId": "A", - "step": 10 - } - ], - "timeFrom": null, - "timeShift": null, - "title": "System services CPU usage (1m avg)", - "tooltip": { - "msResolution": true, - "shared": true, - "sort": 2, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "show": true - }, - "yaxes": [ - { - "format": "none", - "label": "cores", - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ] - } - ], - "title": "System services CPU usage" - }, - { - "collapse": true, - "editable": true, - "height": "250px", - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_OPENEBS}", - "decimals": 3, - "editable": true, - "error": false, - "fill": 0, - "grid": { - "threshold1": null, - "threshold1Color": "rgba(216, 200, 27, 0.27)", - "threshold2": null, - "threshold2Color": "rgba(234, 112, 112, 0.22)" - }, - "height": "", - "id": 24, - "isNew": true, - "legend": { - "alignAsTable": true, - "avg": true, - "current": true, - "hideEmpty": false, - "hideZero": false, - "max": false, - "min": false, - "rightSide": true, - "show": true, - "sideWidth": null, - "sort": "current", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 12, - "stack": false, - "steppedLine": true, - "targets": [ - { - "expr": "sum (rate (container_cpu_usage_seconds_total{image!=\"\",name=~\"^k8s_.*\",container_name!=\"POD\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (container_name, pod_name)", - "hide": false, - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "pod: {{ pod_name }} | {{ container_name }}", - "metric": "container_cpu", - "refId": "A", - "step": 10 - }, - { - "expr": "sum (rate (container_cpu_usage_seconds_total{image!=\"\",name!~\"^k8s_.*\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (kubernetes_io_hostname, name, image)", - "hide": false, - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "docker: {{ kubernetes_io_hostname }} | {{ image }} ({{ name }})", - "metric": "container_cpu", - "refId": "B", - "step": 10 - }, - { - "expr": "sum (rate (container_cpu_usage_seconds_total{rkt_container_name!=\"\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (kubernetes_io_hostname, rkt_container_name)", - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "rkt: {{ kubernetes_io_hostname }} | {{ rkt_container_name }}", - "metric": "container_cpu", - "refId": "C", - "step": 10 - } - ], - "timeFrom": null, 
- "timeShift": null, - "title": "Containers CPU usage (1m avg)", - "tooltip": { - "msResolution": true, - "shared": true, - "sort": 2, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "show": true - }, - "yaxes": [ - { - "format": "none", - "label": "cores", - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ] - } - ], - "title": "Containers CPU usage" - }, - { - "collapse": true, - "editable": true, - "height": "500px", - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_OPENEBS}", - "decimals": 3, - "editable": true, - "error": false, - "fill": 0, - "grid": { - "threshold1": null, - "threshold1Color": "rgba(216, 200, 27, 0.27)", - "threshold2": null, - "threshold2Color": "rgba(234, 112, 112, 0.22)" - }, - "id": 20, - "isNew": true, - "legend": { - "alignAsTable": true, - "avg": true, - "current": true, - "max": false, - "min": false, - "rightSide": false, - "show": true, - "sort": "current", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 12, - "stack": false, - "steppedLine": true, - "targets": [ - { - "expr": "sum (rate (container_cpu_usage_seconds_total{id!=\"/\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (id)", - "hide": false, - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "{{ id }}", - "metric": "container_cpu", - "refId": "A", - "step": 10 - } - ], - "timeFrom": null, - "timeShift": null, - "title": "All processes CPU usage (1m avg)", - "tooltip": { - "msResolution": true, - "shared": true, - "sort": 2, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "show": true - }, - "yaxes": [ - { - "format": "none", - "label": "cores", - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ] - } - ], - "repeat": null, - "showTitle": false, - "title": "All processes CPU usage" - }, - { - "collapse": false, - "editable": true, - "height": "250px", - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_OPENEBS}", - "decimals": 2, - "editable": true, - "error": false, - "fill": 0, - "grid": { - "threshold1": null, - "threshold1Color": "rgba(216, 200, 27, 0.27)", - "threshold2": null, - "threshold2Color": "rgba(234, 112, 112, 0.22)" - }, - "id": 25, - "isNew": true, - "legend": { - "alignAsTable": true, - "avg": true, - "current": true, - "max": false, - "min": false, - "rightSide": true, - "show": true, - "sideWidth": 200, - "sort": "current", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 12, - "stack": false, - "steppedLine": true, - "targets": [ - { - "expr": "sum (container_memory_working_set_bytes{image!=\"\",name=~\"^k8s_.*\",kubernetes_io_hostname=~\"^$Node$\"}) by (pod_name)", - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "{{ pod_name }}", - "metric": "container_memory_usage:sort_desc", - "refId": "A", - "step": 10 - } - ], - "timeFrom": null, - "timeShift": null, - "title": "Pods memory usage", - "tooltip": 
{ - "msResolution": false, - "shared": true, - "sort": 2, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "show": true - }, - "yaxes": [ - { - "format": "bytes", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ] - } - ], - "title": "Pods memory usage" - }, - { - "collapse": true, - "editable": true, - "height": "250px", - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_OPENEBS}", - "decimals": 2, - "editable": true, - "error": false, - "fill": 0, - "grid": { - "threshold1": null, - "threshold1Color": "rgba(216, 200, 27, 0.27)", - "threshold2": null, - "threshold2Color": "rgba(234, 112, 112, 0.22)" - }, - "id": 26, - "isNew": true, - "legend": { - "alignAsTable": true, - "avg": true, - "current": true, - "max": false, - "min": false, - "rightSide": true, - "show": true, - "sideWidth": 200, - "sort": "current", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 12, - "stack": false, - "steppedLine": true, - "targets": [ - { - "expr": "sum (container_memory_working_set_bytes{systemd_service_name!=\"\",kubernetes_io_hostname=~\"^$Node$\"}) by (systemd_service_name)", - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "{{ systemd_service_name }}", - "metric": "container_memory_usage:sort_desc", - "refId": "A", - "step": 10 - } - ], - "timeFrom": null, - "timeShift": null, - "title": "System services memory usage", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 2, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "show": true - }, - "yaxes": [ - { - "format": "bytes", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ] - } - ], - "title": "System services memory usage" - }, - { - "collapse": true, - "editable": true, - "height": "250px", - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_OPENEBS}", - "decimals": 2, - "editable": true, - "error": false, - "fill": 0, - "grid": { - "threshold1": null, - "threshold1Color": "rgba(216, 200, 27, 0.27)", - "threshold2": null, - "threshold2Color": "rgba(234, 112, 112, 0.22)" - }, - "id": 27, - "isNew": true, - "legend": { - "alignAsTable": true, - "avg": true, - "current": true, - "max": false, - "min": false, - "rightSide": true, - "show": true, - "sideWidth": 200, - "sort": "current", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 12, - "stack": false, - "steppedLine": true, - "targets": [ - { - "expr": "sum (container_memory_working_set_bytes{image!=\"\",name=~\"^k8s_.*\",container_name!=\"POD\",kubernetes_io_hostname=~\"^$Node$\"}) by (container_name, pod_name)", - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "pod: {{ pod_name }} | {{ container_name }}", - "metric": "container_memory_usage:sort_desc", - "refId": "A", - "step": 10 - }, - { - "expr": "sum 
(container_memory_working_set_bytes{image!=\"\",name!~\"^k8s_.*\",kubernetes_io_hostname=~\"^$Node$\"}) by (kubernetes_io_hostname, name, image)", - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "docker: {{ kubernetes_io_hostname }} | {{ image }} ({{ name }})", - "metric": "container_memory_usage:sort_desc", - "refId": "B", - "step": 10 - }, - { - "expr": "sum (container_memory_working_set_bytes{rkt_container_name!=\"\",kubernetes_io_hostname=~\"^$Node$\"}) by (kubernetes_io_hostname, rkt_container_name)", - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "rkt: {{ kubernetes_io_hostname }} | {{ rkt_container_name }}", - "metric": "container_memory_usage:sort_desc", - "refId": "C", - "step": 10 - } - ], - "timeFrom": null, - "timeShift": null, - "title": "Containers memory usage", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 2, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "show": true - }, - "yaxes": [ - { - "format": "bytes", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ] - } - ], - "title": "Containers memory usage" - }, - { - "collapse": true, - "editable": true, - "height": "500px", - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_OPENEBS}", - "decimals": 2, - "editable": true, - "error": false, - "fill": 0, - "grid": { - "threshold1": null, - "threshold1Color": "rgba(216, 200, 27, 0.27)", - "threshold2": null, - "threshold2Color": "rgba(234, 112, 112, 0.22)" - }, - "id": 28, - "isNew": true, - "legend": { - "alignAsTable": true, - "avg": true, - "current": true, - "max": false, - "min": false, - "rightSide": false, - "show": true, - "sideWidth": 200, - "sort": "current", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 12, - "stack": false, - "steppedLine": true, - "targets": [ - { - "expr": "sum (container_memory_working_set_bytes{id!=\"/\",kubernetes_io_hostname=~\"^$Node$\"}) by (id)", - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "{{ id }}", - "metric": "container_memory_usage:sort_desc", - "refId": "A", - "step": 10 - } - ], - "timeFrom": null, - "timeShift": null, - "title": "All processes memory usage", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 2, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "show": true - }, - "yaxes": [ - { - "format": "bytes", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ] - } - ], - "title": "All processes memory usage" - }, - { - "collapse": false, - "editable": true, - "height": "250px", - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_OPENEBS}", - "decimals": 2, - "editable": true, - "error": false, - "fill": 1, - "grid": { - "threshold1": null, - "threshold1Color": "rgba(216, 200, 27, 0.27)", - "threshold2": null, - "threshold2Color": "rgba(234, 112, 112, 0.22)" - }, - "id": 16, - "isNew": true, - "legend": { - "alignAsTable": true, - "avg": true, - "current": true, - "max": false, - "min": false, - "rightSide": true, - "show": true, - "sideWidth": 200, - "sort": 
"current", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 12, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum (rate (container_network_receive_bytes_total{image!=\"\",name=~\"^k8s_.*\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (pod_name)", - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "-> {{ pod_name }}", - "metric": "network", - "refId": "A", - "step": 10 - }, - { - "expr": "- sum (rate (container_network_transmit_bytes_total{image!=\"\",name=~\"^k8s_.*\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (pod_name)", - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "<- {{ pod_name }}", - "metric": "network", - "refId": "B", - "step": 10 - } - ], - "timeFrom": null, - "timeShift": null, - "title": "Pods network I/O (1m avg)", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 2, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "show": true - }, - "yaxes": [ - { - "format": "Bps", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ] - } - ], - "title": "Pods network I/O" - }, - { - "collapse": true, - "editable": true, - "height": "250px", - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_OPENEBS}", - "decimals": 2, - "editable": true, - "error": false, - "fill": 1, - "grid": { - "threshold1": null, - "threshold1Color": "rgba(216, 200, 27, 0.27)", - "threshold2": null, - "threshold2Color": "rgba(234, 112, 112, 0.22)" - }, - "id": 30, - "isNew": true, - "legend": { - "alignAsTable": true, - "avg": true, - "current": true, - "max": false, - "min": false, - "rightSide": true, - "show": true, - "sideWidth": 200, - "sort": "current", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 12, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum (rate (container_network_receive_bytes_total{image!=\"\",name=~\"^k8s_.*\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (container_name, pod_name)", - "hide": false, - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "-> pod: {{ pod_name }} | {{ container_name }}", - "metric": "network", - "refId": "B", - "step": 10 - }, - { - "expr": "- sum (rate (container_network_transmit_bytes_total{image!=\"\",name=~\"^k8s_.*\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (container_name, pod_name)", - "hide": false, - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "<- pod: {{ pod_name }} | {{ container_name }}", - "metric": "network", - "refId": "D", - "step": 10 - }, - { - "expr": "sum (rate (container_network_receive_bytes_total{image!=\"\",name!~\"^k8s_.*\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (kubernetes_io_hostname, name, image)", - "hide": false, - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "-> docker: {{ kubernetes_io_hostname }} | {{ image }} ({{ name }})", - "metric": "network", - "refId": "A", - "step": 10 - }, - { - "expr": "- sum (rate 
(container_network_transmit_bytes_total{image!=\"\",name!~\"^k8s_.*\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (kubernetes_io_hostname, name, image)", - "hide": false, - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "<- docker: {{ kubernetes_io_hostname }} | {{ image }} ({{ name }})", - "metric": "network", - "refId": "C", - "step": 10 - }, - { - "expr": "sum (rate (container_network_transmit_bytes_total{rkt_container_name!=\"\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (kubernetes_io_hostname, rkt_container_name)", - "hide": false, - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "-> rkt: {{ kubernetes_io_hostname }} | {{ rkt_container_name }}", - "metric": "network", - "refId": "E", - "step": 10 - }, - { - "expr": "- sum (rate (container_network_transmit_bytes_total{rkt_container_name!=\"\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (kubernetes_io_hostname, rkt_container_name)", - "hide": false, - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "<- rkt: {{ kubernetes_io_hostname }} | {{ rkt_container_name }}", - "metric": "network", - "refId": "F", - "step": 10 - } - ], - "timeFrom": null, - "timeShift": null, - "title": "Containers network I/O (1m avg)", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 2, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "show": true - }, - "yaxes": [ - { - "format": "Bps", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ] - } - ], - "title": "Containers network I/O" - }, - { - "collapse": true, - "editable": true, - "height": "500px", - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_OPENEBS}", - "decimals": 2, - "editable": true, - "error": false, - "fill": 1, - "grid": { - "threshold1": null, - "threshold1Color": "rgba(216, 200, 27, 0.27)", - "threshold2": null, - "threshold2Color": "rgba(234, 112, 112, 0.22)" - }, - "id": 29, - "isNew": true, - "legend": { - "alignAsTable": true, - "avg": true, - "current": true, - "max": false, - "min": false, - "rightSide": false, - "show": true, - "sideWidth": 200, - "sort": "current", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 12, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum (rate (container_network_receive_bytes_total{id!=\"/\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (id)", - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "-> {{ id }}", - "metric": "network", - "refId": "A", - "step": 10 - }, - { - "expr": "- sum (rate (container_network_transmit_bytes_total{id!=\"/\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (id)", - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "<- {{ id }}", - "metric": "network", - "refId": "B", - "step": 10 - } - ], - "timeFrom": null, - "timeShift": null, - "title": "All processes network I/O (1m avg)", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 2, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "show": true - }, - "yaxes": [ - { - "format": "Bps", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - 
"min": null, - "show": false - } - ] - } - ], - "title": "All processes network I/O" - } - ], - "time": { - "from": "now-5m", - "to": "now" - }, - "timepicker": { - "refresh_intervals": [ - "5s", - "10s", - "30s", - "1m", - "5m", - "15m", - "30m", - "1h", - "2h", - "1d" - ], - "time_options": [ - "5m", - "15m", - "1h", - "6h", - "12h", - "24h", - "2d", - "7d", - "30d" - ] - }, - "templating": { - "list": [ - { - "allValue": ".*", - "current": {}, - "datasource": "${DS_OPENEBS}", - "hide": 0, - "includeAll": true, - "multi": false, - "name": "Node", - "options": [], - "query": "label_values(kubernetes_io_hostname)", - "refresh": 1, - "type": "query" - } - ] - }, - "annotations": { - "list": [] - }, - "refresh": "10s", - "schemaVersion": 12, - "version": 13, - "links": [], - "gnetId": 315 -} diff --git a/k8s/openebs-monitoring-pg.yaml b/k8s/openebs-monitoring-pg.yaml deleted file mode 100644 index a0fcda8305..0000000000 --- a/k8s/openebs-monitoring-pg.yaml +++ /dev/null @@ -1,460 +0,0 @@ -# The following file is intended for deployments that are not already -# configured with prometheus. This is a minified version of the config -# from the files present under ./openebs-monitoring/ -# -# Prometheus tunables -apiVersion: v1 -kind: ConfigMap -metadata: - name: openebs-prometheus-tunables - namespace: openebs -data: - storage-retention: 24h ---- -# Define the openebs prometheus jobs -kind: ConfigMap -metadata: - name: openebs-prometheus-config - namespace: openebs -apiVersion: v1 -data: - prometheus.yml: |- - global: - external_labels: - slave: slave1 - scrape_interval: 5s - evaluation_interval: 5s - rule_files: - - "/etc/prometheus-rules/*.rules" - alerting: - alertmanagers: - - scheme: http - path_prefix: / - static_configs: - - targets: ['alertmanager:9093'] - scrape_configs: - - job_name: 'prometheus' - scheme: http - kubernetes_sd_configs: - - role: pod - relabel_configs: - - source_labels: [__meta_kubernetes_pod_label_name] - regex: openebs-prometheus-server - action: keep - - source_labels: [__meta_kubernetes_pod_name] - action: replace - target_label: kubernetes_pod_name - - job_name: 'openebs-volumes' - scheme: http - kubernetes_sd_configs: - - role: pod - relabel_configs: - - source_labels: [__meta_kubernetes_pod_label_monitoring] - regex: volume_exporter_prometheus - action: keep - - source_labels: [__meta_kubernetes_pod_name] - action: replace - target_label: kubernetes_pod_name - # Below entry ending with vsm is deprecated and is maintained for - # backward compatibility purpose. - - source_labels: [__meta_kubernetes_pod_label_vsm] - action: replace - target_label: openebs_pv - # Below entry is the correct entry. Though the above and below entries - # are having same target_label as openebs_pv, only one of them will be - # valid for any release. 
- - source_labels: [__meta_kubernetes_pod_label_openebs_io_persistent_volume] - action: replace - target_label: openebs_pv - - source_labels: [__meta_kubernetes_pod_container_port_number] - action: drop - regex: '(.*)9501' - - source_labels: [__meta_kubernetes_pod_container_port_number] - action: drop - regex: '(.*)3260' - - source_labels: [__meta_kubernetes_pod_container_port_number] - action: drop - regex: '(.*)80' - - job_name: 'openebs-pools' - scheme: http - kubernetes_sd_configs: - - role: pod - relabel_configs: - - source_labels: [__meta_kubernetes_pod_annotation_openebs_io_monitoring] - regex: pool_exporter_prometheus - action: keep - - source_labels: [__meta_kubernetes_pod_name] - action: replace - target_label: kubernetes_pod_name - - source_labels: [__meta_kubernetes_pod_label_openebs_io_storage_pool_claim] - action: replace - target_label: storage_pool_claim - - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port] - action: replace - regex: ([^:]+)(?::\d+)?;(\d+) - replacement: ${1}:${2} - target_label: __address__ - - job_name: 'node' - tls_config: - ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token - kubernetes_sd_configs: - - role: node - relabel_configs: - - action: labelmap - regex: __meta_kubernetes_node_label_(.+) - - source_labels: [__meta_kubernetes_role] - action: replace - target_label: kubernetes_role - - source_labels: [__address__] - regex: '(.*):10250' - replacement: '${1}:9100' - target_label: __address__ - - source_labels: [__meta_kubernetes_node_label_kubernetes_io_hostname] - target_label: __instance__ - - source_labels: [job] - regex: 'kubernetes-(.*)' - replacement: '${1}' - target_label: name - - job_name: 'mysqld' - tls_config: - ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token - kubernetes_sd_configs: - - role: pod - relabel_configs: - - source_labels: [__meta_kubernetes_pod_label_app] - regex: prometheus-mysql-exporter - action: keep - - job_name: 'kubernetes-nodes-cadvisor' - scrape_interval: 10s - scrape_timeout: 10s - scheme: https # remove if you want to scrape metrics on insecure port - tls_config: - ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token - kubernetes_sd_configs: - - role: node - relabel_configs: - - action: labelmap - regex: __meta_kubernetes_node_label_(.+) - # Only for Kubernetes ^1.7.3. 
- # See: https://github.com/prometheus/prometheus/issues/2916 - - target_label: __address__ - replacement: kubernetes.default.svc:443 - - source_labels: [__meta_kubernetes_node_name] - regex: (.+) - target_label: __metrics_path__ - replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor - metric_relabel_configs: - - action: replace - source_labels: [id] - regex: '^/machine\.slice/machine-rkt\\x2d([^\\]+)\\.+/([^/]+)\.service$' - target_label: rkt_container_name - replacement: '${2}-${1}' - - action: replace - source_labels: [id] - regex: '^/system\.slice/(.+)\.service$' - target_label: systemd_service_name - replacement: '${1}' - - job_name: 'kube-state-metrics' - static_configs: - - targets: ['kube-state-metrics.openebs.svc.cluster.local:8080'] --- -apiVersion: v1 -kind: ConfigMap -metadata: - name: openebs-prometheus-rules - labels: - name: openebs-prometheus-rules - namespace: openebs -data: - alert.rules: |- - groups: - - name: CPU - rules: - - alert: High CPU Load - expr: 100 - (avg by(instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 85 - for: 1m - labels: - team: devops - annotations: - summary: "High CPU load (instance {{ $labels.instance }})" - description: "CPU load is > 85%\n VALUE = {{ $value }}\n LABELS: {{ $labels }}" - - - name: Memory - rules: - - alert: High Memory Utilization - expr: (node_memory_MemFree_bytes + node_memory_Cached_bytes + node_memory_Buffers_bytes) / node_memory_MemTotal_bytes * 100 < 15 - for: 1m - labels: - team: devops - annotations: - summary: "Out of Memory (instance {{ $labels.instance }})" - description: "Memory is filling up (< 15% left)\n VALUE = {{ $value }}\n LABELS: {{ $labels }}" - - - name: Filesystem - rules: - - alert: No Root Disk Space Left - expr: node_filesystem_free_bytes{mountpoint ="/"} / node_filesystem_size_bytes{mountpoint ="/"} * 100 < 10 - for: 1m - labels: - team: devops - annotations: - summary: "Out of root disk space (instance {{ $labels.instance }})" - description: "Root Disk is almost full (< 10% left)\n VALUE = {{ $value }}\n LABELS: {{ $labels }}" - - alert: No Mounted Disk Space Left - expr: node_filesystem_free_bytes{mountpoint !="/"} / node_filesystem_size_bytes{mountpoint !="/"} * 100 < 10 - for: 1m - labels: - team: devops - annotations: - summary: "Out of mounted disk space (instance {{ $labels.instance }})" - description: "Mounted Disk is almost full (< 10% left)\n VALUE = {{ $value }}\n LABELS: {{ $labels }}" - - - name: Kubernetes - rules: - - alert: Pod CrashLoopBackOff - expr: kube_pod_container_status_waiting_reason{reason="CrashLoopBackOff"} > 0 - for: 1m - labels: - team: devops - annotations: - summary: "Pod '{{$labels.pod}}' in namespace '{{$labels.namespace}}' is in CrashLoopBackOff" - description: "A container named '{{$labels.container}}' in the pod '{{$labels.pod}}' in namespace '{{$labels.namespace}}' is experiencing restarts" - - - name: OpenEBS - rules: - - alert: OpenEBS Volume Not Available - expr: openebs_volume_status == 1 or openebs_volume_status == 4 - for: 1m - labels: - team: devops - annotations: - summary: "Volume '{{ $labels.openebs_pv }}' created for claim '{{ $labels.openebs_pvc }}' is not available" - description: "Volume '{{ $labels.openebs_pv }}' is offline, either because replica quorum is not met, the target is not running, or the backend storage is lost" ---- -# prometheus-deployment -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: openebs-prometheus - namespace: openebs -spec: - replicas: 1 - template: - metadata: - labels: - name:
openebs-prometheus-server - spec: - serviceAccountName: openebs-maya-operator - containers: - - name: prometheus - image: prom/prometheus:v2.11.0 - args: - - "--config.file=/etc/prometheus/conf/prometheus.yml" - # Metrics are stored in an emptyDir volume which - # exists as long as the Pod is running on that Node. - # The data in an emptyDir volume is safe across container crashes. - - "--storage.tsdb.path=/prometheus" - # How long to retain samples in the local storage. - - "--storage.tsdb.retention=$(STORAGE_RETENTION)" - ports: - - containerPort: 9090 - env: - # environment vars are loaded from the openebs-prometheus-tunables configmap. - - name: STORAGE_RETENTION - valueFrom: - configMapKeyRef: - name: openebs-prometheus-tunables - key: storage-retention - resources: - requests: - # A memory request of 128M means it will try to ensure a minimum - # of 128MB RAM. - memory: "128M" - # A cpu request of 128m means it will try to ensure a minimum of - # .128 CPU; where 1 CPU means: - # 1 *Hyperthread* on a bare-metal Intel processor with Hyperthreading - cpu: "128m" - limits: - memory: "700M" - cpu: "500m" - - volumeMounts: - # prometheus config file is stored in the given mountpath - - name: prometheus-server-volume - mountPath: /etc/prometheus/conf - # metrics collected by prometheus will be stored at the given mountpath. - - name: prometheus-storage-volume - mountPath: /prometheus - - name: prometheus-rules-volume - mountPath: /etc/prometheus-rules - volumes: - # Prometheus config file will be stored in this volume - - name: prometheus-server-volume - configMap: - name: openebs-prometheus-config - # Alert rules will be stored in this volume - - name: prometheus-rules-volume - configMap: - name: openebs-prometheus-rules - # All the time series are stored in this volume in the form of .db files. - - name: prometheus-storage-volume - # containers in the Pod can all read and write the same files here. - emptyDir: {} ---- -# prometheus-service -apiVersion: v1 -kind: Service -metadata: - name: openebs-prometheus-service - namespace: openebs -spec: - selector: - name: openebs-prometheus-server - type: NodePort - ports: - - port: 80 # this Service's port on the cluster-internal clusterIP - targetPort: 9090 # pods expose this port - nodePort: 32514 - # Note that this Service will be visible as both NodeIP:nodePort and clusterIP:Port ---- -apiVersion: v1 -kind: Service -metadata: - name: openebs-grafana - namespace: openebs -spec: - type: NodePort - ports: - - port: 3000 - targetPort: 3000 - nodePort: 32515 - selector: - app: openebs-grafana ---- -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - labels: - app: openebs-grafana - name: openebs-grafana - namespace: openebs -spec: - replicas: 1 - revisionHistoryLimit: 2 - template: - metadata: - labels: - app: openebs-grafana - spec: - containers: - - image: grafana/grafana:6.3.0 - name: grafana - ports: - - containerPort: 3000 - env: - - name: GF_AUTH_BASIC_ENABLED - value: "true" - - name: GF_AUTH_ANONYMOUS_ENABLED - value: "false" - livenessProbe: - httpGet: - path: / - port: 3000 - initialDelaySeconds: 30 - timeoutSeconds: 1 ---- -# node-exporter will be launched as a DaemonSet.
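- # (A DaemonSet runs one node-exporter pod on every node, which is what the
- # 'node' scrape job above relies on to reach host metrics on port 9100.
- # A quick post-deploy sanity check could be, illustratively:
- #   kubectl -n openebs get daemonset node-exporter
- #   curl http://<node-ip>:9100/metrics | head
- # where <node-ip> is any node registered with the cluster.)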
-apiVersion: extensions/v1beta1 -kind: DaemonSet -metadata: - name: node-exporter - namespace: openebs -spec: - template: - metadata: - labels: - app: node-exporter - name: node-exporter - spec: - containers: - #- image: prom/node-exporter:v0.18.1 - - image: quay.io/prometheus/node-exporter:v0.18.1 - args: - - --path.procfs=/host/proc - - --path.sysfs=/host/sys - - --path.rootfs=/host/root - - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib|run|boot|home/kubernetes/.+)($|/) - - --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$ - name: node-exporter - ports: - - containerPort: 9100 - hostPort: 9100 - name: scrape - resources: - requests: - # A memory request of 128M means it will try to ensure a minimum - # of 128MB RAM. - memory: "128M" - # A cpu request of 128m means it will try to ensure a minimum of - # .128 CPU; where 1 CPU means: - # 1 *Hyperthread* on a bare-metal Intel processor with Hyperthreading - cpu: "128m" - limits: - memory: "700M" - cpu: "500m" - volumeMounts: - # The host's /proc is mounted here so the collectors can read process stats - - name: proc - mountPath: /host/proc - readOnly: false - # The host's /sys is mounted here so the collectors can read device and kernel stats - - name: sys - mountPath: /host/sys - readOnly: false - - name: root - mountPath: /host/root - mountPropagation: HostToContainer - readOnly: true - # The Kubernetes scheduler’s default behavior works well for most cases - # -- for example, it ensures that pods are only placed on nodes that have - # sufficient free resources, it tries to spread pods from the same set - # (ReplicaSet, StatefulSet, etc.) across nodes, it tries to balance out - # the resource utilization of nodes, etc. - # - # But sometimes you want to control how your pods are scheduled. For example, - # perhaps you want to ensure that certain pods only schedule on nodes with - # specialized hardware, or you want to co-locate services that communicate - # frequently, or you want to dedicate a set of nodes to a particular set of - # users. Ultimately, you know much more about how your applications should be - # scheduled and deployed than Kubernetes ever will. - # - # “Taints and tolerations” allow you to mark (“taint”) a node so that no - # pods can schedule onto it unless a pod explicitly “tolerates” the taint. - # A toleration is particularly useful for situations where most pods in - # the cluster should avoid scheduling onto the node. In our case we want - # node-exporter to run on the master node as well, i.e., we want to collect - # metrics from the master node too; that's why the toleration below is added. - # If it is removed, the master node's metrics can't be scraped by prometheus.
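- # For example (hypothetical node name): a master tainted via
- #   kubectl taint nodes master-1 node-role.kubernetes.io/master:NoSchedule
- # would normally repel this DaemonSet's pods; the toleration below, with
- # operator: Exists and no key, matches any NoSchedule taint, so the pod is
- # still scheduled there.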
- tolerations: - - effect: NoSchedule - operator: Exists - volumes: - # A hostPath volume mounts a file or directory from the host node’s - # filesystem. For example, some uses for a hostPath are: - # running a container that needs access to Docker internals; use a hostPath - # of /var/lib/docker - # running cAdvisor in a container; use a hostPath of /dev/cgroups - - name: proc - hostPath: - path: /proc - - name: sys - hostPath: - path: /sys - - name: root - hostPath: - path: / - hostNetwork: true - hostPID: true diff --git a/k8s/openebs-node-exporter.json b/k8s/openebs-node-exporter.json deleted file mode 100644 index 0f98133dfc..0000000000 --- a/k8s/openebs-node-exporter.json +++ /dev/null @@ -1,1763 +0,0 @@ -{ - "__inputs": [ - { - "name": "DS_OPENEBS", - "label": "openebs", - "description": "", - "type": "datasource", - "pluginId": "prometheus", - "pluginName": "Prometheus" - } - ], - "__requires": [ - { - "type": "grafana", - "id": "grafana", - "name": "Grafana", - "version": "5.2.0" - }, - { - "type": "panel", - "id": "graph", - "name": "Graph", - "version": "5.0.0" - }, - { - "type": "datasource", - "id": "prometheus", - "name": "Prometheus", - "version": "5.0.0" - } - ], - "annotations": { - "list": [ - { - "builtIn": 1, - "datasource": "-- Grafana --", - "enable": true, - "hide": true, - "iconColor": "rgba(0, 211, 255, 1)", - "name": "Annotations & Alerts", - "type": "dashboard" - } - ] - }, - "description": "This is for Node Exporter version 0.16.0 or later.", - "editable": true, - "gnetId": null, - "graphTooltip": 0, - "id": 2, - "iteration": 1566449418510, - "links": [], - "panels": [ - { - "collapsed": false, - "gridPos": { - "h": 1, - "w": 24, - "x": 0, - "y": 0 - }, - "id": 30, - "panels": [], - "repeat": null, - "title": "Node Filesystem Stats", - "type": "row" - }, - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": false, - "colors": [ - "#299c46", - "rgba(237, 129, 40, 0.89)", - "#d44a3a" - ], - "format": "bytes", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "gridPos": { - "h": 6, - "w": 4, - "x": 0, - "y": 1 - }, - "id": 32, - "interval": null, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "options": {}, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": false, - "ymax": null, - "ymin": null - }, - "tableColumn": "", - "targets": [ - { - "expr": "node_filesystem_size_bytes {instance=~\"$instance\",mountpoint=~\"/\",fstype=~\"ext4|xfs\"}", - "instant": true, - "intervalFactor": 2, - "legendFormat": "{{mountpoint}}", - "refId": "A" - } - ], - "thresholds": "", - "timeFrom": null, - "timeShift": null, - "title": "Total Disk Space", - "type": "singlestat", - "valueFontSize": "80%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "avg" - }, - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": true, - "colors": [ - "#299c46", - "rgba(237, 129, 40, 0.89)", - "#d44a3a" - ], - "datasource": "${DS_OPENEBS}", - "format": "percent", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": true,
- "thresholdLabels": false, - "thresholdMarkers": true - }, - "gridPos": { - "h": 6, - "w": 4, - "x": 4, - "y": 1 - }, - "id": 34, - "interval": null, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "options": {}, - "pluginVersion": "6.3.0", - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": true, - "ymax": null, - "ymin": null - }, - "tableColumn": "", - "targets": [ - { - "expr": "100 - ((node_filesystem_avail_bytes{instance=~\"$instance\",mountpoint=\"/\",fstype=~\"ext4|xfs\"} * 100) / node_filesystem_size_bytes {instance=~\"$instance\",mountpoint=\"/\",fstype=~\"ext4|xfs\"})", - "interval": "10s", - "intervalFactor": 2, - "refId": "A" - } - ], - "thresholds": "70,90", - "timeFrom": null, - "timeShift": null, - "title": "Root partition usage", - "type": "singlestat", - "valueFontSize": "80%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "current" - }, - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": true, - "colors": [ - "#299c46", - "rgba(237, 129, 40, 0.89)", - "#d44a3a" - ], - "datasource": "${DS_OPENEBS}", - "decimals": 2, - "format": "short", - "gauge": { - "maxValue": 10000, - "minValue": 0, - "show": true, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "gridPos": { - "h": 6, - "w": 4, - "x": 8, - "y": 1 - }, - "id": 36, - "interval": null, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "options": {}, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": true, - "ymax": null, - "ymin": null - }, - "tableColumn": "", - "targets": [ - { - "expr": "node_filefd_allocated{instance=~\"$instance\"}", - "interval": "10s", - "intervalFactor": 2, - "refId": "A" - } - ], - "thresholds": "7000,9000", - "timeFrom": null, - "timeShift": null, - "title": "Currently Open File Descriptor", - "type": "singlestat", - "valueFontSize": "70%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "current" - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "${DS_OPENEBS}", - "fill": 3, - "fillGradient": 0, - "gridPos": { - "h": 6, - "w": 12, - "x": 12, - "y": 1 - }, - "id": 40, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "nullPointMode": "null", - "options": { - "dataLinks": [] - }, - "percentage": false, - "pointradius": 2, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "100 - 
((node_filesystem_avail_bytes{instance=~\"$instance\",mountpoint!~\"/\",fstype=~\"ext4|xfs\"} * 100) / node_filesystem_size_bytes {instance=~\"$instance\",mountpoint!~\"/\",fstype=~\"ext4|xfs\"})", - "interval": "10s", - "intervalFactor": 2, - "legendFormat": "{{mountpoint}}", - "refId": "A" - } - ], - "thresholds": [], - "timeFrom": null, - "timeRegions": [], - "timeShift": null, - "title": "Mounted Disk Utilization", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "percent", - "label": null, - "logBase": 1, - "max": "100", - "min": "0", - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "collapsed": true, - "gridPos": { - "h": 1, - "w": 24, - "x": 0, - "y": 7 - }, - "id": 20, - "panels": [ - { - "cacheTimeout": null, - "colorBackground": false, - "colorPostfix": false, - "colorPrefix": false, - "colorValue": false, - "colors": [ - "#96D98D", - "#5794F2", - "rgb(179, 129, 204)" - ], - "format": "none", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "gridPos": { - "h": 6, - "w": 2, - "x": 0, - "y": 2 - }, - "id": 26, - "interval": null, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "maxPerRow": 6, - "nullPointMode": "connected", - "nullText": null, - "options": {}, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "repeat": null, - "repeatDirection": "h", - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": false, - "ymax": null, - "ymin": null - }, - "tableColumn": "", - "targets": [ - { - "expr": "count(count(node_cpu_seconds_total{instance=~\"$instance\"}) without (mode,instance,job)) without (cpu) ", - "intervalFactor": 2, - "refId": "A" - } - ], - "thresholds": "", - "timeFrom": null, - "timeShift": null, - "title": "CPU Cores", - "type": "singlestat", - "valueFontSize": "80%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "avg" - }, - { - "aliasColors": { - "Available": "blue", - "Used": "dark-red" - }, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "${DS_OPENEBS}", - "decimals": 2, - "fill": 6, - "fillGradient": 0, - "gridPos": { - "h": 6, - "w": 10, - "x": 2, - "y": 2 - }, - "id": 28, - "legend": { - "avg": false, - "current": true, - "max": false, - "min": false, - "show": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 3, - "nullPointMode": "null", - "options": { - "dataLinks": [] - }, - "percentage": false, - "pointradius": 2, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "node_memory_MemTotal_bytes{instance=~\"$instance\"}", - "intervalFactor": 2, - "legendFormat": "Total memory", - "refId": "A" - }, - { - "expr": "node_memory_MemTotal_bytes{instance=~\"$instance\"} - node_memory_MemAvailable_bytes{instance=~\"$instance\"}", - "intervalFactor": 
2, - "legendFormat": "Used", - "refId": "B" - }, - { - "expr": "node_memory_MemAvailable_bytes{instance=~\"$instance\"}", - "intervalFactor": 2, - "legendFormat": "Available", - "refId": "D" - } - ], - "thresholds": [], - "timeFrom": null, - "timeRegions": [], - "timeShift": null, - "title": "Memory Stats ", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "bytes", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "${DS_OPENEBS}", - "decimals": 2, - "fill": 1, - "fillGradient": 0, - "gridPos": { - "h": 6, - "w": 12, - "x": 12, - "y": 2 - }, - "id": 22, - "legend": { - "avg": true, - "current": false, - "hideEmpty": false, - "hideZero": false, - "max": false, - "min": false, - "rightSide": false, - "show": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 1, - "nullPointMode": "null", - "options": { - "dataLinks": [] - }, - "percentage": false, - "pointradius": 2, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "avg(irate(node_cpu_seconds_total{instance=~\"$instance\",mode=\"system\"}[1m]))", - "intervalFactor": 2, - "legendFormat": "system ", - "refId": "A" - }, - { - "expr": "avg(irate(node_cpu_seconds_total{instance=~\"$instance\",mode=\"user\"}[1m]))", - "intervalFactor": 2, - "legendFormat": "user", - "refId": "B" - }, - { - "expr": "avg(irate(node_cpu_seconds_total{instance=~\"$instance\",mode=\"idle\"}[1m]))", - "intervalFactor": 2, - "legendFormat": "idle", - "refId": "C" - }, - { - "expr": "irate(node_disk_io_time_seconds_total{instance=~\"$instance\"}[1m])", - "intervalFactor": 2, - "legendFormat": "{{device}}_% of I/O operations per second", - "refId": "F" - }, - { - "expr": "avg(irate(node_cpu_seconds_total{instance=~\"$instance\",mode=\"iowait\"}[1m]))", - "intervalFactor": 2, - "legendFormat": "iowait", - "refId": "D" - } - ], - "thresholds": [], - "timeFrom": null, - "timeRegions": [], - "timeShift": null, - "title": "CPU Stats", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "percentunit", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "${DS_OPENEBS}", - "fill": 1, - "fillGradient": 0, - "gridPos": { - "h": 8, - "w": 12, - "x": 0, - "y": 8 - }, - "id": 24, - "legend": { - "avg": false, - "current": true, - "max": false, - "min": false, - "show": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "maxPerRow": 6, - "nullPointMode": "connected", - "options": { - "dataLinks": [] - }, - "percentage": false, - "pointradius": 2, - "points": false, - 
"renderer": "flot", - "repeat": null, - "repeatDirection": "h", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "node_load1{instance=~\"$instance\"}", - "intervalFactor": 2, - "legendFormat": "1 Min", - "refId": "A" - }, - { - "expr": "node_load5{instance=~\"$instance\"}", - "intervalFactor": 2, - "legendFormat": "5 Min", - "refId": "B" - }, - { - "expr": "node_load15{instance=~\"$instance\"}", - "intervalFactor": 2, - "legendFormat": "15 Min", - "refId": "C" - } - ], - "thresholds": [], - "timeFrom": null, - "timeRegions": [], - "timeShift": null, - "title": "Load", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "fill": 1, - "fillGradient": 0, - "gridPos": { - "h": 8, - "w": 12, - "x": 12, - "y": 8 - }, - "id": 38, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "nullPointMode": "connected", - "options": { - "dataLinks": [] - }, - "percentage": false, - "pointradius": 2, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "irate(node_context_switches_total{instance=~\"$instance\"}[5m])", - "intervalFactor": 2, - "legendFormat": "Context Switches", - "refId": "A" - } - ], - "thresholds": [], - "timeFrom": null, - "timeRegions": [], - "timeShift": null, - "title": "Context Switches", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - } - ], - "title": "Node CPU, Memory Stats", - "type": "row" - }, - { - "collapsed": true, - "gridPos": { - "h": 1, - "w": 24, - "x": 0, - "y": 8 - }, - "id": 16, - "panels": [ - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "${DS_OPENEBS}", - "fill": 1, - "fillGradient": 0, - "gridPos": { - "h": 9, - "w": 12, - "x": 0, - "y": 3 - }, - "id": 12, - "legend": { - "alignAsTable": false, - "avg": false, - "current": false, - "max": false, - "min": false, - "rightSide": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 2, - "maxPerRow": 6, - "nullPointMode": "connected", - "options": { - "dataLinks": [] - }, - "percentage": false, - "pointradius": 2, - "points": false, - "renderer": "flot", - "repeat": null, - "repeatDirection": "h", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": 
"rate(node_network_transmit_bytes_total{instance=\"$instance\",device!~\"lo\"}[5m])", - "intervalFactor": 2, - "legendFormat": "{{device}} out", - "refId": "A" - } - ], - "thresholds": [], - "timeFrom": null, - "timeRegions": [], - "timeShift": null, - "title": "Network Transmitted", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "bits", - "label": "", - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "fill": 1, - "fillGradient": 0, - "gridPos": { - "h": 9, - "w": 12, - "x": 12, - "y": 3 - }, - "id": 18, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 2, - "nullPointMode": "connected", - "options": { - "dataLinks": [] - }, - "percentage": false, - "pointradius": 2, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "rate(node_network_receive_bytes_total{instance=\"$instance\",device!~\"lo\"}[5m])", - "intervalFactor": 2, - "legendFormat": "{{device}} in", - "refId": "A" - } - ], - "thresholds": [], - "timeFrom": null, - "timeRegions": [], - "timeShift": null, - "title": "Network Received", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "bytes", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "bytes", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - } - ], - "title": "Node Network Stats", - "type": "row" - }, - { - "collapsed": true, - "gridPos": { - "h": 1, - "w": 24, - "x": 0, - "y": 9 - }, - "id": 14, - "panels": [ - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "${DS_OPENEBS}", - "editable": true, - "error": false, - "fill": 1, - "fillGradient": 0, - "grid": {}, - "gridPos": { - "h": 7, - "w": 8, - "x": 0, - "y": 4 - }, - "id": 3, - "legend": { - "alignAsTable": false, - "avg": false, - "current": false, - "max": false, - "min": false, - "rightSide": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 2, - "links": [], - "maxPerRow": 6, - "nullPointMode": "connected", - "options": { - "dataLinks": [] - }, - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "repeat": null, - "repeatDirection": "h", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "irate(node_disk_io_time_seconds_total{instance=~'$instance',device!~'^(md\\\\\\\\d+$|dm-)'}[5m])", - "format": "time_series", - "intervalFactor": 2, - "legendFormat": "{{device}}", - "refId": "A", - "step": 2 - } - ], - "thresholds": [], - "timeFrom": null, - "timeRegions": [], - "timeShift": null, - "title": "Disk Utilization per Device", - "tooltip": { - 
"msResolution": false, - "shared": true, - "sort": 0, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "percentunit", - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "${DS_OPENEBS}", - "editable": true, - "error": false, - "fill": 1, - "fillGradient": 0, - "grid": {}, - "gridPos": { - "h": 7, - "w": 8, - "x": 8, - "y": 4 - }, - "id": 5, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "options": { - "dataLinks": [] - }, - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "irate(node_disk_writes_completed_total{job='node',instance='$instance',device!~'^(md\\\\d+$|dm-)'}[5m])", - "format": "time_series", - "interval": "", - "intervalFactor": 2, - "legendFormat": "{{device}}", - "refId": "A", - "step": 2 - } - ], - "thresholds": [], - "timeFrom": null, - "timeRegions": [], - "timeShift": null, - "title": "Writes", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 0, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "decimals": null, - "format": "wps", - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "${DS_OPENEBS}", - "editable": true, - "error": false, - "fill": 1, - "fillGradient": 0, - "grid": {}, - "gridPos": { - "h": 7, - "w": 8, - "x": 16, - "y": 4 - }, - "id": 4, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "options": { - "dataLinks": [] - }, - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "irate(node_disk_reads_completed_total{job='node',instance='$instance',device!~'^(md\\\\d+$|dm-)'}[5m])", - "format": "time_series", - "interval": "", - "intervalFactor": 2, - "legendFormat": "{{device}}", - "refId": "A", - "step": 2 - } - ], - "thresholds": [], - "timeFrom": null, - "timeRegions": [], - "timeShift": null, - "title": "Reads", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 0, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "decimals": null, - "format": "rps", - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "logBase": 1, - "max": null, - "min": null, - 
"show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "${DS_OPENEBS}", - "editable": true, - "error": false, - "fill": 1, - "fillGradient": 0, - "grid": {}, - "gridPos": { - "h": 7, - "w": 8, - "x": 0, - "y": 11 - }, - "id": 6, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "options": { - "dataLinks": [] - }, - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "irate(node_disk_io_now{job='node',instance='$instance',device!~'^(md\\\\d+$|dm-)'}[5m])", - "format": "time_series", - "hide": false, - "intervalFactor": 2, - "legendFormat": "{{device}}", - "refId": "A", - "step": 2 - } - ], - "thresholds": [], - "timeFrom": null, - "timeRegions": [], - "timeShift": null, - "title": "IO Wait Queue", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 0, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "decimals": null, - "format": "ops", - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "${DS_OPENEBS}", - "editable": true, - "error": false, - "fill": 1, - "fillGradient": 0, - "grid": {}, - "gridPos": { - "h": 7, - "w": 8, - "x": 8, - "y": 11 - }, - "id": 8, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "options": { - "dataLinks": [] - }, - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "increase(node_disk_write_time_seconds_total{job='node',instance='$instance',device!~'^(md\\\\d+$|dm-)'}[5m])/increase(node_disk_writes_completed_total{job='node',instance='$instance',device!~'^(md\\\\d+$|dm-)'}[5m])", - "format": "time_series", - "hide": false, - "interval": "", - "intervalFactor": 2, - "legendFormat": "{{device}}", - "refId": "A", - "step": 2 - } - ], - "thresholds": [], - "timeFrom": null, - "timeRegions": [], - "timeShift": null, - "title": "Write Latency", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 0, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "decimals": null, - "format": "s", - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "${DS_OPENEBS}", - "editable": true, - "error": false, - "fill": 1, - "fillGradient": 
0, - "grid": {}, - "gridPos": { - "h": 7, - "w": 8, - "x": 16, - "y": 11 - }, - "id": 7, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "options": { - "dataLinks": [] - }, - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "increase(node_disk_read_time_seconds_total{job='node',instance='$instance',device!~'^(md\\\\d+$|dm-)'}[5m])/increase(node_disk_reads_completed_total{job='node',instance='$instance',device!~'^(md\\\\d+$|dm-)'}[5m])", - "format": "time_series", - "hide": false, - "interval": "", - "intervalFactor": 2, - "legendFormat": "{{device}}", - "refId": "A", - "step": 2 - } - ], - "thresholds": [], - "timeFrom": null, - "timeRegions": [], - "timeShift": null, - "title": "Read Latency", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 0, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "decimals": null, - "format": "s", - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - } - ], - "title": "Node Disk Stats", - "type": "row" - } - ], - "refresh": false, - "schemaVersion": 19, - "style": "dark", - "tags": [], - "templating": { - "list": [ - { - "allFormat": "glob", - "allValue": null, - "current": {}, - "datasource": "${DS_OPENEBS}", - "definition": "", - "hide": 0, - "hideLabel": false, - "includeAll": false, - "label": "Machine", - "multi": false, - "multiFormat": "glob", - "name": "instance", - "options": [], - "query": "up{job=\"node\"}", - "refresh": 1, - "regex": ".*instance=\"(.*?)\".*", - "skipUrlSync": false, - "sort": 0, - "tagValuesQuery": "", - "tags": [], - "tagsQuery": "", - "type": "query", - "useTags": false - } - ] - }, - "time": { - "from": "now-15m", - "to": "now" - }, - "timepicker": { - "now": true, - "refresh_intervals": [ - "5s", - "10s", - "30s", - "1m", - "5m", - "15m", - "30m", - "1h", - "2h", - "1d" - ], - "time_options": [ - "5m", - "15m", - "1h", - "6h", - "12h", - "24h", - "2d", - "7d", - "30d" - ] - }, - "timezone": "browser", - "title": "Node Stats", - "uid": "CZAzyIdZk", - "version": 2 -} diff --git a/k8s/openebs-operator.yaml b/k8s/openebs-operator.yaml deleted file mode 100644 index aa405c6faa..0000000000 --- a/k8s/openebs-operator.yaml +++ /dev/null @@ -1,1181 +0,0 @@ -# -# DEPRECATION NOTICE -# This operator file is deprecated in 2.11.0 in favour of individual operators -# for each storage engine and the file will be removed in version 3.0.0 -# -# Further specific components can be deploy using there individual operator yamls -# -# To deploy cStor: -# https://github.com/openebs/charts/blob/gh-pages/cstor-operator.yaml -# -# To deploy Jiva: -# https://github.com/openebs/charts/blob/gh-pages/jiva-operator.yaml -# -# To deploy Dynamic hostpath localpv provisioner: -# https://github.com/openebs/charts/blob/gh-pages/hostpath-operator.yaml -# -# -# This manifest deploys the OpenEBS control plane components, with associated CRs & RBAC rules -# NOTE: On GKE, deploy the openebs-operator.yaml in admin context - -# Create the OpenEBS 
namespace -apiVersion: v1 -kind: Namespace -metadata: - name: openebs ---- -# Create Maya Service Account -apiVersion: v1 -kind: ServiceAccount -metadata: - name: openebs-maya-operator - namespace: openebs ---- -# Define Role that allows operations on K8s pods/deployments -kind: ClusterRole -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: openebs-maya-operator -rules: -- apiGroups: ["*"] - resources: ["nodes", "nodes/proxy"] - verbs: ["*"] -- apiGroups: ["*"] - resources: ["namespaces", "services", "pods", "pods/exec", "deployments", "deployments/finalizers", "replicationcontrollers", "replicasets", "events", "endpoints", "configmaps", "secrets", "jobs", "cronjobs"] - verbs: ["*"] -- apiGroups: ["*"] - resources: ["statefulsets", "daemonsets"] - verbs: ["*"] -- apiGroups: ["*"] - resources: ["resourcequotas", "limitranges"] - verbs: ["list", "watch"] -- apiGroups: ["*"] - resources: ["ingresses", "horizontalpodautoscalers", "verticalpodautoscalers", "certificatesigningrequests"] - verbs: ["list", "watch"] -- apiGroups: ["*"] - resources: ["storageclasses", "persistentvolumeclaims", "persistentvolumes"] - verbs: ["*"] -- apiGroups: ["volumesnapshot.external-storage.k8s.io"] - resources: ["volumesnapshots", "volumesnapshotdatas"] - verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] -- apiGroups: ["apiextensions.k8s.io"] - resources: ["customresourcedefinitions"] - verbs: [ "get", "list", "create", "update", "delete", "patch"] -- apiGroups: ["openebs.io"] - resources: [ "*"] - verbs: ["*" ] -- apiGroups: ["cstor.openebs.io"] - resources: [ "*"] - verbs: ["*" ] -- apiGroups: ["coordination.k8s.io"] - resources: ["leases"] - verbs: ["get", "watch", "list", "delete", "update", "create"] -- apiGroups: ["admissionregistration.k8s.io"] - resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"] - verbs: ["get", "create", "list", "delete", "update", "patch"] -- nonResourceURLs: ["/metrics"] - verbs: ["get"] -- apiGroups: ["*"] - resources: ["poddisruptionbudgets"] - verbs: ["get", "list", "create", "delete", "watch"] ---- -# Bind the Service Account with the Role Privileges. -# TODO: Check if default account also needs to be there -kind: ClusterRoleBinding -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: openebs-maya-operator -subjects: -- kind: ServiceAccount - name: openebs-maya-operator - namespace: openebs -roleRef: - kind: ClusterRole - name: openebs-maya-operator - apiGroup: rbac.authorization.k8s.io ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: maya-apiserver - namespace: openebs - labels: - name: maya-apiserver - openebs.io/component-name: maya-apiserver - openebs.io/version: dev -spec: - selector: - matchLabels: - name: maya-apiserver - openebs.io/component-name: maya-apiserver - replicas: 1 - strategy: - type: Recreate - rollingUpdate: null - template: - metadata: - labels: - name: maya-apiserver - openebs.io/component-name: maya-apiserver - openebs.io/version: dev - spec: - serviceAccountName: openebs-maya-operator - containers: - - name: maya-apiserver - imagePullPolicy: IfNotPresent - image: openebs/m-apiserver:ci - ports: - - containerPort: 5656 - env: - # OPENEBS_IO_KUBE_CONFIG enables maya api service to connect to K8s - # based on this config. This is ignored if empty. - # This is supported for maya api server version 0.5.2 onwards - #- name: OPENEBS_IO_KUBE_CONFIG - # value: "/home/ubuntu/.kube/config" - # OPENEBS_IO_K8S_MASTER enables maya api service to connect to K8s - # based on this address. 
This is ignored if empty. - # This is supported for maya api server version 0.5.2 onwards - #- name: OPENEBS_IO_K8S_MASTER - # value: "http://172.28.128.3:8080" - # OPENEBS_NAMESPACE provides the namespace of this deployment as an - # environment variable - - name: OPENEBS_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - # OPENEBS_SERVICE_ACCOUNT provides the service account of this pod as - # environment variable - - name: OPENEBS_SERVICE_ACCOUNT - valueFrom: - fieldRef: - fieldPath: spec.serviceAccountName - # OPENEBS_MAYA_POD_NAME provides the name of this pod as - # environment variable - - name: OPENEBS_MAYA_POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - # If OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG is false then OpenEBS default - # storageclass and storagepool will not be created. - - name: OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG - value: "true" - # OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL decides whether default cstor sparse pool should be - # configured as a part of openebs installation. - # If "true" a default cstor sparse pool will be configured, if "false" it will not be configured. - # This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG - # is set to true - - name: OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL - value: "false" - # OPENEBS_IO_INSTALL_CRD environment variable is used to enable/disable CRD installation - # from Maya API server. By default the CRDs will be installed - # - name: OPENEBS_IO_INSTALL_CRD - # value: "true" - # OPENEBS_IO_BASE_DIR is used to configure base directory for openebs on host path. - # Where OpenEBS can store required files. Default base path will be /var/openebs - # - name: OPENEBS_IO_BASE_DIR - # value: "/var/openebs" - # OPENEBS_IO_CSTOR_TARGET_DIR can be used to specify the hostpath - # to be used for saving the shared content between the side cars - # of cstor volume pod. - # The default path used is /var/openebs/sparse - #- name: OPENEBS_IO_CSTOR_TARGET_DIR - # value: "/var/openebs/sparse" - # OPENEBS_IO_CSTOR_POOL_SPARSE_DIR can be used to specify the hostpath - # to be used for saving the shared content between the side cars - # of cstor pool pod. This ENV is also used to indicate the location - # of the sparse devices. 
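The OPENEBS_NAMESPACE, OPENEBS_SERVICE_ACCOUNT and OPENEBS_MAYA_POD_NAME entries above all use the Kubernetes downward API to project pod metadata into the container, which is what lets a single manifest work unchanged in any namespace. A minimal, self-contained sketch of that pattern (the pod name and image below are hypothetical placeholders, not part of this operator):

```yaml
# Minimal sketch of the downward-API env pattern used throughout this
# operator; the pod name and image are hypothetical placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  containers:
  - name: demo
    image: busybox:1.36
    command: ["sh", "-c", "echo $MY_POD_NAME running in $MY_NAMESPACE; sleep 3600"]
    env:
    # each value is resolved by the kubelet when the container starts
    - name: MY_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_SERVICE_ACCOUNT
      valueFrom:
        fieldRef:
          fieldPath: spec.serviceAccountName
```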
- # The default path used is /var/openebs/sparse - #- name: OPENEBS_IO_CSTOR_POOL_SPARSE_DIR - # value: "/var/openebs/sparse" - # OPENEBS_IO_JIVA_POOL_DIR can be used to specify the hostpath - # to be used for default Jiva StoragePool loaded by OpenEBS - # The default path used is /var/openebs - # This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG - # is set to true - #- name: OPENEBS_IO_JIVA_POOL_DIR - # value: "/var/openebs" - # OPENEBS_IO_LOCALPV_HOSTPATH_DIR can be used to specify the hostpath - # to be used for default openebs-hostpath storageclass loaded by OpenEBS - # The default path used is /var/openebs/local - # This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG - # is set to true - #- name: OPENEBS_IO_LOCALPV_HOSTPATH_DIR - # value: "/var/openebs/local" - - name: OPENEBS_IO_JIVA_CONTROLLER_IMAGE - value: "openebs/jiva:ci" - - name: OPENEBS_IO_JIVA_REPLICA_IMAGE - value: "openebs/jiva:ci" - - name: OPENEBS_IO_JIVA_REPLICA_COUNT - value: "3" - - name: OPENEBS_IO_CSTOR_TARGET_IMAGE - value: "openebs/cstor-istgt:ci" - - name: OPENEBS_IO_CSTOR_POOL_IMAGE - value: "openebs/cstor-pool:ci" - - name: OPENEBS_IO_CSTOR_POOL_MGMT_IMAGE - value: "openebs/cstor-pool-mgmt:ci" - - name: OPENEBS_IO_CSTOR_VOLUME_MGMT_IMAGE - value: "openebs/cstor-volume-mgmt:ci" - - name: OPENEBS_IO_VOLUME_MONITOR_IMAGE - value: "openebs/m-exporter:ci" - - name: OPENEBS_IO_CSTOR_POOL_EXPORTER_IMAGE - value: "openebs/m-exporter:ci" - - name: OPENEBS_IO_HELPER_IMAGE - value: "openebs/linux-utils:ci" - # OPENEBS_IO_ENABLE_ANALYTICS if set to true sends anonymous usage - # events to Google Analytics - - name: OPENEBS_IO_ENABLE_ANALYTICS - value: "false" - - name: OPENEBS_IO_INSTALLER_TYPE - value: "openebs-operator" - # OPENEBS_IO_ANALYTICS_PING_INTERVAL can be used to specify the duration (in hours) - # for periodic ping events sent to Google Analytics. - # Default is 24h. - # Minimum is 1h. You can convert this to weekly by setting 168h - #- name: OPENEBS_IO_ANALYTICS_PING_INTERVAL - # value: "24h" - livenessProbe: - exec: - command: - - /usr/local/bin/mayactl - - version - initialDelaySeconds: 30 - periodSeconds: 60 - readinessProbe: - exec: - command: - - /usr/local/bin/mayactl - - version - initialDelaySeconds: 30 - periodSeconds: 60 ---- -apiVersion: v1 -kind: Service -metadata: - name: maya-apiserver-service - namespace: openebs - labels: - openebs.io/component-name: maya-apiserver-svc -spec: - ports: - - name: api - port: 5656 - protocol: TCP - targetPort: 5656 - selector: - name: maya-apiserver - sessionAffinity: None ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: openebs-provisioner - namespace: openebs - labels: - name: openebs-provisioner - openebs.io/component-name: openebs-provisioner - openebs.io/version: dev -spec: - selector: - matchLabels: - name: openebs-provisioner - openebs.io/component-name: openebs-provisioner - replicas: 1 - strategy: - type: Recreate - rollingUpdate: null - template: - metadata: - labels: - name: openebs-provisioner - openebs.io/component-name: openebs-provisioner - openebs.io/version: dev - spec: - serviceAccountName: openebs-maya-operator - containers: - - name: openebs-provisioner - imagePullPolicy: IfNotPresent - image: openebs/openebs-k8s-provisioner:ci - env: - # OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s - # based on this address. This is ignored if empty. 
- # This is supported for openebs provisioner version 0.5.2 onwards - #- name: OPENEBS_IO_K8S_MASTER - # value: "http://10.128.0.12:8080" - # OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s - # based on this config. This is ignored if empty. - # This is supported for openebs provisioner version 0.5.2 onwards - #- name: OPENEBS_IO_KUBE_CONFIG - # value: "/home/ubuntu/.kube/config" - - name: NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - - name: OPENEBS_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - # OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name - # to which the provisioner should forward the volume create/delete requests. - # If not present, "maya-apiserver-service" will be used for lookup. - # This is supported for openebs provisioner version 0.5.3-RC1 onwards - #- name: OPENEBS_MAYA_SERVICE_NAME - # value: "maya-apiserver-apiservice" - # LEADER_ELECTION_ENABLED is used to enable/disable leader election. By default - # leader election is enabled. - #- name: LEADER_ELECTION_ENABLED - # value: "true" - # Process name used for matching is limited to the 15 characters - # present in the pgrep output. - # So fullname can't be used here with pgrep (>15 chars). A regular expression - # that matches the entire command name has to be specified. - # Anchor `^` : matches any string that starts with `openebs-provis` - # `.*`: matches any string that has `openebs-provis` followed by zero or more char - livenessProbe: - exec: - command: - - sh - - -c - - test `pgrep -c "^openebs-provisi.*"` = 1 - initialDelaySeconds: 30 - periodSeconds: 60 ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: openebs-snapshot-operator - namespace: openebs - labels: - name: openebs-snapshot-operator - openebs.io/component-name: openebs-snapshot-operator - openebs.io/version: dev -spec: - selector: - matchLabels: - name: openebs-snapshot-operator - openebs.io/component-name: openebs-snapshot-operator - replicas: 1 - strategy: - type: Recreate - template: - metadata: - labels: - name: openebs-snapshot-operator - openebs.io/component-name: openebs-snapshot-operator - openebs.io/version: dev - spec: - serviceAccountName: openebs-maya-operator - containers: - - name: snapshot-controller - image: openebs/snapshot-controller:ci - imagePullPolicy: IfNotPresent - env: - - name: OPENEBS_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - # Process name used for matching is limited to the 15 characters - # present in the pgrep output. - # So fullname can't be used here with pgrep (>15 chars). A regular expression - # that matches the entire command name has to be specified. - # Anchor `^` : matches any string that starts with `snapshot-contro` - # `.*`: matches any string that has `snapshot-contro` followed by zero or more char - livenessProbe: - exec: - command: - - sh - - -c - - test `pgrep -c "^snapshot-contro.*"` = 1 - initialDelaySeconds: 30 - periodSeconds: 60 - # OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name - # to which the snapshot controller should forward the snapshot create/delete requests. - # If not present, "maya-apiserver-service" will be used for lookup.
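The pgrep-based liveness checks above (and the similar ones on the snapshot and localpv deployments that follow) all work around the same kernel detail spelled out in those comments: pgrep matches the comm field, which truncates process names to 15 characters. A sketch of the technique in isolation, on a hypothetical pod and assuming the binary is named openebs-provisioner:

```yaml
# Sketch of the pgrep-based liveness probe pattern; the pod is a
# hypothetical stand-in. pgrep matches the kernel's comm field, which
# truncates process names to 15 characters, so "openebs-provisioner"
# can only be matched by its first 15 characters, "openebs-provisi".
apiVersion: v1
kind: Pod
metadata:
  name: pgrep-probe-demo
spec:
  containers:
  - name: demo
    image: openebs/openebs-k8s-provisioner:ci
    livenessProbe:
      exec:
        command:
        - sh
        - -c
        # healthy only while exactly one matching process is running
        - test `pgrep -c "^openebs-provisi.*"` = 1
      initialDelaySeconds: 30
      periodSeconds: 60
```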
- # This is supported for openebs provisioner version 0.5.3-RC1 onwards - #- name: OPENEBS_MAYA_SERVICE_NAME - # value: "maya-apiserver-apiservice" - - name: snapshot-provisioner - image: openebs/snapshot-provisioner:ci - imagePullPolicy: IfNotPresent - env: - - name: OPENEBS_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - # OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name - # to which the snapshot provisioner should forward the clone create/delete requests. - # If not present, "maya-apiserver-service" will be used for lookup. - # This is supported for openebs provisioner version 0.5.3-RC1 onwards - #- name: OPENEBS_MAYA_SERVICE_NAME - # value: "maya-apiserver-apiservice" - # LEADER_ELECTION_ENABLED is used to enable/disable leader election. By default - # leader election is enabled. - #- name: LEADER_ELECTION_ENABLED - # value: "true" - # Process name used for matching is limited to the 15 characters - # present in the pgrep output. - # So fullname can't be used here with pgrep (>15 chars). A regular expression - # that matches the entire command name has to be specified. - # Anchor `^` : matches any string that starts with `snapshot-provis` - # `.*`: matches any string that has `snapshot-provis` followed by zero or more char - livenessProbe: - exec: - command: - - sh - - -c - - test `pgrep -c "^snapshot-provis.*"` = 1 - initialDelaySeconds: 30 - periodSeconds: 60 ---- -apiVersion: apiextensions.k8s.io/v1 -kind: CustomResourceDefinition -metadata: - annotations: - controller-gen.kubebuilder.io/version: v0.5.0 - creationTimestamp: null - name: blockdevices.openebs.io -spec: - group: openebs.io - names: - kind: BlockDevice - listKind: BlockDeviceList - plural: blockdevices - shortNames: - - bd - singular: blockdevice - scope: Namespaced - versions: - - additionalPrinterColumns: - - jsonPath: .spec.nodeAttributes.nodeName - name: NodeName - type: string - - jsonPath: .spec.path - name: Path - priority: 1 - type: string - - jsonPath: .spec.filesystem.fsType - name: FSType - priority: 1 - type: string - - jsonPath: .spec.capacity.storage - name: Size - type: string - - jsonPath: .status.claimState - name: ClaimState - type: string - - jsonPath: .status.state - name: Status - type: string - - jsonPath: .metadata.creationTimestamp - name: Age - type: date - name: v1alpha1 - schema: - openAPIV3Schema: - description: BlockDevice is the Schema for the blockdevices API - properties: - apiVersion: - description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' - type: string - kind: - description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' - type: string - metadata: - type: object - spec: - description: DeviceSpec defines the properties and runtime status of a BlockDevice - properties: - aggregateDevice: - description: AggregateDevice was intended to store the hierarchical information in cases of LVM. However this is currently not implemented and may need to be re-looked into for better design.
To be deprecated - type: string - capacity: - description: Capacity - properties: - logicalSectorSize: - description: LogicalSectorSize is blockdevice logical-sector size in bytes - format: int32 - type: integer - physicalSectorSize: - description: PhysicalSectorSize is blockdevice physical-Sector size in bytes - format: int32 - type: integer - storage: - description: Storage is the blockdevice capacity in bytes - format: int64 - type: integer - required: - - storage - type: object - claimRef: - description: ClaimRef is the reference to the BDC which has claimed this BD - properties: - apiVersion: - description: API version of the referent. - type: string - fieldPath: - description: 'If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.' - type: string - kind: - description: 'Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' - type: string - name: - description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names' - type: string - namespace: - description: 'Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/' - type: string - resourceVersion: - description: 'Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency' - type: string - uid: - description: 'UID of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids' - type: string - type: object - details: - description: Details contain static attributes of BD like model,serial, and so forth - properties: - compliance: - description: Compliance is standards/specifications version implemented by device firmware such as SPC-1, SPC-2, etc - type: string - deviceType: - description: DeviceType represents the type of device like sparse, disk, partition, lvm, crypt - enum: - - disk - - partition - - sparse - - loop - - lvm - - crypt - - dm - - mpath - type: string - driveType: - description: DriveType is the type of backing drive, HDD/SSD - enum: - - HDD - - SSD - - Unknown - - "" - type: string - firmwareRevision: - description: FirmwareRevision is the disk firmware revision - type: string - hardwareSectorSize: - description: HardwareSectorSize is the hardware sector size in bytes - format: int32 - type: integer - logicalBlockSize: - description: LogicalBlockSize is the logical block size in bytes reported by /sys/class/block/sda/queue/logical_block_size - format: int32 - type: integer - model: - description: Model is model of disk - type: string - physicalBlockSize: - description: PhysicalBlockSize is the physical block size in bytes reported by /sys/class/block/sda/queue/physical_block_size - format: int32 - type: integer - serial: - description: Serial is serial number of disk - type: string - vendor: - description: Vendor is vendor of disk - type: string - type: object - devlinks: - description: DevLinks contains soft links of a block device like /dev/by-id/... /dev/by-uuid/... - items: - description: DeviceDevLink holds the mapping between type and links like by-id type or by-path type link - properties: - kind: - description: Kind is the type of link like by-id or by-path. - enum: - - by-id - - by-path - type: string - links: - description: Links are the soft links - items: - type: string - type: array - type: object - type: array - filesystem: - description: FileSystem contains mountpoint and filesystem type - properties: - fsType: - description: Type represents the FileSystem type of the block device - type: string - mountPoint: - description: MountPoint represents the mountpoint of the block device. - type: string - type: object - nodeAttributes: - description: NodeAttributes has the details of the node on which BD is attached - properties: - nodeName: - description: NodeName is the name of the Kubernetes node resource on which the device is attached - type: string - type: object - parentDevice: - description: "ParentDevice was intended to store the UUID of the parent Block Device as is the case for partitioned block devices. \n For example: /dev/sda is the parent for /dev/sda1 To be deprecated" - type: string - partitioned: - description: Partitioned represents if BlockDevice has partitions or not (Yes/No) Currently always default to No. To be deprecated - enum: - - "Yes" - - "No" - type: string - path: - description: Path contain devpath (e.g. 
/dev/sdb) - type: string - required: - - capacity - - devlinks - - nodeAttributes - - path - type: object - status: - description: DeviceStatus defines the observed state of BlockDevice - properties: - claimState: - description: ClaimState represents the claim state of the block device - enum: - - Claimed - - Unclaimed - - Released - type: string - state: - description: State is the current state of the blockdevice (Active/Inactive/Unknown) - enum: - - Active - - Inactive - - Unknown - type: string - required: - - claimState - - state - type: object - type: object - served: true - storage: true - subresources: {} -status: - acceptedNames: - kind: "" - plural: "" - conditions: [] - storedVersions: [] - ---- -apiVersion: apiextensions.k8s.io/v1 -kind: CustomResourceDefinition -metadata: - annotations: - controller-gen.kubebuilder.io/version: v0.5.0 - creationTimestamp: null - name: blockdeviceclaims.openebs.io -spec: - group: openebs.io - names: - kind: BlockDeviceClaim - listKind: BlockDeviceClaimList - plural: blockdeviceclaims - shortNames: - - bdc - singular: blockdeviceclaim - scope: Namespaced - versions: - - additionalPrinterColumns: - - jsonPath: .spec.blockDeviceName - name: BlockDeviceName - type: string - - jsonPath: .status.phase - name: Phase - type: string - - jsonPath: .metadata.creationTimestamp - name: Age - type: date - name: v1alpha1 - schema: - openAPIV3Schema: - description: BlockDeviceClaim is the Schema for the blockdeviceclaims API - properties: - apiVersion: - description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' - type: string - kind: - description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' - type: string - metadata: - type: object - spec: - description: DeviceClaimSpec defines the request details for a BlockDevice - properties: - blockDeviceName: - description: BlockDeviceName is the reference to the block-device backing this claim - type: string - blockDeviceNodeAttributes: - description: BlockDeviceNodeAttributes is the attributes on the node from which a BD should be selected for this claim. It can include nodename, failure domain etc. - properties: - hostName: - description: HostName represents the hostname of the Kubernetes node resource where the BD should be present - type: string - nodeName: - description: NodeName represents the name of the Kubernetes node resource where the BD should be present - type: string - type: object - deviceClaimDetails: - description: Details of the device to be claimed - properties: - allowPartition: - description: AllowPartition represents whether to claim a full block device or a device that is a partition - type: boolean - blockVolumeMode: - description: 'BlockVolumeMode represents whether to claim a device in Block mode or Filesystem mode. These are use cases of BlockVolumeMode: 1) Not specified: VolumeMode check will not be effective 2) VolumeModeBlock: BD should not have any filesystem or mountpoint 3) VolumeModeFileSystem: BD should have a filesystem and mountpoint. 
If DeviceFormat is specified then the format should match with the FSType in BD' - type: string - formatType: - description: Format of the device required, eg:ext4, xfs - type: string - type: object - deviceType: - description: DeviceType represents the type of drive like SSD, HDD etc., - nullable: true - type: string - hostName: - description: Node name from where blockdevice has to be claimed. To be deprecated. Use NodeAttributes.HostName instead - type: string - resources: - description: Resources will help with placing claims on Capacity, IOPS - properties: - requests: - additionalProperties: - anyOf: - - type: integer - - type: string - pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ - x-kubernetes-int-or-string: true - description: 'Requests describes the minimum resources required. eg: if storage resource of 10G is requested minimum capacity of 10G should be available TODO for validating' - type: object - required: - - requests - type: object - selector: - description: Selector is used to find block devices to be considered for claiming - properties: - matchExpressions: - description: matchExpressions is a list of label selector requirements. The requirements are ANDed. - items: - description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. - properties: - key: - description: key is the label key that the selector applies to. - type: string - operator: - description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. - type: string - values: - description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. - items: - type: string - type: array - required: - - key - - operator - type: object - type: array - matchLabels: - additionalProperties: - type: string - description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. - type: object - type: object - type: object - status: - description: DeviceClaimStatus defines the observed state of BlockDeviceClaim - properties: - phase: - description: Phase represents the current phase of the claim - type: string - required: - - phase - type: object - type: object - served: true - storage: true - subresources: {} -status: - acceptedNames: - kind: "" - plural: "" - conditions: [] - storedVersions: [] ---- -# This is the node-disk-manager related config. 
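Before the node-disk-manager config continues, a concrete instance may make the abstract claim schema above easier to read. A hypothetical BlockDeviceClaim (name, node, and size are illustrative placeholders) that asks NDM for any unclaimed device with at least 10Gi of capacity on one node:

```yaml
# Hypothetical BlockDeviceClaim built against the schema above: claim
# any unclaimed blockdevice with at least 10Gi of capacity on node
# "worker-1". All names and values are illustrative placeholders.
apiVersion: openebs.io/v1alpha1
kind: BlockDeviceClaim
metadata:
  name: demo-bdc
  namespace: openebs
spec:
  blockDeviceNodeAttributes:
    nodeName: worker-1
  resources:
    requests:
      storage: 10Gi
```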
-# It can be used to customize the disks probes and filters -apiVersion: v1 -kind: ConfigMap -metadata: - name: openebs-ndm-config - namespace: openebs - labels: - openebs.io/component-name: ndm-config -data: - # udev-probe is the default or primary probe which should be enabled to run ndm - # filterconfigs contains configs of filters - in the form of include - # and exclude comma separated strings - node-disk-manager.config: | - probeconfigs: - - key: udev-probe - name: udev probe - state: true - - key: seachest-probe - name: seachest probe - state: false - - key: smart-probe - name: smart probe - state: true - filterconfigs: - - key: os-disk-exclude-filter - name: os disk exclude filter - state: true - exclude: "/,/etc/hosts,/boot" - - key: vendor-filter - name: vendor filter - state: true - include: "" - exclude: "CLOUDBYT,OpenEBS" - - key: path-filter - name: path filter - state: true - include: "" - exclude: "/dev/loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/md,/dev/dm-,/dev/rbd,/dev/zd" ---- -apiVersion: apps/v1 -kind: DaemonSet -metadata: - name: openebs-ndm - namespace: openebs - labels: - name: openebs-ndm - openebs.io/component-name: ndm - openebs.io/version: dev -spec: - selector: - matchLabels: - name: openebs-ndm - openebs.io/component-name: ndm - updateStrategy: - type: RollingUpdate - template: - metadata: - labels: - name: openebs-ndm - openebs.io/component-name: ndm - openebs.io/version: dev - spec: - # By default the node-disk-manager will be run on all kubernetes nodes - # If you would like to limit this to only some nodes, say the nodes - # that have storage attached, you could label those nodes and use - # nodeSelector. - # - # e.g. label the storage nodes with - "openebs.io/nodegroup"="storage-node" - # kubectl label node <node-name> "openebs.io/nodegroup"="storage-node" - #nodeSelector: - # "openebs.io/nodegroup": "storage-node" - serviceAccountName: openebs-maya-operator - hostNetwork: true - # host PID is used to check status of iSCSI Service when the NDM - # API service is enabled - #hostPID: true - containers: - - name: node-disk-manager - image: openebs/node-disk-manager:ci - args: - - -v=4 - # The feature-gate is used to enable the new UUID algorithm. - - --feature-gates="GPTBasedUUID" - # Detect mount point and filesystem changes without restart. - # Uncomment the line below to enable the feature. - # --feature-gates="MountChangeDetection" - # The feature gate is used to start the gRPC API service. The gRPC server - # starts on port 9115 by default.
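The openebs-ndm-config ConfigMap above is the intended hook for tuning device discovery. As a sketch of a typical customization, the path-filter exclude list can be extended so NDM ignores an additional device prefix; the /dev/nvme9 entry below is a hypothetical example, and probeconfigs plus the other filters are elided for brevity (they would normally be kept):

```yaml
# Sketch: extend the path-filter exclude list in openebs-ndm-config so
# NDM ignores an extra device prefix. /dev/nvme9 is hypothetical;
# probeconfigs and the other filterconfigs are elided for brevity.
apiVersion: v1
kind: ConfigMap
metadata:
  name: openebs-ndm-config
  namespace: openebs
  labels:
    openebs.io/component-name: ndm-config
data:
  node-disk-manager.config: |
    filterconfigs:
      - key: path-filter
        name: path filter
        state: true
        include: ""
        exclude: "/dev/loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/md,/dev/dm-,/dev/rbd,/dev/zd,/dev/nvme9"
```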
This feature is currently in Alpha state - # - --feature-gates="APIService" - # The feature gate is used to enable NDM, to create blockdevice resources - # for unused partitions on the OS disk - # - --feature-gates="UseOSDisk" - imagePullPolicy: IfNotPresent - securityContext: - privileged: true - volumeMounts: - - name: config - mountPath: /host/node-disk-manager.config - subPath: node-disk-manager.config - readOnly: true - - name: udev - mountPath: /run/udev - - name: procmount - mountPath: /host/proc - readOnly: true - - name: devmount - mountPath: /dev - - name: basepath - mountPath: /var/openebs/ndm - - name: sparsepath - mountPath: /var/openebs/sparse - env: - # namespace in which NDM is installed will be passed to NDM Daemonset - # as environment variable - - name: NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - # pass hostname as env variable using downward API to the NDM container - - name: NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - # specify the directory where the sparse files need to be created. - # if not specified, then sparse files will not be created. - - name: SPARSE_FILE_DIR - value: "/var/openebs/sparse" - # Size(bytes) of the sparse file to be created. - - name: SPARSE_FILE_SIZE - value: "10737418240" - # Specify the number of sparse files to be created - - name: SPARSE_FILE_COUNT - value: "1" - # Process name used for matching is limited to the 15 characters - # present in the pgrep output. - # So fullname can be used here with pgrep (cmd is < 15 chars). - livenessProbe: - exec: - command: - - pgrep - - "ndm" - initialDelaySeconds: 30 - periodSeconds: 60 - volumes: - - name: config - configMap: - name: openebs-ndm-config - - name: udev - hostPath: - path: /run/udev - type: Directory - # mount /proc (to access mount file of process 1 of host) inside container - # to read mount-point of disks and partitions - - name: procmount - hostPath: - path: /proc - type: Directory - - name: devmount - # the /dev directory is mounted so that we have access to the devices that - # are connected at runtime of the pod. 
- hostPath: - path: /dev - type: Directory - - name: basepath - hostPath: - path: /var/openebs/ndm - type: DirectoryOrCreate - - name: sparsepath - hostPath: - path: /var/openebs/sparse ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: openebs-ndm-operator - namespace: openebs - labels: - name: openebs-ndm-operator - openebs.io/component-name: ndm-operator - openebs.io/version: dev -spec: - selector: - matchLabels: - name: openebs-ndm-operator - openebs.io/component-name: ndm-operator - replicas: 1 - strategy: - type: Recreate - template: - metadata: - labels: - name: openebs-ndm-operator - openebs.io/component-name: ndm-operator - openebs.io/version: dev - spec: - serviceAccountName: openebs-maya-operator - containers: - - name: node-disk-operator - image: openebs/node-disk-operator:ci - imagePullPolicy: IfNotPresent - env: - - name: WATCH_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - # the service account of the ndm-operator pod - - name: SERVICE_ACCOUNT - valueFrom: - fieldRef: - fieldPath: spec.serviceAccountName - - name: OPERATOR_NAME - value: "node-disk-operator" - - name: CLEANUP_JOB_IMAGE - value: "openebs/linux-utils:ci" - livenessProbe: - httpGet: - path: /healthz - port: 8585 - initialDelaySeconds: 15 - periodSeconds: 20 - readinessProbe: - httpGet: - path: /readyz - port: 8585 - initialDelaySeconds: 5 - periodSeconds: 10 ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: openebs-admission-server - namespace: openebs - labels: - app: admission-webhook - openebs.io/component-name: admission-webhook - openebs.io/version: dev -spec: - replicas: 1 - strategy: - type: Recreate - rollingUpdate: null - selector: - matchLabels: - app: admission-webhook - template: - metadata: - labels: - app: admission-webhook - openebs.io/component-name: admission-webhook - openebs.io/version: dev - spec: - serviceAccountName: openebs-maya-operator - containers: - - name: admission-webhook - image: openebs/admission-server:ci - imagePullPolicy: IfNotPresent - args: - - -alsologtostderr - - -v=2 - - 2>&1 - env: - - name: OPENEBS_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - - name: ADMISSION_WEBHOOK_NAME - value: "openebs-admission-server" - - name: ADMISSION_WEBHOOK_FAILURE_POLICY - value: "Fail" - # Process name used for matching is limited to the 15 characters - # present in the pgrep output. - # So fullname can't be used here with pgrep (>15 chars). A regular expression - # that matches the entire command name has to be specified. - # Anchor `^` : matches any string that starts with `admission-serve` - # `.*`: matches any string that has `admission-serve` followed by zero or more char
- livenessProbe: - exec: - command: - - sh - - -c - - test `pgrep -c "^admission-serve.*"` = 1 - initialDelaySeconds: 30 - periodSeconds: 60 ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: openebs-localpv-provisioner - namespace: openebs - labels: - name: openebs-localpv-provisioner - openebs.io/component-name: openebs-localpv-provisioner - openebs.io/version: dev -spec: - selector: - matchLabels: - name: openebs-localpv-provisioner - openebs.io/component-name: openebs-localpv-provisioner - replicas: 1 - strategy: - type: Recreate - template: - metadata: - labels: - name: openebs-localpv-provisioner - openebs.io/component-name: openebs-localpv-provisioner - openebs.io/version: dev - spec: - serviceAccountName: openebs-maya-operator - containers: - - name: openebs-provisioner-hostpath - imagePullPolicy: IfNotPresent - image: openebs/provisioner-localpv:ci - env: - # OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s - # based on this address. This is ignored if empty. - # This is supported for openebs provisioner version 0.5.2 onwards - #- name: OPENEBS_IO_K8S_MASTER - # value: "http://10.128.0.12:8080" - # OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s - # based on this config. This is ignored if empty. - # This is supported for openebs provisioner version 0.5.2 onwards - #- name: OPENEBS_IO_KUBE_CONFIG - # value: "/home/ubuntu/.kube/config" - - name: NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - - name: OPENEBS_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - # OPENEBS_SERVICE_ACCOUNT provides the service account of this pod as an - # environment variable - - name: OPENEBS_SERVICE_ACCOUNT - valueFrom: - fieldRef: - fieldPath: spec.serviceAccountName - - name: OPENEBS_IO_ENABLE_ANALYTICS - value: "false" - - name: OPENEBS_IO_INSTALLER_TYPE - value: "openebs-operator" - - name: OPENEBS_IO_HELPER_IMAGE - value: "openebs/linux-utils:2.3.0" - # LEADER_ELECTION_ENABLED is used to enable/disable leader election. By default - # leader election is enabled. - #- name: LEADER_ELECTION_ENABLED - # value: "true" - # Process name used for matching is limited to the 15 characters - # present in the pgrep output. - # So fullname can't be used here with pgrep (>15 chars). A regular expression - # that matches the entire command name has to be specified.
- # Anchor `^` : matches any string that starts with `provisioner-loc` - # `.*`: matches any string that has `provisioner-loc` followed by zero or more char - livenessProbe: - exec: - command: - - sh - - -c - - test `pgrep -c "^provisioner-loc.*"` = 1 - initialDelaySeconds: 30 - periodSeconds: 60 ---- - diff --git a/k8s/openebs-pg-dashboard.json b/k8s/openebs-pg-dashboard.json deleted file mode 100644 index 6d6444793f..0000000000 --- a/k8s/openebs-pg-dashboard.json +++ /dev/null @@ -1,700 +0,0 @@ -{ - "__inputs": [ - { - "name": "DS_OPENEBS_PROMETHEUS", - "label": "OpenEBS Prometheus", - "description": "", - "type": "datasource", - "pluginId": "prometheus", - "pluginName": "Prometheus" - } - ], - "__requires": [ - { - "type": "grafana", - "id": "grafana", - "name": "Grafana", - "version": "5.2.0" - }, - { - "type": "panel", - "id": "graph", - "name": "Graph", - "version": "5.0.0" - }, - { - "type": "datasource", - "id": "prometheus", - "name": "Prometheus", - "version": "5.0.0" - }, - { - "type": "panel", - "id": "singlestat", - "name": "Singlestat", - "version": "5.0.0" - }, - { - "type": "panel", - "id": "text", - "name": "Text", - "version": "5.0.0" - } - ], - "annotations": { - "list": [ - { - "builtIn": 1, - "datasource": "${DS_OPENEBS_PROMETHEUS}", - "enable": true, - "hide": true, - "iconColor": "rgba(0, 211, 255, 1)", - "limit": 100, - "name": "Annotations & Alerts", - "showIn": 0, - "type": "dashboard" - } - ] - }, - "editable": true, - "gnetId": null, - "graphTooltip": 0, - "id": null, - "iteration": 1532616818599, - "links": [ - { - "icon": "info", - "tags": [], - "targetBlank": true, - "title": "OpenEBS Docs", - "tooltip": "OpenEBS Documentation", - "type": "link", - "url": "http://docs.openebs.io/" - } - ], - "panels": [ - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": false, - "colors": [ - "rgba(245, 54, 54, 0.9)", - "rgba(237, 129, 40, 0.89)", - "rgba(50, 172, 45, 0.97)" - ], - "datasource": "${DS_OPENEBS_PROMETHEUS}", - "description": "Elapsed time since Volume was provisioned", - "format": "m", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "gridPos": { - "h": 9, - "w": 6, - "x": 0, - "y": 0 - }, - "id": 10, - "interval": null, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": false - }, - "tableColumn": "", - "targets": [ - { - "expr": "openebs_volume_uptime{job=\"openebs-volumes\", openebs_pv=~\"$OpenEBS\"}/60", - "format": "time_series", - "hide": false, - "intervalFactor": 2, - "refId": "A", - "step": 30 - } - ], - "thresholds": "", - "title": "Uptime", - "type": "singlestat", - "valueFontSize": "80%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "current" - }, - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": true, - "colors": [ - "#3f6833", - "rgba(237, 81, 40, 0.89)", - "rgba(245, 54, 54, 0.9)" - ], - "datasource": "${DS_OPENEBS_PROMETHEUS}", - "description": "Capacity Used by the Volume", - "format": "decgbytes", - "gauge": { - 
"maxValue": 5, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": false - }, - "gridPos": { - "h": 9, - "w": 6, - "x": 6, - "y": 0 - }, - "id": 2, - "interval": null, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": true, - "lineColor": "rgb(31, 120, 193)", - "show": false - }, - "tableColumn": "__name__", - "targets": [ - { - "expr": "openebs_size_of_volume{openebs_pv=~\"$OpenEBS\"}", - "format": "time_series", - "hide": false, - "intervalFactor": 2, - "legendFormat": "", - "refId": "A", - "step": 30 - } - ], - "thresholds": "", - "title": "Capacity", - "type": "singlestat", - "valueFontSize": "80%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "current" - }, - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": false, - "colors": [ - "rgba(245, 54, 54, 0.9)", - "rgba(237, 129, 40, 0.89)", - "rgba(50, 172, 45, 0.97)" - ], - "datasource": "${DS_OPENEBS_PROMETHEUS}", - "format": "decgbytes", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "gridPos": { - "h": 9, - "w": 6, - "x": 12, - "y": 0 - }, - "hideTimeOverride": true, - "id": 12, - "interval": null, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": true - }, - "tableColumn": "", - "targets": [ - { - "expr": "openebs_actual_used{openebs_pv=~\"$OpenEBS\"}", - "format": "time_series", - "intervalFactor": 1, - "refId": "A", - "step": 15 - } - ], - "thresholds": "", - "timeFrom": "2h", - "timeShift": null, - "title": "Storage Usage", - "type": "singlestat", - "valueFontSize": "80%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "current" - }, - { - "content": "\"OpenEBS\nOpenEBS\n\n
You're monitoring OpenEBS Volumes using Prometheus. For more information, check out the OpenEBS and Prometheus projects. If you would like to retain this volume monitoring data and much more, sign-up for MayaOnline
", - "gridPos": { - "h": 9, - "w": 6, - "x": 18, - "y": 0 - }, - "id": 11, - "links": [], - "mode": "html", - "title": "", - "transparent": true, - "type": "text" - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "${DS_OPENEBS_PROMETHEUS}", - "description": "IOPS", - "fill": 1, - "gridPos": { - "h": 7, - "w": 24, - "x": 0, - "y": 9 - }, - "id": 3, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": true, - "rightSide": true, - "show": true, - "sideWidth": 350, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "null", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": true, - "steppedLine": false, - "targets": [ - { - "expr": "increase(openebs_reads{job=\"openebs-volumes\", openebs_pv=~\"$OpenEBS\"}[1m])/60", - "format": "time_series", - "hide": false, - "intervalFactor": 2, - "legendFormat": "{{Reads}}", - "refId": "A", - "step": 2 - }, - { - "expr": "increase(openebs_writes{job=\"openebs-volumes\", openebs_pv=~\"$OpenEBS\"}[1m])/60", - "format": "time_series", - "intervalFactor": 2, - "legendFormat": "{{Writes}}", - "refId": "B", - "step": 2 - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "IOPS", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "none", - "label": null, - "logBase": 1, - "max": null, - "min": "0", - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "${DS_OPENEBS_PROMETHEUS}", - "description": "Throughput", - "fill": 1, - "gridPos": { - "h": 7, - "w": 24, - "x": 0, - "y": 16 - }, - "id": 9, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": true, - "rightSide": true, - "show": true, - "sideWidth": 350, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": true, - "steppedLine": false, - "targets": [ - { - "expr": "increase(openebs_read_block_count{job=\"openebs-volumes\", openebs_pv=~\"$OpenEBS\"}[1m])/(1024*1024*60)", - "format": "time_series", - "hide": false, - "intervalFactor": 2, - "legendFormat": "{{Read Throughput}}", - "refId": "A", - "step": 2 - }, - { - "expr": "increase(openebs_write_block_count{job=\"openebs-volumes\", openebs_pv=~\"$OpenEBS\"}[1m])/(1024*1024*60)", - "format": "time_series", - "hide": false, - "intervalFactor": 2, - "legendFormat": "{{Write Throughput}}", - "refId": "B", - "step": 2 - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Throughput", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "MBs", - "label": null, - "logBase": 1, - "max": null, - "min": "0", - "show": true - }, - { - "format": 
"short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "${DS_OPENEBS_PROMETHEUS}", - "description": "Latency", - "fill": 1, - "gridPos": { - "h": 7, - "w": 24, - "x": 0, - "y": 23 - }, - "id": 5, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": true, - "rightSide": true, - "show": true, - "sideWidth": 350, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "null", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": true, - "steppedLine": false, - "targets": [ - { - "expr": "((increase(openebs_read_time{job=\"openebs-volumes\", openebs_pv=~\"$OpenEBS\"}[1m]))/(increase(openebs_reads{job=\"openebs-volumes\", openebs_pv=~\"$OpenEBS\"}[1m])))/1000000", - "format": "time_series", - "hide": false, - "intervalFactor": 2, - "legendFormat": "{{Read Latency}}", - "refId": "A", - "step": 2 - }, - { - "expr": "((increase(openebs_write_time{job=\"openebs-volumes\", openebs_pv=~\"$OpenEBS\"}[1m]))/(increase(openebs_writes{job=\"openebs-volumes\", openebs_pv=~\"$OpenEBS\"}[1m])))/1000000", - "format": "time_series", - "hide": false, - "intervalFactor": 2, - "legendFormat": "{{Write Latency}}", - "refId": "B", - "step": 2 - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Latency", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "s", - "label": null, - "logBase": 1, - "max": null, - "min": "0", - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - } - ], - "refresh": false, - "schemaVersion": 16, - "style": "dark", - "tags": [], - "templating": { - "list": [ - { - "allValue": null, - "current": {}, - "datasource": "${DS_OPENEBS_PROMETHEUS}", - "hide": 0, - "includeAll": false, - "label": "OpenEBS Volume", - "multi": false, - "name": "OpenEBS", - "options": [], - "query": "label_values(openebs_size_of_volume, openebs_pv)", - "refresh": 1, - "regex": "", - "sort": 0, - "tagValuesQuery": "", - "tags": [], - "tagsQuery": "", - "type": "query", - "useTags": false - } - ] - }, - "time": { - "from": "now-6h", - "to": "now" - }, - "timepicker": { - "refresh_intervals": [ - "5s", - "10s", - "30s", - "1m", - "5m", - "15m", - "30m", - "1h", - "2h", - "1d" - ], - "time_options": [ - "5m", - "15m", - "1h", - "6h", - "12h", - "24h", - "2d", - "7d", - "30d" - ] - }, - "timezone": "", - "title": "OpenEBS Volume Stats", - "uid": "JOHe1vdiz", - "version": 2 -} diff --git a/k8s/openebs-pool-exporter.json b/k8s/openebs-pool-exporter.json deleted file mode 100644 index 8a3656bb42..0000000000 --- a/k8s/openebs-pool-exporter.json +++ /dev/null @@ -1,1120 +0,0 @@ -{ - "__inputs": [ - { - "name": "DS_OPENEBS-POOL-DASHBOARD", - "label": "openebs-pool-dashboard", - "description": "", - "type": "datasource", - "pluginId": "prometheus", - "pluginName": "Prometheus" - } - ], - "__requires": [ - { - "type": "grafana", - "id": "grafana", - "name": "Grafana", - "version": "5.2.0" - }, - { - "type": "panel", 
- "id": "graph", - "name": "Graph", - "version": "5.0.0" - }, - { - "type": "datasource", - "id": "prometheus", - "name": "Prometheus", - "version": "5.0.0" - }, - { - "type": "panel", - "id": "singlestat", - "name": "Singlestat", - "version": "5.0.0" - } - ], - "annotations": { - "list": [ - { - "builtIn": 1, - "datasource": "-- Grafana --", - "enable": true, - "hide": true, - "iconColor": "rgba(0, 211, 255, 1)", - "name": "Annotations & Alerts", - "type": "dashboard" - } - ] - }, - "editable": true, - "gnetId": null, - "graphTooltip": 0, - "id": null, - "iteration": 1552037373261, - "links": [], - "panels": [ - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": false, - "colors": [ - "#299c46", - "rgba(237, 129, 40, 0.89)", - "#d44a3a" - ], - "datasource": "${DS_OPENEBS-POOL-DASHBOARD}", - "description": "Status of pool (0, 1, 2, 3, 4, 5, 6) = {\"Offline\", \"Online\", \"Degraded\", \"Faulted\", \"Removed\", \"Unavail\", \"NoPoolsAvailable\"}", - "format": "none", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "gridPos": { - "h": 4, - "w": 4, - "x": 0, - "y": 0 - }, - "id": 9, - "interval": null, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": false - }, - "tableColumn": "", - "targets": [ - { - "expr": "openebs_pool_status{pool=\"$Pool\"}", - "format": "time_series", - "intervalFactor": 1, - "refId": "A" - } - ], - "thresholds": "", - "title": "Pool Status", - "type": "singlestat", - "valueFontSize": "80%", - "valueMaps": [ - { - "op": "=", - "text": "Offline", - "value": "0" - }, - { - "op": "=", - "text": "Online", - "value": "1" - }, - { - "op": "=", - "text": "Degraded", - "value": "2" - }, - { - "op": "=", - "text": "Faulted", - "value": "3" - }, - { - "op": "=", - "text": "Removed", - "value": "4" - }, - { - "op": "=", - "text": "Unavailable", - "value": "5" - }, - { - "op": "=", - "text": "No Pools Available", - "value": "6" - } - ], - "valueName": "current" - }, - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": false, - "colors": [ - "#299c46", - "rgba(237, 129, 40, 0.89)", - "#d44a3a" - ], - "datasource": "${DS_OPENEBS-POOL-DASHBOARD}", - "description": "Status of rebuild on replica (0, 1, 2, 3, 4, 5, 6)= {\"INIT\", \"DONE\", \"SNAP REBUILD INPROGRESS\", \"ACTIVE DATASET REBUILD INPROGRESS\", \"ERRORED\", \"FAILED\", \"UNKNOWN\"}", - "format": "none", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "gridPos": { - "h": 8, - "w": 4, - "x": 4, - "y": 0 - }, - "id": 11, - "interval": null, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "sparkline": { - "fillColor": 
"rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": false - }, - "tableColumn": "", - "targets": [ - { - "expr": "openebs_rebuild_status{pool=\"$Pool\", vol=\"$Replica\"}", - "format": "time_series", - "intervalFactor": 1, - "refId": "A" - } - ], - "thresholds": "", - "title": "Replica Rebuilding Status", - "type": "singlestat", - "valueFontSize": "80%", - "valueMaps": [ - { - "op": "=", - "text": "INIT", - "value": "0" - }, - { - "op": "=", - "text": "DONE", - "value": "1" - }, - { - "op": "=", - "text": "SNAP REBUILD IN PROGRESS", - "value": "2" - }, - { - "op": "=", - "text": "ACTIVE DATASET REBUILD IN PROGRESS", - "value": "3" - }, - { - "op": "=", - "text": "ERRORED", - "value": "4" - }, - { - "op": "=", - "text": "FAILED", - "value": "5" - }, - { - "op": "=", - "text": "UNKNOWN", - "value": "6" - } - ], - "valueName": "current" - }, - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": false, - "colors": [ - "#299c46", - "rgba(237, 129, 40, 0.89)", - "#d44a3a" - ], - "datasource": "${DS_OPENEBS-POOL-DASHBOARD}", - "description": "Total no of rebuild performed", - "format": "none", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "gridPos": { - "h": 4, - "w": 4, - "x": 8, - "y": 0 - }, - "id": 19, - "interval": null, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": false - }, - "tableColumn": "", - "targets": [ - { - "expr": "openebs_total_rebuild_done{pool=\"$Pool\", vol=\"$Replica\"}", - "format": "time_series", - "intervalFactor": 1, - "refId": "A" - } - ], - "thresholds": "", - "title": "Replica Total Rebuild Done Count", - "type": "singlestat", - "valueFontSize": "80%", - "valueMaps": [], - "valueName": "current" - }, - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": false, - "colors": [ - "#299c46", - "rgba(237, 129, 40, 0.89)", - "#d44a3a" - ], - "datasource": "${DS_OPENEBS-POOL-DASHBOARD}", - "description": "Available size of pool from zfs list", - "format": "none", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "gridPos": { - "h": 8, - "w": 4, - "x": 12, - "y": 0 - }, - "id": 12, - "interval": null, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": false - }, - "tableColumn": "", - "targets": [ - { - "expr": "openebs_available_size{name=\"$Pool\"}/(1024*1024*1024)", - "format": "time_series", - "intervalFactor": 1, - "refId": "A" - } - ], - "thresholds": "", - "title": "Available Pool Size", - "type": 
"singlestat", - "valueFontSize": "80%", - "valueMaps": [], - "valueName": "current" - }, - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": false, - "colors": [ - "#299c46", - "rgba(237, 129, 40, 0.89)", - "#d44a3a" - ], - "datasource": "${DS_OPENEBS-POOL-DASHBOARD}", - "description": "Used size of pool from zfs list", - "format": "none", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "gridPos": { - "h": 8, - "w": 4, - "x": 16, - "y": 0 - }, - "id": 13, - "interval": null, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": false - }, - "tableColumn": "", - "targets": [ - { - "expr": "openebs_used_size{name=\"$Pool\"}/(1024*1024*1024)", - "format": "time_series", - "intervalFactor": 1, - "refId": "A" - } - ], - "thresholds": "", - "title": "Used Pool Size", - "type": "singlestat", - "valueFontSize": "80%", - "valueMaps": [], - "valueName": "current" - }, - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": false, - "colors": [ - "#299c46", - "rgba(237, 129, 40, 0.89)", - "#d44a3a" - ], - "datasource": "${DS_OPENEBS-POOL-DASHBOARD}", - "description": "Used size of pool by replica from zfs list", - "format": "none", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "gridPos": { - "h": 8, - "w": 4, - "x": 20, - "y": 0 - }, - "id": 14, - "interval": null, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": false - }, - "tableColumn": "", - "targets": [ - { - "expr": "openebs_used_size{name=\"$Pool/$Replica\"}/(1024*1024*1024)", - "format": "time_series", - "intervalFactor": 1, - "refId": "A" - } - ], - "thresholds": "", - "title": "Used Pool Size by replica", - "type": "singlestat", - "valueFontSize": "80%", - "valueMaps": [], - "valueName": "current" - }, - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": false, - "colors": [ - "#299c46", - "rgba(237, 129, 40, 0.89)", - "#d44a3a" - ], - "datasource": "${DS_OPENEBS-POOL-DASHBOARD}", - "description": "Status of replica (0, 1, 2, 3) = {\"Offline\", \"Healthy\", \"Degraded\", \"Rebuilding\"}", - "format": "none", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "gridPos": { - "h": 4, - "w": 4, - "x": 0, - "y": 4 - }, - "id": 10, - "interval": null, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - 
"nullPointMode": "connected", - "nullText": null, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": false - }, - "tableColumn": "", - "targets": [ - { - "expr": "openebs_replica_status{pool=\"$Pool\", vol=\"$Replica\"}", - "format": "time_series", - "intervalFactor": 1, - "refId": "A" - } - ], - "thresholds": "", - "title": "Replica Status", - "type": "singlestat", - "valueFontSize": "80%", - "valueMaps": [ - { - "op": "=", - "text": "Offline", - "value": "0" - }, - { - "op": "=", - "text": "Healthy", - "value": "1" - }, - { - "op": "=", - "text": "Degraded", - "value": "2" - }, - { - "op": "=", - "text": "Rebuilding", - "value": "3" - } - ], - "valueName": "current" - }, - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": false, - "colors": [ - "#299c46", - "rgba(237, 129, 40, 0.89)", - "#d44a3a" - ], - "datasource": "${DS_OPENEBS-POOL-DASHBOARD}", - "description": "Total no of rebuild performed which are failed", - "format": "none", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "gridPos": { - "h": 4, - "w": 4, - "x": 8, - "y": 4 - }, - "id": 20, - "interval": null, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": false - }, - "tableColumn": "", - "targets": [ - { - "expr": "openebs_total_failed_rebuild{pool=\"$Pool\", vol=\"$Replica\"}", - "format": "time_series", - "intervalFactor": 1, - "refId": "A" - } - ], - "thresholds": "", - "title": "Replica Total Rebuild Failed Count", - "type": "singlestat", - "valueFontSize": "80%", - "valueMaps": [], - "valueName": "current" - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "${DS_OPENEBS-POOL-DASHBOARD}", - "description": "Read / Write / Sync IO's", - "fill": 1, - "gridPos": { - "h": 9, - "w": 12, - "x": 0, - "y": 8 - }, - "id": 16, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "null", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "openebs_total_write_count{pool=\"$Pool\", vol=\"$Replica\"}", - "format": "time_series", - "intervalFactor": 1, - "refId": "A" - }, - { - "expr": "openebs_total_read_count{pool=\"$Pool\", vol=\"$Replica\"}", - "format": "time_series", - "intervalFactor": 1, - "refId": "B" - }, - { - "expr": "openebs_sync_count{pool=\"$Pool\", vol=\"$Replica\"}", - "format": "time_series", - "intervalFactor": 1, - "refId": "C" - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "IO's count", - "tooltip": { - "shared": true, - "sort": 0, - 
"value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "${DS_OPENEBS-POOL-DASHBOARD}", - "description": "Write latency on replica", - "fill": 1, - "gridPos": { - "h": 9, - "w": 12, - "x": 12, - "y": 8 - }, - "id": 17, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "null", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "increase(openebs_write_latency{pool=\"$Pool\", vol=\"$Replica\"}[2m])/increase(openebs_write_block_count{pool=\"$Pool\", vol=\"$Replica\"}[2m])", - "format": "time_series", - "intervalFactor": 1, - "refId": "A" - }, - { - "expr": "increase(openebs_read_latency{pool=\"$Pool\", vol=\"$Replica\"}[2m])/increase(openebs_read_block_count{pool=\"$Pool\", vol=\"$Replica\"}[2m])", - "format": "time_series", - "intervalFactor": 1, - "refId": "B" - }, - { - "expr": "increase(openebs_sync_latency{pool=\"$Pool\", vol=\"$Replica\"}[2m])/increase(openebs_sync_count{pool=\"$Pool\", vol=\"$Replica\"}[2m])", - "format": "time_series", - "intervalFactor": 1, - "refId": "C" - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Latency", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "ns", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "${DS_OPENEBS-POOL-DASHBOARD}", - "description": "Read / Write / Sync io's in bytes", - "fill": 1, - "gridPos": { - "h": 9, - "w": 12, - "x": 0, - "y": 17 - }, - "id": 18, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "null", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "openebs_total_write_bytes{pool=\"$Pool\", vol=\"$Replica\"}", - "format": "time_series", - "intervalFactor": 1, - "refId": "A" - }, - { - "expr": "openebs_total_read_bytes{pool=\"$Pool\", vol=\"$Replica\"}", - "format": "time_series", - "intervalFactor": 1, - "refId": "B" - }, - { - "expr": "openebs_rebuild_bytes{pool=\"$Pool\", vol=\"$Replica\"}", - "format": "time_series", - "intervalFactor": 1, - "refId": "C" - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - 
"title": "IO's in bytes", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "bytes", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - } - ], - "schemaVersion": 16, - "style": "dark", - "tags": [], - "templating": { - "list": [ - { - "allValue": null, - "current": {}, - "datasource": "${DS_OPENEBS-POOL-DASHBOARD}", - "hide": 0, - "includeAll": false, - "label": null, - "multi": false, - "name": "Pool", - "options": [], - "query": "label_values(openebs_pool_status, pool)", - "refresh": 1, - "regex": "", - "sort": 0, - "tagValuesQuery": "", - "tags": [], - "tagsQuery": "", - "type": "query", - "useTags": false - }, - { - "allValue": null, - "current": {}, - "datasource": "${DS_OPENEBS-POOL-DASHBOARD}", - "hide": 0, - "includeAll": false, - "label": null, - "multi": false, - "name": "Replica", - "options": [], - "query": "label_values(openebs_replica_status, vol)", - "refresh": 1, - "regex": "", - "sort": 0, - "tagValuesQuery": "", - "tags": [], - "tagsQuery": "", - "type": "query", - "useTags": false - } - ] - }, - "time": { - "from": "now-6h", - "to": "now" - }, - "timepicker": { - "refresh_intervals": [ - "5s", - "10s", - "30s", - "1m", - "5m", - "15m", - "30m", - "1h", - "2h", - "1d" - ], - "time_options": [ - "5m", - "15m", - "1h", - "6h", - "12h", - "24h", - "2d", - "7d", - "30d" - ] - }, - "timezone": "", - "title": "Openebs Pool dashboard", - "uid": "JaXeyhjiz", - "version": 1 -} diff --git a/k8s/openebs-servicemonitor.yaml b/k8s/openebs-servicemonitor.yaml deleted file mode 100644 index 7b2d8eae99..0000000000 --- a/k8s/openebs-servicemonitor.yaml +++ /dev/null @@ -1,18 +0,0 @@ -apiVersion: monitoring.coreos.com/v1 -kind: ServiceMonitor -metadata: - labels: - app: openebs - name: openebs - namespace: openebs -spec: - endpoints: - - path: /metrics - port: exporter - namespaceSelector: - matchNames: - - openebs - selector: - matchLabels: - openebs.io/cas-type: cstor - monitoring: volume_exporter_prometheus diff --git a/k8s/openebs-storageclasses.yaml b/k8s/openebs-storageclasses.yaml deleted file mode 100644 index 999ce1de00..0000000000 --- a/k8s/openebs-storageclasses.yaml +++ /dev/null @@ -1,31 +0,0 @@ ---- -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: openebs-standard - annotations: - cas.openebs.io/config: | - - name: ReplicaCount - value: "3" -provisioner: openebs.io/provisioner-iscsi ---- -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: openebs-standalone - annotations: - cas.openebs.io/config: | - - name: ReplicaCount - value: "1" -provisioner: openebs.io/provisioner-iscsi ---- -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: openebs-mongodb - annotations: - cas.openebs.io/config: | - - name: FSType - value: "xfs" -provisioner: openebs.io/provisioner-iscsi ---- diff --git a/k8s/openebs-vol-stats-maya-exporter.json b/k8s/openebs-vol-stats-maya-exporter.json deleted file mode 100644 index fc11c2e60d..0000000000 --- a/k8s/openebs-vol-stats-maya-exporter.json +++ /dev/null @@ -1,700 +0,0 @@ -{ - "__inputs": [ - { - "name": "DS_OPENEBS", - "label": "openebs", - "description": "", - "type": "datasource", - "pluginId": "prometheus", - 
"pluginName": "Prometheus" - } - ], - "__requires": [ - { - "type": "grafana", - "id": "grafana", - "name": "Grafana", - "version": "5.2.0" - }, - { - "type": "panel", - "id": "graph", - "name": "Graph", - "version": "5.0.0" - }, - { - "type": "datasource", - "id": "prometheus", - "name": "Prometheus", - "version": "5.0.0" - }, - { - "type": "panel", - "id": "singlestat", - "name": "Singlestat", - "version": "5.0.0" - }, - { - "type": "panel", - "id": "text", - "name": "Text", - "version": "5.0.0" - } - ], - "annotations": { - "list": [ - { - "builtIn": 1, - "datasource": "${DS_OPENEBS}", - "enable": true, - "hide": true, - "iconColor": "rgba(0, 211, 255, 1)", - "limit": 100, - "name": "Annotations & Alerts", - "showIn": 0, - "type": "dashboard" - } - ] - }, - "editable": true, - "gnetId": null, - "graphTooltip": 0, - "id": null, - "iteration": 1542384304082, - "links": [ - { - "icon": "info", - "tags": [], - "targetBlank": true, - "title": "OpenEBS Docs", - "tooltip": "OpenEBS Documentation", - "type": "link", - "url": "http://docs.openebs.io/" - } - ], - "panels": [ - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": false, - "colors": [ - "rgba(245, 54, 54, 0.9)", - "rgba(237, 129, 40, 0.89)", - "rgba(50, 172, 45, 0.97)" - ], - "datasource": "${DS_OPENEBS}", - "description": "Elapsed time since Volume was provisioned", - "format": "m", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "gridPos": { - "h": 5, - "w": 6, - "x": 0, - "y": 0 - }, - "id": 10, - "interval": null, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": false - }, - "tableColumn": "", - "targets": [ - { - "expr": "openebs_volume_uptime{job=\"openebs-volumes\", openebs_pv=~\"$OpenEBS\"}/60", - "format": "time_series", - "hide": false, - "intervalFactor": 2, - "refId": "A", - "step": 30 - } - ], - "thresholds": "", - "title": "Uptime", - "type": "singlestat", - "valueFontSize": "80%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "current" - }, - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": true, - "colors": [ - "#3f6833", - "rgba(237, 81, 40, 0.89)", - "rgba(245, 54, 54, 0.9)" - ], - "datasource": "${DS_OPENEBS}", - "description": "Capacity Used by the Volume", - "format": "decgbytes", - "gauge": { - "maxValue": 5, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": false - }, - "gridPos": { - "h": 5, - "w": 6, - "x": 6, - "y": 0 - }, - "id": 2, - "interval": null, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": true, - "lineColor": "rgb(31, 
120, 193)", - "show": false - }, - "tableColumn": "__name__", - "targets": [ - { - "expr": "openebs_size_of_volume{openebs_pv=~\"$OpenEBS\"}", - "format": "time_series", - "hide": false, - "intervalFactor": 2, - "legendFormat": "", - "refId": "A", - "step": 30 - } - ], - "thresholds": "", - "title": "Capacity", - "type": "singlestat", - "valueFontSize": "80%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "current" - }, - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": false, - "colors": [ - "rgba(245, 54, 54, 0.9)", - "rgba(237, 129, 40, 0.89)", - "rgba(50, 172, 45, 0.97)" - ], - "datasource": "${DS_OPENEBS}", - "format": "decgbytes", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "gridPos": { - "h": 5, - "w": 6, - "x": 12, - "y": 0 - }, - "hideTimeOverride": true, - "id": 12, - "interval": null, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": true - }, - "tableColumn": "", - "targets": [ - { - "expr": "openebs_actual_used{openebs_pv=~\"$OpenEBS\"}", - "format": "time_series", - "intervalFactor": 1, - "refId": "A", - "step": 15 - } - ], - "thresholds": "", - "timeFrom": "2h", - "timeShift": null, - "title": "Storage Usage", - "type": "singlestat", - "valueFontSize": "80%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "current" - }, - { - "content": "\"OpenEBS\nOpenEBS\n\n

You're monitoring OpenEBS Volumes using Prometheus. For more information, check out the OpenEBS and Prometheus projects. If you would like to retain this volume monitoring data and much more, sign-up for MayaOnline

", - "gridPos": { - "h": 5, - "w": 6, - "x": 18, - "y": 0 - }, - "id": 11, - "links": [], - "mode": "html", - "title": "", - "transparent": true, - "type": "text" - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "${DS_OPENEBS}", - "description": "IOPS", - "fill": 1, - "gridPos": { - "h": 6, - "w": 24, - "x": 0, - "y": 5 - }, - "id": 3, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": true, - "rightSide": true, - "show": true, - "sideWidth": 350, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "null", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": true, - "steppedLine": false, - "targets": [ - { - "expr": "increase(openebs_reads{job=\"openebs-volumes\", openebs_pv=~\"$OpenEBS\"}[1m])/60", - "format": "time_series", - "hide": false, - "intervalFactor": 2, - "legendFormat": "{{Reads}}", - "refId": "A", - "step": 2 - }, - { - "expr": "increase(openebs_writes{job=\"openebs-volumes\", openebs_pv=~\"$OpenEBS\"}[1m])/60", - "format": "time_series", - "intervalFactor": 2, - "legendFormat": "{{Writes}}", - "refId": "B", - "step": 2 - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "IOPS", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "none", - "label": null, - "logBase": 1, - "max": null, - "min": "0", - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "${DS_OPENEBS}", - "description": "Throughput", - "fill": 1, - "gridPos": { - "h": 7, - "w": 24, - "x": 0, - "y": 11 - }, - "id": 9, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": true, - "rightSide": true, - "show": true, - "sideWidth": 350, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": true, - "steppedLine": false, - "targets": [ - { - "expr": "irate(openebs_total_read_bytes{job=\"openebs-volumes\", openebs_pv=~\"$OpenEBS\"}[1m])/60", - "format": "time_series", - "hide": false, - "intervalFactor": 2, - "legendFormat": "{{Read Throughput}}", - "refId": "A", - "step": 2 - }, - { - "expr": "irate(openebs_total_write_bytes{job=\"openebs-volumes\", openebs_pv=~\"$OpenEBS\"}[1m])/60", - "format": "time_series", - "hide": false, - "intervalFactor": 2, - "legendFormat": "{{Write Throughput}}", - "refId": "B", - "step": 2 - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Throughput", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "Bps", - "label": null, - "logBase": 1, - "max": null, - "min": "0", - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, 
- "min": null, - "show": false - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "${DS_OPENEBS}", - "description": "Latency", - "fill": 1, - "gridPos": { - "h": 7, - "w": 24, - "x": 0, - "y": 18 - }, - "id": 5, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": true, - "rightSide": true, - "show": true, - "sideWidth": 350, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "null", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": true, - "steppedLine": false, - "targets": [ - { - "expr": "((increase(openebs_read_time{job=\"openebs-volumes\", openebs_pv=~\"$OpenEBS\"}[1m]))/(increase(openebs_reads{job=\"openebs-volumes\", openebs_pv=~\"$OpenEBS\"}[1m])))", - "format": "time_series", - "hide": false, - "intervalFactor": 2, - "legendFormat": "{{Read Latency}}", - "refId": "A", - "step": 2 - }, - { - "expr": "((increase(openebs_write_time{job=\"openebs-volumes\", openebs_pv=~\"$OpenEBS\"}[1m]))/(increase(openebs_writes{job=\"openebs-volumes\", openebs_pv=~\"$OpenEBS\"}[1m])))", - "format": "time_series", - "hide": false, - "intervalFactor": 2, - "legendFormat": "{{Write Latency}}", - "refId": "B", - "step": 2 - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Latency", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "ns", - "label": null, - "logBase": 1, - "max": null, - "min": "0", - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - } - ], - "refresh": false, - "schemaVersion": 16, - "style": "dark", - "tags": [], - "templating": { - "list": [ - { - "allValue": null, - "current": {}, - "datasource": "${DS_OPENEBS}", - "hide": 0, - "includeAll": false, - "label": "OpenEBS Volume", - "multi": false, - "name": "OpenEBS", - "options": [], - "query": "label_values(openebs_size_of_volume, openebs_pv)", - "refresh": 1, - "regex": "", - "sort": 0, - "tagValuesQuery": "", - "tags": [], - "tagsQuery": "", - "type": "query", - "useTags": false - } - ] - }, - "time": { - "from": "now-30m", - "to": "now" - }, - "timepicker": { - "refresh_intervals": [ - "5s", - "10s", - "30s", - "1m", - "5m", - "15m", - "30m", - "1h", - "2h", - "1d" - ], - "time_options": [ - "5m", - "15m", - "1h", - "6h", - "12h", - "24h", - "2d", - "7d", - "30d" - ] - }, - "timezone": "", - "title": "OpenEBS Volume Stats - OpenEBS Exporter v0.7.2", - "uid": "", - "version": 2 -} \ No newline at end of file diff --git a/k8s/openshift/byo/baremetal/README.md b/k8s/openshift/byo/baremetal/README.md deleted file mode 100644 index fd032bf1d3..0000000000 --- a/k8s/openshift/byo/baremetal/README.md +++ /dev/null @@ -1,341 +0,0 @@ -# STEPS TO RUN OPENEBS ON A MULTI-NODE CENTOS7 OPENSHIFT CLUSTER - -This tutorial provides detailed instructions on how to setup a multi-node BYO (Bring-Your-Own-Host) OpenShift cluster on CentOS7 and -run applications on it with OpenEBS storage. 
- -### PRE-REQUISITES - -- At least two CentOS 7 hosts (virtual-machines/baremetal/cloud instances) with 2 vCPUs, 4G RAM and 16GB Hard disk. - -- Ensure that the following package dependencies are installed on the hosts via *yum install*. - **Note:** *yum update* may be needed prior to this step. - - - python, wget, git, net-tools, bind-utils, iptables-services, bridge-utils, bash-completion, kexec-tools, sos, psacct, docker-1.12.6 - -- Ensure that the following python packages are installed on the hosts via pip install. - **Note**: Python-pip can be installed via *easy_install pip* if not present already. - - - Ansible (>= 2.3) on the local machine or any one of the hosts (typically installed on the host used as openshift-master). - - pyYaml Python package on all the hosts. - -- Functional DNS server, with all hosts configured with appropriate domain names (Ensure *nslookup* of the hostnames is -successful in resolving the machine IP addresses). - -- Set up passwordless SSH between the Ansible host & other hosts. - -#### Notes: - -- System recommendations for a production cluster can be found [here](https://docs.openshift.com/container-platform/3.11/install/prerequisites.html#hardware). -This document focuses on bringing up a setup for evaluation purposes. - -- Ensure that the Docker service is running. - -### OPENSHIFT INSTALL STEPS - -#### Step-1: Download the OpenShift Ansible Playbooks - -Clone the OpenShift Ansible repository of any stable release branch to your Ansible machine and change -into the directory. Use the same version of *openshift-ansible* and *openshift-origin* release for installation. - -In this example, we shall install Openshift Origin release v3.7. - -``` -git clone https://github.com/openshift/openshift-ansible.git -cd openshift-ansible -``` -#### Step-2: Prepare the Openshift Inventory file - -Create the Ansible inventory file to install a simple openshift cluster with only master & nodes setup. The following inventory -template can be used. - -``` -cat openshift_inventory - -[OSEv3:children] -masters -nodes -etcd - -[OSEv3:vars] -# SSH user, this user should allow ssh based auth without requiring a password -ansible_ssh_user=root -ansible_ssh_port=22 -openshift_deployment_type=origin -deployment_type=origin -openshift_release=v3.7 -openshift_pkg_version=-3.7.0 -debug_level=2 -openshift_disable_check=disk_availability,memory_availability,docker_storage,docker_image_availability -openshift_master_default_subdomain=apps.cbqa.in -osm_default_node_selector='region=lab' - -openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/htpasswd'}] - -[masters] -CentOS1.cbqa.in - -[etcd] -CentOS1.cbqa.in - -[nodes] -CentOS1.cbqa.in openshift_node_labels="{'region': 'infra', 'zone': 'baremetal'}" openshift_schedulable=true -CentOS2.cbqa.in openshift_node_labels="{'region': 'lab', 'zone': 'baremetal'}" openshift_schedulable=true -CentOS3.cbqa.in openshift_node_labels="{'region': 'lab', 'zone': 'baremetal'}" openshift_schedulable=true -CentOS4.cbqa.in openshift_node_labels="{'region': 'lab', 'zone': 'baremetal'}" openshift_schedulable=true -``` -Note: The Openshift deploy cluster playbook performs a health-check prior to execution of the install roles to verify system -readiness. Typically, the following pitfalls may be observed: - -- Memory_availability & storage_availability - - - Issue: Checks fail if we don't adhere to production standards.
- - Workaround: Disable the check by adding it to the openshift_disable_check inventory variable. - -- Docker image availability - - - Issue: Checks fail if there are DNS issues/flaky networks due to which the docker.io registry cannot be accessed. Sometimes, this fails even when a manual inspection shows they are available and accessible to the machine. - - Workaround: If a manual Skopeo inspect is successful, disable the check by adding it to the openshift_disable_check inventory variable. - - Skopeo inspect example: ```skopeo inspect --tls-verify=false docker://docker.io/cockpit/kubernetes:latest``` - -- Docker storage availability - - - Issue: Can fail if the Docker service is not running. The daemon doesn't automatically run post yum install. - - Workaround: Restart the Docker daemon. - -- Package availability & Package version - - - Issue: Openshift packages with the desired versions (specified in the inventory) are not available for install with the default - repository setup. - - Workaround: The Openshift Origin packages are released separately for CentOS. The repositories for these need to be added - on the hosts. - - The packages are available [here](http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin/) and the GPG keys can be - downloaded from [here](https://github.com/CentOS-PaaS-SIG/centos-release-paas-common/blob/master/RPM-GPG-KEY-CentOS-SIG-PaaS) - - The following additions can be made to the existing CentOS repos (/etc/yum.repos.d/CentOS-Base.repo): - - ``` - #openshift - [openshift] - name=CentOS-OpenShift - baseurl=http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin/ - gpgcheck=1 - enabled=1 - gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS - ``` - -#### Step-3: Run the Ansible Playbook job to Set Up the Openshift Cluster - -Once the inventory file is ready, run the deploy_cluster playbook to set up the openshift cluster. The setup can take around -15-20 minutes depending on network speed and the resources available. - -**Note:** The deploy_cluster playbook also includes playbooks to set up Glusterfs, monitoring, logging, etc., which are optional. -In this example, only the etcd, master, node and management setup playbooks were executed, with the other playbook imports commented. - -``` -ansible-playbook -i openshift-ansible/openshift_inventory openshift-ansible/playbooks/deploy_cluster.yml -``` - -The playbook should complete without errors. The trailing output of the playbook run should look similar to the following: - -``` -PLAY RECAP ************************************************************************************************************* -CentOS1.cbqa.in : ok=404 changed=124 unreachable=0 failed=0 -CentOS2.cbqa.in : ok=144 changed=46 unreachable=0 failed=0 -CentOS3.cbqa.in : ok=144 changed=46 unreachable=0 failed=0 -CentOS4.cbqa.in : ok=144 changed=46 unreachable=0 failed=0 -localhost : ok=12 changed=0 unreachable=0 failed=0 - - -INSTALLER STATUS ******************************************************************************************************* -Initialization : Complete (0:00:43) -Health Check : Complete (0:00:11) -etcd Install : Complete (0:01:20) -Master Install : Complete (0:09:44) -Master Additional Install : Complete (0:00:48) -Node Install : Complete (0:06:28) -``` -Execute the following commands to verify successful installation.
- -``` -oc get nodes - -NAME STATUS AGE VERSION -centos1.cbqa.in Ready 16h v1.7.6+a08f5eeb62 -centos2.cbqa.in Ready 16h v1.7.6+a08f5eeb62 -centos3.cbqa.in Ready 16h v1.7.6+a08f5eeb62 -centos4.cbqa.in Ready 16h v1.7.6+a08f5eeb62 -``` - -#### Step-4: Initial setup - -- Run the following command to create a new admin user with cluster-admin role/permissions, which can be used to run the OpenEBS -operator and deploy applications. - -``` -oc adm policy add-cluster-role-to-user cluster-admin admin --as=system:admin -``` - -- Assign a password to the admin user. - -``` -htpasswd /etc/origin/htpasswd admin -``` - -- Log in as the admin user & use the "default" project (admin is logged into this project by default). - -``` -oc login -u admin -``` - -- Provide access to the host-volumes (which are needed by the OpenEBS volume replicas) by updating the default security context (scc). - -``` -oc edit scc restricted -``` - -Add ```allowHostDirVolumePlugin: true``` and save the changes. - -Alternatively, the following command may be used: - -``` - oc adm policy add-scc-to-user hostaccess admin --as=system:admin -``` - -- Allow the containers in the project to run as root. - -``` -oc adm policy add-scc-to-user anyuid -z default --as=system:admin -``` - -Note: While the above procedures may be sufficient to enable host access to the containers, it may also be necessary to: -- Disable selinux (via ```setenforce 0```) to ensure the same. -- Edit the restricted scc to use ```runAsUser: type: RunAsAny``` (the replica pod runs as the root user). - -#### Step-5: Set Up the OpenEBS Control Plane - -- Download the latest OpenEBS operator files and sample application specifications on the OpenShift-Master machine. - -``` -git clone https://github.com/openebs/openebs.git -cd openebs/k8s -``` - -- Apply the openebs-operator on the openshift cluster. - -``` -oc apply -f openebs-operator -oc apply -f openebs-storageclasses.yaml -``` - -- Verify that the openebs operator services are created successfully and the deployments are running. Also check whether the -storageclasses are created successfully.
- -``` -oc get deployments - -NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE -maya-apiserver 1 1 1 1 13h -openebs-provisioner 1 1 1 1 13h -``` - -``` -oc get pods - -NAME READY STATUS RESTARTS AGE -maya-apiserver-3053842955-wdxdl 1/1 Running 0 13h -openebs-provisioner-2499455298-n8lgc 1/1 Running 0 13h -``` - -``` -oc get svc - -NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE -kubernetes 172.30.0.1 443/TCP,53/UDP,53/TCP 17h -maya-apiserver-service 172.30.168.61 5656/TCP 13h -``` - -``` -oc get sa - -NAME SECRETS AGE -builder 2 17h -default 2 17h -deployer 2 17h -openebs-maya-operator 2 13h -``` - -``` -oc get clusterrole openebs-maya-operator - -NAME -openebs-maya-operator -``` - -``` -oc get clusterrolebindings openebs-maya-operator - -NAME ROLE USERS GROUPS SERVICE ACCOUNTS SUBJECTS -openebs-maya-operator /openebs-maya-operator default/openebs-maya-operator, default/default -``` - -``` -oc get sc - -NAME TYPE -openebs-cassandra openebs.io/provisioner-iscsi -openebs-es-data-sc openebs.io/provisioner-iscsi -openebs-jupyter openebs.io/provisioner-iscsi -openebs-kafka openebs.io/provisioner-iscsi -openebs-mongodb openebs.io/provisioner-iscsi -openebs-percona openebs.io/provisioner-iscsi -openebs-redis openebs.io/provisioner-iscsi -openebs-standalone openebs.io/provisioner-iscsi -openebs-standard openebs.io/provisioner-iscsi -openebs-zk openebs.io/provisioner-iscsi -``` - -#### Step-6: Deploy a sample application with OpenEBS storage - -- Use OpenEBS as persistent storage for a percona deployment by selecting the openebs-percona storageclass in the persistent -volume claim. A sample is available in the openebs git repo (which was cloned in the previous steps). - -Apply this percona deployment YAML. - -``` -cd demo/percona -oc apply -f demo-percona-mysql-pvc.yaml -``` - -- Verify that the deployment runs successfully. - -``` -oc get pods - -NAME READY STATUS RESTARTS AGE -maya-apiserver-3053842955-wdxdl 1/1 Running 0 13h -openebs-provisioner-2499455298-n8lgc 1/1 Running 0 13h -percona-1378140207-5q2gb 1/1 Running 0 11h -pvc-de965f7d-f301-11e7-a6ce-000c29a47920-ctrl-2226696718-sh8cc 2/2 Running 0 11h -pvc-de965f7d-f301-11e7-a6ce-000c29a47920-rep-4109589824-5zf7t 1/1 Running 0 11h -``` - -#### Step-7: Manage cluster from OpenShift management console - -- Log in to the OpenShift management console at https://:8443 as the "admin" user. Navigate -on the left pane to view different consoles and manage the cluster resources. - -![openshift](https://github.com/ksatchit/elves/blob/master/openshift/baremetal/images/openshift.jpg) - - - - - - - - - - diff --git a/k8s/openshift/byo/baremetal/containerized_openshift_readme.md b/k8s/openshift/byo/baremetal/containerized_openshift_readme.md deleted file mode 100644 index 6cf5ca9f80..0000000000 --- a/k8s/openshift/byo/baremetal/containerized_openshift_readme.md +++ /dev/null @@ -1,473 +0,0 @@ -## Procedure to run OpenEBS on Multi-Node Containerized OpenShift Cluster - -This tutorial provides detailed instructions on how to set up a multi-node BYO (Bring-Your-Own-Host) OpenShift Containerized cluster on -RHEL 7.5 and run applications on it with OpenEBS storage. - -### Prerequisites - --At least two RHEL 7.5 hosts (virtual-machines/baremetal/cloud instances) with 3 vCPUs, 16GB RAM and 60GB hard disk. --A valid Red Hat subscription - -### Attach OpenShift Container Platform Subscription - -1. As root on the target machines (both master and node), use subscription-manager to register the systems with Red Hat. - -``` -$ subscription-manager register -``` - -2.
Pull the latest subscription data from RHSM: - -``` -$ subscription-manager refresh -``` - -3. List the available subscriptions. - -``` -$ subscription-manager list --available -``` - -4. Find the pool ID that provides the OpenShift Container Platform subscription and attach it. - -``` -$ subscription-manager attach --pool= -``` - -5. Replace the string with the pool ID of the pool that provides OpenShift Container Platform. The pool ID is a long alphanumeric string. - -These RHEL systems are now authorized to install OpenShift Container Platform. Now you need to tell the systems from where to get -OpenShift Container Platform. - -### Set Up Repositories -On both master and node, use subscription-manager to enable the repositories that are necessary in order to install OpenShift Container -Platform. You may have already enabled the first two repositories in this example. - -``` -$ subscription-manager repos --enable="rhel-7-server-rpms" \ - --enable="rhel-7-server-extras-rpms" \ - --enable="rhel-7-server-ose-3.9-rpms" \ - --enable="rhel-7-fast-datapath-rpms" \ - --enable="rhel-7-server-ansible-2.4-rpms" -``` - -This command tells your RHEL system that the tools required to install OpenShift Container Platform will be available from these -repositories. Now we need the OpenShift Container Platform installer that is based on Ansible. - - -### Install the OpenShift Container Platform Package -The installer for OpenShift Container Platform is provided by the atomic-openshift-utils package. Install it using yum on both the -master and the node, after running yum update. - -``` -$ yum -y install wget git net-tools bind-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct -$ yum -y update -$ yum -y install atomic-openshift-utils -$ yum -y install docker -``` - --Functional DNS server, with all hosts configured with appropriate domain names (Ensure nslookup of the hostnames is successful in - resolving the machine's IP addresses). The detailed steps can be found by going to the following link: - (https://medium.com/@fromprasath/adding-centos-to-windows-domain-298977008f6c) - - ``` - [root@OSNode1 ~]# nslookup OSNode2 -Server: 20.10.21.21 -Address: 20.10.21.21#53 - -Name: OSNode2.cbqa.in -Address: 20.10.31.7 - -[root@OSNode2 ~]# nslookup OSNode1 -Server: 20.10.21.21 -Address: 20.10.21.21#53 - -Name: OSNode1.cbqa.in -Address: 20.10.31.6 -``` - - -Set up passwordless SSH between the Master and other Nodes. - - ### Run the Installer - - ``` - $ atomic-openshift-installer install - ``` - - This is an interactive installation process that guides you through the various steps. In most cases, you may want the default options. When it - starts, select the option for OpenShift Container Platform. You are installing one master and one node. - - - **Note**: The Openshift deploy cluster playbook performs a health-check prior to executing the install roles to verify system -readiness. Typically, the following pitfalls may be observed: - -- Memory_availability and storage_availability - - - Issue: Checks fail if we do not adhere to production standards. - - Workaround: Disable the check by adding it to the openshift_disable_check inventory variable. - -- Docker image availability - - - Issue: Checks fail if there are DNS issues/flaky networks due to which the docker.io registry cannot - be accessed. Sometimes, this fails even when a manual inspection shows that they are available and accessible to the machine.
- - Workaround: If a manual Skopeo inspect is successful, disable the check by adding it to the openshift_disable_check inventory variable. - - Skopeo inspect example: ```skopeo inspect --tls-verify=false docker://docker.io/cockpit/kubernetes:latest``` - -- Docker storage availability - - - Issue: Can fail if the Docker service is not running. The daemon does not automatically run post yum install. - - Workaround: Restart the Docker daemon. - -- Docker_image_availability - -If the above pitfall is observed during containerized OpenShift installation, you must copy the hosts file from -/root/.config/openshift/hosts to /etc/ansible/hosts and add the openshift_disable_check variable to that hosts file. - -``` -[root@osnode1 ansible]# cat hosts - -[OSEv3:children] -nodes -nfs -masters -etcd - -[OSEv3:vars] -openshift_master_cluster_public_hostname=None -ansible_ssh_user=root -openshift_master_cluster_hostname=None -openshift_hostname_check=false -deployment_type=openshift-enterprise -openshift_disable_check=disk_availability,memory_availability,docker_storage,docker_image_availability - -[nodes] -20.10.45.111 openshift_public_ip=20.10.45.111 openshift_ip=20.10.45.111 openshift_public_hostname=osnode1.mdataqa.in openshift_hostname=osnode1.mdataqa.in containerized=True connect_to=20.10.45.111 openshift_node_labels="{'region': 'infra'}" openshift_schedulable=True ansible_connection=local -20.10.45.112 openshift_public_ip=20.10.45.112 openshift_ip=20.10.45.112 openshift_public_hostname=osnode2.mdataqa.in openshift_hostname=osnode2.mdataqa.in containerized=True connect_to=20.10.45.112 openshift_node_labels="{'region': 'infra'}" openshift_schedulable=True - -[nfs] -20.10.45.111 openshift_public_ip=20.10.45.111 openshift_ip=20.10.45.111 openshift_public_hostname=osnode1.mdataqa.in openshift_hostname=osnode1.mdataqa.in containerized=True connect_to=20.10.45.111 ansible_connection=local - -[masters] -20.10.45.111 openshift_public_ip=20.10.45.111 openshift_ip=20.10.45.111 openshift_public_hostname=osnode1.mdataqa.in openshift_hostname=osnode1.mdataqa.in containerized=True connect_to=20.10.45.111 ansible_connection=local - -[etcd] -20.10.45.111 openshift_public_ip=20.10.45.111 openshift_ip=20.10.45.111 openshift_public_hostname=osnode1.mdataqa.in openshift_hostname=osnode1.mdataqa.in containerized=True connect_to=20.10.45.111 ansible_connection=local -``` - --While installing, if you get the following error, use the docker pull command to download the missing images on both master and nodes. - -``` - - 1. Hosts: 20.10.45.111 - Play: OpenShift Health Checks - Task: Run health checks (install) - EL - Message: One or more checks failed - Details: check "docker_image_availability": - One or more required container images are not available: - openshift3/node:v3.9.33, - openshift3/openvswitch:v3.9.33, - openshift3/ose-deployer:v3.9.33, - openshift3/ose-docker-registry:v3.9.33, - openshift3/ose-haproxy-router:v3.9.33, - openshift3/ose-pod:v3.9.33, - openshift3/ose:v3.9.33, - openshift3/registry-console:v3.9, - registry.access.redhat.com/rhel7/etcd - Checked with: skopeo inspect [--tls-verify=false] [--creds=:] docker:/// - Default registries searched: registry.access.redhat.com - Failed connecting to: registry.access.redhat.com - - - 2.
Hosts: 20.10.45.112 - Play: OpenShift Health Checks - Task: Run health checks (install) - EL - Message: One or more checks failed - Details: check "docker_image_availability": - One or more required container images are not available: - openshift3/node:v3.9.33, - openshift3/openvswitch:v3.9.33, - openshift3/ose-deployer:v3.9.33, - openshift3/ose-docker-registry:v3.9.33, - openshift3/ose-haproxy-router:v3.9.33, - openshift3/ose-pod:v3.9.33, - openshift3/registry-console:v3.9 - Checked with: skopeo inspect [--tls-verify=false] [--creds=:] docker:/// - Default registries searched: registry.access.redhat.com - Failed connecting to: registry.access.redhat.com -``` - -The following command can be used to download the required images (on both master and node). - -``` -docker pull openshift3/node:v3.9.33 -``` - --While installing, if you get the error "Currently, NetworkManager must be installed and enabled prior to installation", you must follow the steps mentioned below to make it active (master and node). - -``` -[root@ocp-node-3 ~]# systemctl show NetworkManager | grep ActiveState -ActiveState=inactive -$ systemctl enable NetworkManager; systemctl start NetworkManager -``` - -After installing, you will see the following output. - -``` - PLAY RECAP ********************************************************************* - 20.10.45.111 : ok=383 changed=142 unreachable=0 failed=0 - 20.10.45.112 : ok=61 changed=13 unreachable=0 failed=0 - localhost : ok=14 changed=0 unreachable=0 failed=0 - INSTALLER STATUS ***************************************************************************************************************************************************** - Initialization : Complete (0:00:58) - Health Check : Complete (0:05:35) - etcd Install : Complete (-1 day, 22:57:25) - NFS Install : Complete (0:00:48) - Master Install : Complete (0:06:25) - Master Additional Install : Complete (0:01:01) - Node Install : complete (0:03:20) -``` - --Installation takes approximately 10-15 minutes. - -### Start OpenShift Container Platform -After successful installation, use the following command to start OpenShift Container Platform. - -``` -systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers -``` - --Before you do anything more, log in at least one time with the default system:admin user and run the following command on the master. - -``` -$ oc login -u system:admin -``` - --Run the following command to verify that OpenShift Container Platform was installed and started successfully. -``` -$ oc get nodes -``` - -### Change Log In Identity Provider -The default behavior of a freshly installed OpenShift Container Platform instance is to deny any user from logging in. To change the -authentication method to HTPasswd: - -1. Open the /etc/origin/master/master-config.yaml file in edit mode. -2. Find the identityProviders section. -3. Change DenyAllPasswordIdentityProvider to HTPasswdPasswordIdentityProvider. -4. Change the value of the name label to htpasswd_auth and add a new line file: /etc/origin/openshift-passwd in the provider section. - -An example identityProviders section with HTPasswdPasswordIdentityProvider will look like the following. - -``` -oauthConfig: - ... - identityProviders: - - challenge: true - login: true - name: htpasswd_auth - provider: - apiVersion: v1 - kind: HTPasswdPasswordIdentityProvider - file: /etc/origin/openshift-passwd -``` - -5. Save the file. - -### Create User Accounts -1.
You can use the httpd-tools package to obtain the htpasswd binary that can generate these accounts. - -``` -# yum -y install httpd-tools -``` - -2. Create a user account. - -``` -# touch /etc/origin/openshift-passwd -# htpasswd -b /etc/origin/openshift-passwd admin redhat -``` -You have now created a user named admin with the password redhat. - -3. Restart OpenShift before going forward. - -``` -# systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers -``` - -4. Give this user account cluster-admin privileges, which allows it to do everything. - -``` -oc adm policy add-cluster-role-to-user cluster-admin admin --as=system:admin -``` - -5. You can use this username/password combination to log in via the web console or the command line. To test this, run the following command. - -``` -$ oc login -u admin -``` - -6. Provide access to the host-volumes (which are needed by the OpenEBS volume replicas) by updating the default security context (scc). - -``` -oc edit scc restricted -``` - --Add ```allowHostDirVolumePlugin: true```, ```runAsUser: type: RunAsAny``` and save the changes. - -7. Allow the containers in the project to run as root. - -``` -oc adm policy add-scc-to-user anyuid -z default --as=system:admin -``` -**Note**: While the above procedures may be sufficient to enable host access to the containers, you may also need to disable SELinux (via setenforce 0) to ensure the same. - - -### Set Up OpenEBS Control Plane --Download the latest OpenEBS operator files and sample application specifications on the OpenShift-Master machine. - -``` -git clone https://github.com/openebs/openebs.git -cd openebs/k8s -``` --Apply the openebs-operator on the openshift cluster. - -``` -oc apply -f openebs-operator -oc apply -f openebs-storageclasses.yaml -``` - --After applying the operator YAML, the pods may remain in the Pending state; describing the maya-apiserver pod then shows the following error message. - -``` -[root@osnode1 ~]# oc get pods -n openebs -NAME READY STATUS RESTARTS AGE -maya-apiserver-6f48fc5449-7kxfz 0/1 Pending 0 3h -openebs-provisioner-7bf6fd7c8f-ph84w 0/1 Pending 0 3h -openebs-snapshot-operator-8bd769dc7-8c47x 0/2 Pending 0 3h - - - -Events: - Type Reason Age From Message - ---- ------ ---- ---- ------- - Warning FailedScheduling 1m (x772 over 3h) default-scheduler 0/2 nodes are available: 2 MatchNodeSelector. -``` - -- You need to have the Node-Selectors label set on the nodes that you want to use for compute, i.e. run oc edit node osnode1.mdataqa.in and - insert the label node-role.kubernetes.io/compute: "true". That will allow your pods to be scheduled. - -``` - labels: - beta.kubernetes.io/arch: amd64 - beta.kubernetes.io/os: linux - kubernetes.io/hostname: osnode1.mdataqa.in - node-role.kubernetes.io/compute: "true" - node-role.kubernetes.io/master: "true" -``` - --Reapply the openebs-operator and openebs-storageclasses YAML files, as sketched below, and verify the control plane again.
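For convenience, a minimal sketch of that reapply step, assuming the same openebs/k8s working directory that was cloned earlier (the commands reuse what was already shown above; the -w watch flag is standard oc/kubectl behaviour):

```
# Re-run the same manifests from the openebs/k8s directory; the previously
# Pending control-plane pods get rescheduled now that the compute label is set.
oc apply -f openebs-operator
oc apply -f openebs-storageclasses.yaml

# Watch the OpenEBS pods until they reach the Running state.
oc get pods -n openebs -w
```

The verification output below shows what a healthy control plane looks like after the reapply.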
- -``` -[root@osnode1 prabhat]# oc get deployments -n openebs -NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE -maya-apiserver 1 1 1 1 3d -openebs-provisioner 1 1 1 1 3d -openebs-snapshot-operator 1 1 1 1 3d -``` - -``` -[root@osnode1 prabhat]# oc get pods -n openebs -NAME READY STATUS RESTARTS AGE -maya-apiserver-dc8f6bf4d-75bd4 1/1 Running 0 3d -openebs-provisioner-7b975bcd56-whjxq 1/1 Running 0 3d -openebs-snapshot-operator-7f96fc56-8xcw8 2/2 Running 0 3d -``` - -``` -[root@osnode1 prabhat]# oc get svc -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -kubernetes ClusterIP 172.30.0.1 443/TCP,53/UDP,53/TCP 4d -pvc-1f85ecd4-90e4-11e8-bbab-000c29d8ed2b-ctrl-svc ClusterIP 172.30.236.57 3260/TCP,9501/TCP 3d -[root@osnode1 prabhat]# oc get svc -n openebs -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -maya-apiserver-service ClusterIP 172.30.242.47 5656/TCP 3d -``` - -``` -[root@osnode1 prabhat]# oc get sa -n openebs -NAME SECRETS AGE -builder 2 3d -default 2 3d -deployer 2 3d -openebs-maya-operator 2 3d -``` - -``` -[root@osnode1 prabhat]# oc get clusterrole openebs-maya-operator -NAME -openebs-maya-operator -``` - -``` -[root@osnode1 prabhat]# oc get clusterrolebindings openebs-maya-operator -NAME ROLE USERS GROUPS SERVICE ACCOUNTS SUBJECTS -openebs-maya-operator /openebs-maya-operator openebs/openebs-maya-operator, default/default -``` - -``` -[root@osnode1 prabhat]# oc get sc -NAME PROVISIONER AGE -openebs-cassandra openebs.io/provisioner-iscsi 3d -openebs-es-data-sc openebs.io/provisioner-iscsi 3d -openebs-jupyter openebs.io/provisioner-iscsi 3d -openebs-kafka openebs.io/provisioner-iscsi 3d -openebs-mongodb openebs.io/provisioner-iscsi 3d -openebs-percona openebs.io/provisioner-iscsi 3d -openebs-redis openebs.io/provisioner-iscsi 3d -openebs-snapshot-promoter volumesnapshot.external-storage.k8s.io/snapshot-promoter 3d -openebs-standalone openebs.io/provisioner-iscsi 3d -openebs-standard openebs.io/provisioner-iscsi 3d -openebs-zk openebs.io/provisioner-iscsi 3d -``` - -#### Deploy a sample application with OpenEBS storage - -- Use OpenEBS as persistent storage for a Percona deployment by selecting the openebs-percona storageclass in the persistent -volume claim. - -Apply this Percona deployment YAML; an illustrative sketch of the kind of claim it defines is shown below, followed by the apply commands.
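For orientation, the claim inside such a demo YAML looks roughly like the following sketch. The claim name and requested size are illustrative (not taken from the demo file), and the beta storage-class annotation mirrors the style used by the db-templates later in this diff.

```
# Illustrative sketch only: a PVC that selects the openebs-percona storage
# class so the OpenEBS provisioner creates the backing iSCSI volume.
# The metadata.name and storage size below are example values.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-vol1-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: openebs-percona
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5G
```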
- -``` -cd demo/percona -oc apply -f demo-percona-mysql-pvc.yaml -``` - -``` -[root@osnode1 prabhat]# oc get pods -NAME READY STATUS RESTARTS AGE -percona 1/1 Running 0 3d -pvc-1f85ecd4-90e4-11e8-bbab-000c29d8ed2b-ctrl-7bc6dbf48b-hvf5r 2/2 Running 0 3d -pvc-1f85ecd4-90e4-11e8-bbab-000c29d8ed2b-rep-85bf9bb55b-jc7np 1/1 Running 0 3d -pvc-1f85ecd4-90e4-11e8-bbab-000c29d8ed2b-rep-85bf9bb55b-t4jrt 1/1 Running 0 3d -``` - -The related documentation is available here: [openshift](https://access.redhat.com/documentation/en-us/openshift_container_platform/3.9/html-single/getting_started/#developers-console-before-you-begin) - - - - - - - - - - - - - - - - - - diff --git a/k8s/openshift/byo/baremetal/images/openshift.jpg b/k8s/openshift/byo/baremetal/images/openshift.jpg deleted file mode 100644 index 928d5265a5..0000000000 Binary files a/k8s/openshift/byo/baremetal/images/openshift.jpg and /dev/null differ diff --git a/k8s/openshift/byo/vagrant/README.md b/k8s/openshift/byo/vagrant/README.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/k8s/openshift/examples/README.md b/k8s/openshift/examples/README.md deleted file mode 100644 index 19f19b444e..0000000000 --- a/k8s/openshift/examples/README.md +++ /dev/null @@ -1,14 +0,0 @@ -OpenShift Examples -================ - -You can install example db-templates by copying examples from this module to your -first master and importing them with oc create -n into the openshift namespace or -using the "Import YAML / JSON" button from the web console. - -Templates may require specific versions of OpenShift so they've been namespaced. -At this time, once a new version of Origin is released -the older versions will only receive new content by specific request. - -Please file an issue at https://github.com/openebs/openebs if you'd -like to see older content updated and tested to ensure it's backwards -compatible. diff --git a/k8s/openshift/examples/latest b/k8s/openshift/examples/latest deleted file mode 100644 index 8cad94b63e..0000000000 --- a/k8s/openshift/examples/latest +++ /dev/null @@ -1 +0,0 @@ -v3.9 \ No newline at end of file diff --git a/k8s/openshift/examples/v3.6/db-templates/README.md b/k8s/openshift/examples/v3.6/db-templates/README.md deleted file mode 100644 index 87438cf14b..0000000000 --- a/k8s/openshift/examples/v3.6/db-templates/README.md +++ /dev/null @@ -1,53 +0,0 @@ -OpenShift 3 Database Examples -============================= - -This directory contains example JSON templates to deploy databases in OpenShift -on OpenEBS volumes. They can be used to immediately instantiate a database and -expose it as a service in the current project, or to add a template that can be -later used from the Web Console or the CLI. - -The examples can also be tweaked to create new templates. - - -## Usage - -### Instantiating a new database service - -Use these instructions if you want to quickly deploy a new database service in -your current project. Instantiate a new database service with this command: - - $ oc new-app /path/to/template.json - -Replace `/path/to/template.json` with an appropriate path, which can be either a -local path or a URL.
Example: - - $ oc new-app https://raw.githubusercontent.com/openebs/openebs/master/k8s/openshift/examples/db-templates/openebs-mongodb-persistent-template.json - -The parameters listed in the output above can be tweaked by specifying values in -the command line with the `-p` option: - - $ oc new-app examples/db-templates/openebs-mongodb-persistent-template.json -p DATABASE_SERVICE_NAME=mydb -p MONGODB_USER=default -p STORAGE_CLASS_NAME=openebs-default - -### Deleting a database service - -Use these instructions when you need to completely delete the app and persistent -volumes in your current project. Separately delete your database service and -persistentvolumeclaim using the commands below: - - $ oc delete all -l app=openebs-mongodb-persistent - -Use the "oc get pvc" command to find your persistentvolumeclaim name: - - $ oc get pvc - -Delete the pvc using the correct name from the output of the above command: - - $ oc delete pvc mongodb - $ oc delete secret mongodb - -## More information - -The usage of each supported database image is further documented in the links -below: - -- [MongoDB](https://docs.openshift.org/latest/using_images/db_images/mongodb.html) diff --git a/k8s/openshift/examples/v3.6/db-templates/openebs-mongodb-persistent-template.json b/k8s/openshift/examples/v3.6/db-templates/openebs-mongodb-persistent-template.json deleted file mode 100644 index ee775a367f..0000000000 --- a/k8s/openshift/examples/v3.6/db-templates/openebs-mongodb-persistent-template.json +++ /dev/null @@ -1,306 +0,0 @@ -{ - "kind": "Template", - "apiVersion": "v1", - "metadata": { - "name": "openebs-mongodb-persistent", - "annotations": { - "openshift.io/display-name": "OpenEBS MongoDB (Persistent)", - "description": "MongoDB database service, with persistent storage from OpenEBS. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/mongodb-container/blob/master/3.2/README.md.\n\nNOTE: Scaling to more than one replica is not supported. You must have persistent volumes available in your cluster to use this template.", - "iconClass": "icon-mongodb", - "tags": "database,mongodb", - "template.openshift.io/long-description": "This template provides a standalone MongoDB server with a database created. The database is stored on persistent storage provided by OpenEBS.
The database name, username, and password are chosen via parameters when provisioning this service.", - "template.openshift.io/provider-display-name": "Red Hat, Inc.", - "template.openshift.io/documentation-url": "https://docs.openshift.org/latest/using_images/db_images/mongodb.html", - "template.openshift.io/support-url": "https://access.redhat.com" - } - }, - "message": "The following service(s) have been created in your project: ${DATABASE_SERVICE_NAME}.\n\n Username: ${MONGODB_USER}\n Password: ${MONGODB_PASSWORD}\n Database Name: ${MONGODB_DATABASE}\n Connection URL: mongodb://${MONGODB_USER}:${MONGODB_PASSWORD}@${DATABASE_SERVICE_NAME}/${MONGODB_DATABASE}\n\nFor more information about using this template, including OpenShift considerations, see https://github.com/sclorg/mongodb-container/blob/master/3.2/README.md.", - "labels": { - "template": "mongodb-persistent-template" - }, - "objects": [ - { - "kind": "Secret", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "template.openshift.io/expose-username": "{.data['database-user']}", - "template.openshift.io/expose-password": "{.data['database-password']}", - "template.openshift.io/expose-admin_password": "{.data['database-admin-password']}", - "template.openshift.io/expose-database_name": "{.data['database-name']}" - } - }, - "stringData" : { - "database-user" : "${MONGODB_USER}", - "database-password" : "${MONGODB_PASSWORD}", - "database-admin-password" : "${MONGODB_ADMIN_PASSWORD}", - "database-name" : "${MONGODB_DATABASE}" - } - }, - { - "kind": "Service", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "template.openshift.io/expose-uri": "mongodb://{.spec.clusterIP}:{.spec.ports[?(.name==\"mongo\")].port}" - } - }, - "spec": { - "ports": [ - { - "name": "mongo", - "protocol": "TCP", - "port": 27017, - "targetPort": 27017, - "nodePort": 0 - } - ], - "selector": { - "name": "${DATABASE_SERVICE_NAME}" - }, - "type": "ClusterIP", - "sessionAffinity": "None" - }, - "status": { - "loadBalancer": {} - } - }, - { - "kind": "PersistentVolumeClaim", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "volume.beta.kubernetes.io/storage-class": "${STORAGE_CLASS_NAME}" - } - }, - "spec": { - "accessModes": [ - "ReadWriteOnce" - ], - "resources": { - "requests": { - "storage": "${VOLUME_CAPACITY}" - } - } - } - }, - { - "kind": "DeploymentConfig", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "template.alpha.openshift.io/wait-for-ready": "true" - } - }, - "spec": { - "strategy": { - "type": "Recreate" - }, - "triggers": [ - { - "type": "ImageChange", - "imageChangeParams": { - "automatic": true, - "containerNames": [ - "mongodb" - ], - "from": { - "kind": "ImageStreamTag", - "name": "mongodb:${MONGODB_VERSION}", - "namespace": "${NAMESPACE}" - }, - "lastTriggeredImage": "" - } - }, - { - "type": "ConfigChange" - } - ], - "replicas": 1, - "selector": { - "name": "${DATABASE_SERVICE_NAME}" - }, - "template": { - "metadata": { - "labels": { - "name": "${DATABASE_SERVICE_NAME}" - } - }, - "spec": { - "containers": [ - { - "name": "mongodb", - "image": " ", - "ports": [ - { - "containerPort": 27017, - "protocol": "TCP" - } - ], - "readinessProbe": { - "timeoutSeconds": 1, - "initialDelaySeconds": 3, - "exec": { - "command": [ "/bin/sh", "-i", "-c", "mongo 127.0.0.1:27017/$MONGODB_DATABASE -u $MONGODB_USER -p $MONGODB_PASSWORD --eval=\"quit()\""] - } - }, - 
"livenessProbe": { - "timeoutSeconds": 1, - "initialDelaySeconds": 30, - "tcpSocket": { - "port": 27017 - } - }, - "env": [ - { - "name": "MONGODB_USER", - "valueFrom": { - "secretKeyRef" : { - "name" : "${DATABASE_SERVICE_NAME}", - "key" : "database-user" - } - } - }, - { - "name": "MONGODB_PASSWORD", - "valueFrom": { - "secretKeyRef" : { - "name" : "${DATABASE_SERVICE_NAME}", - "key" : "database-password" - } - } - }, - { - "name": "MONGODB_ADMIN_PASSWORD", - "valueFrom": { - "secretKeyRef" : { - "name" : "${DATABASE_SERVICE_NAME}", - "key" : "database-admin-password" - } - } - }, - { - "name": "MONGODB_DATABASE", - "valueFrom": { - "secretKeyRef" : { - "name" : "${DATABASE_SERVICE_NAME}", - "key" : "database-name" - } - } - } - ], - "resources": { - "limits": { - "memory": "${MEMORY_LIMIT}" - } - }, - "volumeMounts": [ - { - "name": "${DATABASE_SERVICE_NAME}-data", - "mountPath": "/var/lib/mongodb/data" - } - ], - "terminationMessagePath": "/dev/termination-log", - "imagePullPolicy": "IfNotPresent", - "capabilities": {}, - "securityContext": { - "capabilities": {}, - "privileged": false - } - } - ], - "volumes": [ - { - "name": "${DATABASE_SERVICE_NAME}-data", - "persistentVolumeClaim": { - "claimName": "${DATABASE_SERVICE_NAME}" - } - } - ], - "restartPolicy": "Always", - "dnsPolicy": "ClusterFirst" - } - } - }, - "status": {} - } - ], - "parameters": [ - { - "name": "MEMORY_LIMIT", - "displayName": "Memory Limit", - "description": "Maximum amount of memory the container can use.", - "value": "512Mi", - "required": true - }, - { - "name": "NAMESPACE", - "displayName": "Namespace", - "description": "The OpenShift Namespace where the ImageStream resides.", - "value": "openshift" - }, - { - "name": "DATABASE_SERVICE_NAME", - "displayName": "Database Service Name", - "description": "The name of the OpenShift Service exposed for the database.", - "value": "mongodb", - "required": true - }, - { - "name": "MONGODB_USER", - "displayName": "MongoDB Connection Username", - "description": "Username for MongoDB user that will be used for accessing the database.", - "generate": "expression", - "from": "user[A-Z0-9]{3}", - "required": true - }, - { - "name": "MONGODB_PASSWORD", - "displayName": "MongoDB Connection Password", - "description": "Password for the MongoDB connection user.", - "generate": "expression", - "from": "[a-zA-Z0-9]{16}", - "required": true - }, - { - "name": "MONGODB_DATABASE", - "displayName": "MongoDB Database Name", - "description": "Name of the MongoDB database accessed.", - "value": "sampledb", - "required": true - }, - { - "name": "MONGODB_ADMIN_PASSWORD", - "displayName": "MongoDB Admin Password", - "description": "Password for the database admin user.", - "generate": "expression", - "from": "[a-zA-Z0-9]{16}", - "required": true - }, - { - "name": "VOLUME_CAPACITY", - "displayName": "Volume Capacity", - "description": "Volume space available for data, e.g. 
512Mi, 2Gi.", - "value": "1Gi", - "required": true - }, - { - "name": "STORAGE_CLASS_NAME", - "displayName": "Storage Class", - "description": "Storage Class for Persistent Volume", - "value": "openebs-standard", - "required": true - }, - { - "name": "MONGODB_VERSION", - "displayName": "Version of MongoDB Image", - "description": "Version of MongoDB image to be used (2.4, 2.6, 3.2 or latest).", - "value": "3.2", - "required": true - } - ] -} diff --git a/k8s/openshift/examples/v3.6/db-templates/openebs-redis-persistent-template.json b/k8s/openshift/examples/v3.6/db-templates/openebs-redis-persistent-template.json deleted file mode 100644 index a936e9a622..0000000000 --- a/k8s/openshift/examples/v3.6/db-templates/openebs-redis-persistent-template.json +++ /dev/null @@ -1,250 +0,0 @@ -{ - "kind": "Template", - "apiVersion": "v1", - "metadata": { - "name": "openebs-redis-persistent", - "annotations": { - "openshift.io/display-name": "OpenEBS Redis (Persistent)", - "description": "Redis in-memory data structure store, with persistent storage from OpenEBS. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/redis-container/blob/master/3.2.\n\nNOTE: You must have persistent volumes available in your cluster to use this template.", - "iconClass": "icon-redis", - "tags": "database,redis", - "template.openshift.io/long-description": "This template provides a standalone Redis server. The data is stored on persistent storage provided by OpenEBS.", - "template.openshift.io/provider-display-name": "Red Hat, Inc.", - "template.openshift.io/documentation-url": "https://github.com/sclorg/redis-container/tree/master/3.2", - "template.openshift.io/support-url": "https://access.redhat.com" - } - }, - "message": "The following service(s) have been created in your project: ${DATABASE_SERVICE_NAME}.\n\n Password: ${REDIS_PASSWORD}\n Connection URL: redis://${DATABASE_SERVICE_NAME}:6379/\n\nFor more information about using this template, including OpenShift considerations, see https://github.com/sclorg/redis-container/blob/master/3.2.", - "labels": { - "template": "redis-persistent-template" - }, - "objects": [ - { - "kind": "Secret", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "template.openshift.io/expose-password": "{.data['database-password']}" - } - }, - "stringData" : { - "database-password" : "${REDIS_PASSWORD}" - } - }, - { - "kind": "Service", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "template.openshift.io/expose-uri": "redis://{.spec.clusterIP}:{.spec.ports[?(.name==\"redis\")].port}" - } - }, - "spec": { - "ports": [ - { - "name": "redis", - "protocol": "TCP", - "port": 6379, - "targetPort": 6379, - "nodePort": 0 - } - ], - "selector": { - "name": "${DATABASE_SERVICE_NAME}" - }, - "type": "ClusterIP", - "sessionAffinity": "None" - }, - "status": { - "loadBalancer": {} - } - }, - { - "kind": "PersistentVolumeClaim", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "volume.beta.kubernetes.io/storage-class": "${STORAGE_CLASS_NAME}" - } - }, - "spec": { - "accessModes": [ - "ReadWriteOnce" - ], - "resources": { - "requests": { - "storage": "${VOLUME_CAPACITY}" - } - } - } - }, - { - "kind": "DeploymentConfig", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "template.alpha.openshift.io/wait-for-ready": "true" - } - }, - "spec": { - 
"strategy": { - "type": "Recreate" - }, - "triggers": [ - { - "type": "ImageChange", - "imageChangeParams": { - "automatic": true, - "containerNames": [ - "redis" - ], - "from": { - "kind": "ImageStreamTag", - "name": "redis:${REDIS_VERSION}", - "namespace": "${NAMESPACE}" - }, - "lastTriggeredImage": "" - } - }, - { - "type": "ConfigChange" - } - ], - "replicas": 1, - "selector": { - "name": "${DATABASE_SERVICE_NAME}" - }, - "template": { - "metadata": { - "labels": { - "name": "${DATABASE_SERVICE_NAME}" - } - }, - "spec": { - "containers": [ - { - "name": "redis", - "image": " ", - "ports": [ - { - "containerPort": 6379, - "protocol": "TCP" - } - ], - "readinessProbe": { - "timeoutSeconds": 1, - "initialDelaySeconds": 5, - "exec": { - "command": [ "/bin/sh", "-i", "-c", "test \"$(redis-cli -h 127.0.0.1 -a $REDIS_PASSWORD ping)\" == \"PONG\""] - } - }, - "livenessProbe": { - "timeoutSeconds": 1, - "initialDelaySeconds": 30, - "tcpSocket": { - "port": 6379 - } - }, - "env": [ - { - "name": "REDIS_PASSWORD", - "valueFrom": { - "secretKeyRef" : { - "name" : "${DATABASE_SERVICE_NAME}", - "key" : "database-password" - } - } - } - ], - "resources": { - "limits": { - "memory": "${MEMORY_LIMIT}" - } - }, - "volumeMounts": [ - { - "name": "${DATABASE_SERVICE_NAME}-data", - "mountPath": "/var/lib/redis/data" - } - ], - "terminationMessagePath": "/dev/termination-log", - "imagePullPolicy": "IfNotPresent", - "capabilities": {}, - "securityContext": { - "capabilities": {}, - "privileged": false - } - } - ], - "volumes": [ - { - "name": "${DATABASE_SERVICE_NAME}-data", - "persistentVolumeClaim": { - "claimName": "${DATABASE_SERVICE_NAME}" - } - } - ], - "restartPolicy": "Always", - "dnsPolicy": "ClusterFirst" - } - } - }, - "status": {} - } - ], - "parameters": [ - { - "name": "MEMORY_LIMIT", - "displayName": "Memory Limit", - "description": "Maximum amount of memory the container can use.", - "value": "512Mi", - "required": true - }, - { - "name": "NAMESPACE", - "displayName": "Namespace", - "description": "The OpenShift Namespace where the ImageStream resides.", - "value": "openshift" - }, - { - "name": "DATABASE_SERVICE_NAME", - "displayName": "Database Service Name", - "description": "The name of the OpenShift Service exposed for the database.", - "value": "redis", - "required": true - }, - { - "name": "REDIS_PASSWORD", - "displayName": "Redis Connection Password", - "description": "Password for the Redis connection user.", - "generate": "expression", - "from": "[a-zA-Z0-9]{16}", - "required": true - }, - { - "name": "VOLUME_CAPACITY", - "displayName": "Volume Capacity", - "description": "Volume space available for data, e.g. 512Mi, 2Gi.", - "value": "1Gi", - "required": true - }, - { - "name": "STORAGE_CLASS_NAME", - "displayName": "Storage Class", - "description": "Storage Class for Persistent Volume", - "value": "openebs-standard", - "required": true - }, - { - "name": "REDIS_VERSION", - "displayName": "Version of Redis Image", - "description": "Version of Redis image to be used (3.2 or latest).", - "value": "3.2", - "required": true - } - ] -} diff --git a/k8s/openshift/examples/v3.7/db-templates/README.md b/k8s/openshift/examples/v3.7/db-templates/README.md deleted file mode 100644 index 87438cf14b..0000000000 --- a/k8s/openshift/examples/v3.7/db-templates/README.md +++ /dev/null @@ -1,53 +0,0 @@ -OpenShift 3 Database Examples -============================= - -This directory contains example JSON templates to deploy databases in OpenShift -on OpenEBS volumes. 
They can be used to immediately instantiate a database and -expose it as a service in the current project, or to add a template that can be -later used from the Web Console or the CLI. - -The examples can also be tweaked to create new templates. - - -## Usage - -### Instantiating a new database service - -Use these instructions if you want to quickly deploy a new database service in -your current project. Instantiate a new database service with this command: - - $ oc new-app /path/to/template.json - -Replace `/path/to/template.json` with an appropriate path, that can be either a -local path or an URL. Example: - - $ oc new-app https://raw.githubusercontent.com/openebs/openebs/master/k8s/openshift/examples/db-templates/openebs-mongodb-persistent-template.json - -The parameters listed in the output above can be tweaked by specifying values in -the command line with the `-p` option: - - $ oc new-app examples/db-templates/openebs-mongodb-persistent-template.json -p DATABASE_SERVICE_NAME=mydb -p MONGODB_USER=default -p STORAGE_CLASS_NAME=openebs-default - -### Deleting a new database service - -Use these instructions when you need to completely delete the app and persistent -volumes in your current project. Separately delete your database service and -persistentvolumeclaim using the commands below: - - $ oc delete all -l app=openebs-mongodb-persistent - -Use "oc get pvc" command to find your persistentvolumeclaim name: - - $ oc get pvc - -Delete the pvc using the correct name from the output of above command: - - $ oc delete pvc mongodb - $ oc delete secret mongodb - -## More information - -The usage of each supported database image is further documented in the links -below: - -- [MongoDB](https://docs.openshift.org/latest/using_images/db_images/mongodb.html) diff --git a/k8s/openshift/examples/v3.7/db-templates/openebs-mongodb-persistent-template.json b/k8s/openshift/examples/v3.7/db-templates/openebs-mongodb-persistent-template.json deleted file mode 100644 index cefcf3c129..0000000000 --- a/k8s/openshift/examples/v3.7/db-templates/openebs-mongodb-persistent-template.json +++ /dev/null @@ -1,306 +0,0 @@ -{ - "kind": "Template", - "apiVersion": "v1", - "metadata": { - "name": "openebs-mongodb-persistent", - "annotations": { - "openshift.io/display-name": "OpenEBS MongoDB (Persistent)", - "description": "MongoDB database service, with persistent storage from OpenEBS. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/mongodb-container/blob/master/3.2/README.md.\n\nNOTE: Scaling to more than one replica is not supported. You must have persistent volumes available in your cluster to use this template.", - "iconClass": "icon-mongodb", - "tags": "database,mongodb", - "openshift.io/long-description": "This template provides a standalone MongoDB server with a database created. The database is stored on persistent storage provided by OpenEBS. 
The database name, username, and password are chosen via parameters when provisioning this service.", - "openshift.io/provider-display-name": "Red Hat, Inc.", - "openshift.io/documentation-url": "https://docs.openshift.org/latest/using_images/db_images/mongodb.html", - "openshift.io/support-url": "https://access.redhat.com" - } - }, - "message": "The following service(s) have been created in your project: ${DATABASE_SERVICE_NAME}.\n\n Username: ${MONGODB_USER}\n Password: ${MONGODB_PASSWORD}\n Database Name: ${MONGODB_DATABASE}\n Connection URL: mongodb://${MONGODB_USER}:${MONGODB_PASSWORD}@${DATABASE_SERVICE_NAME}/${MONGODB_DATABASE}\n\nFor more information about using this template, including OpenShift considerations, see https://github.com/sclorg/mongodb-container/blob/master/3.2/README.md.", - "labels": { - "template": "mongodb-persistent-template" - }, - "objects": [ - { - "kind": "Secret", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "template.openshift.io/expose-username": "{.data['database-user']}", - "template.openshift.io/expose-password": "{.data['database-password']}", - "template.openshift.io/expose-admin_password": "{.data['database-admin-password']}", - "template.openshift.io/expose-database_name": "{.data['database-name']}" - } - }, - "stringData" : { - "database-user" : "${MONGODB_USER}", - "database-password" : "${MONGODB_PASSWORD}", - "database-admin-password" : "${MONGODB_ADMIN_PASSWORD}", - "database-name" : "${MONGODB_DATABASE}" - } - }, - { - "kind": "Service", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "template.openshift.io/expose-uri": "mongodb://{.spec.clusterIP}:{.spec.ports[?(.name==\"mongo\")].port}" - } - }, - "spec": { - "ports": [ - { - "name": "mongo", - "protocol": "TCP", - "port": 27017, - "targetPort": 27017, - "nodePort": 0 - } - ], - "selector": { - "name": "${DATABASE_SERVICE_NAME}" - }, - "type": "ClusterIP", - "sessionAffinity": "None" - }, - "status": { - "loadBalancer": {} - } - }, - { - "kind": "PersistentVolumeClaim", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "volume.beta.kubernetes.io/storage-class": "${STORAGE_CLASS_NAME}" - } - }, - "spec": { - "accessModes": [ - "ReadWriteOnce" - ], - "resources": { - "requests": { - "storage": "${VOLUME_CAPACITY}" - } - } - } - }, - { - "kind": "DeploymentConfig", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "template.alpha.openshift.io/wait-for-ready": "true" - } - }, - "spec": { - "strategy": { - "type": "Recreate" - }, - "triggers": [ - { - "type": "ImageChange", - "imageChangeParams": { - "automatic": true, - "containerNames": [ - "mongodb" - ], - "from": { - "kind": "ImageStreamTag", - "name": "mongodb:${MONGODB_VERSION}", - "namespace": "${NAMESPACE}" - }, - "lastTriggeredImage": "" - } - }, - { - "type": "ConfigChange" - } - ], - "replicas": 1, - "selector": { - "name": "${DATABASE_SERVICE_NAME}" - }, - "template": { - "metadata": { - "labels": { - "name": "${DATABASE_SERVICE_NAME}" - } - }, - "spec": { - "containers": [ - { - "name": "mongodb", - "image": " ", - "ports": [ - { - "containerPort": 27017, - "protocol": "TCP" - } - ], - "readinessProbe": { - "timeoutSeconds": 1, - "initialDelaySeconds": 3, - "exec": { - "command": [ "/bin/sh", "-i", "-c", "mongo 127.0.0.1:27017/$MONGODB_DATABASE -u $MONGODB_USER -p $MONGODB_PASSWORD --eval=\"quit()\""] - } - }, - "livenessProbe": { - 
"timeoutSeconds": 1, - "initialDelaySeconds": 30, - "tcpSocket": { - "port": 27017 - } - }, - "env": [ - { - "name": "MONGODB_USER", - "valueFrom": { - "secretKeyRef" : { - "name" : "${DATABASE_SERVICE_NAME}", - "key" : "database-user" - } - } - }, - { - "name": "MONGODB_PASSWORD", - "valueFrom": { - "secretKeyRef" : { - "name" : "${DATABASE_SERVICE_NAME}", - "key" : "database-password" - } - } - }, - { - "name": "MONGODB_ADMIN_PASSWORD", - "valueFrom": { - "secretKeyRef" : { - "name" : "${DATABASE_SERVICE_NAME}", - "key" : "database-admin-password" - } - } - }, - { - "name": "MONGODB_DATABASE", - "valueFrom": { - "secretKeyRef" : { - "name" : "${DATABASE_SERVICE_NAME}", - "key" : "database-name" - } - } - } - ], - "resources": { - "limits": { - "memory": "${MEMORY_LIMIT}" - } - }, - "volumeMounts": [ - { - "name": "${DATABASE_SERVICE_NAME}-data", - "mountPath": "/var/lib/mongodb/data" - } - ], - "terminationMessagePath": "/dev/termination-log", - "imagePullPolicy": "IfNotPresent", - "capabilities": {}, - "securityContext": { - "capabilities": {}, - "privileged": false - } - } - ], - "volumes": [ - { - "name": "${DATABASE_SERVICE_NAME}-data", - "persistentVolumeClaim": { - "claimName": "${DATABASE_SERVICE_NAME}" - } - } - ], - "restartPolicy": "Always", - "dnsPolicy": "ClusterFirst" - } - } - }, - "status": {} - } - ], - "parameters": [ - { - "name": "MEMORY_LIMIT", - "displayName": "Memory Limit", - "description": "Maximum amount of memory the container can use.", - "value": "512Mi", - "required": true - }, - { - "name": "NAMESPACE", - "displayName": "Namespace", - "description": "The OpenShift Namespace where the ImageStream resides.", - "value": "openshift" - }, - { - "name": "DATABASE_SERVICE_NAME", - "displayName": "Database Service Name", - "description": "The name of the OpenShift Service exposed for the database.", - "value": "mongodb", - "required": true - }, - { - "name": "MONGODB_USER", - "displayName": "MongoDB Connection Username", - "description": "Username for MongoDB user that will be used for accessing the database.", - "generate": "expression", - "from": "user[A-Z0-9]{3}", - "required": true - }, - { - "name": "MONGODB_PASSWORD", - "displayName": "MongoDB Connection Password", - "description": "Password for the MongoDB connection user.", - "generate": "expression", - "from": "[a-zA-Z0-9]{16}", - "required": true - }, - { - "name": "MONGODB_DATABASE", - "displayName": "MongoDB Database Name", - "description": "Name of the MongoDB database accessed.", - "value": "sampledb", - "required": true - }, - { - "name": "MONGODB_ADMIN_PASSWORD", - "displayName": "MongoDB Admin Password", - "description": "Password for the database admin user.", - "generate": "expression", - "from": "[a-zA-Z0-9]{16}", - "required": true - }, - { - "name": "VOLUME_CAPACITY", - "displayName": "Volume Capacity", - "description": "Volume space available for data, e.g. 
512Mi, 2Gi.", - "value": "1Gi", - "required": true - }, - { - "name": "STORAGE_CLASS_NAME", - "displayName": "Storage Class", - "description": "Storage Class for Persistent Volume", - "value": "openebs-standard", - "required": true - }, - { - "name": "MONGODB_VERSION", - "displayName": "Version of MongoDB Image", - "description": "Version of MongoDB image to be used (2.4, 2.6, 3.2 or latest).", - "value": "3.2", - "required": true - } - ] -} diff --git a/k8s/openshift/examples/v3.7/db-templates/openebs-redis-persistent-template.json b/k8s/openshift/examples/v3.7/db-templates/openebs-redis-persistent-template.json deleted file mode 100644 index e659bc07f2..0000000000 --- a/k8s/openshift/examples/v3.7/db-templates/openebs-redis-persistent-template.json +++ /dev/null @@ -1,250 +0,0 @@ -{ - "kind": "Template", - "apiVersion": "v1", - "metadata": { - "name": "openebs-redis-persistent", - "annotations": { - "openshift.io/display-name": "OpenEBS Redis (Persistent)", - "description": "Redis in-memory data structure store, with persistent storage from OpenEBS. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/redis-container/blob/master/3.2.\n\nNOTE: You must have persistent volumes available in your cluster to use this template.", - "iconClass": "icon-redis", - "tags": "database,redis", - "openshift.io/long-description": "This template provides a standalone Redis server. The data is stored on persistent storage provided by OpenEBS.", - "openshift.io/provider-display-name": "Red Hat, Inc.", - "openshift.io/documentation-url": "https://github.com/sclorg/redis-container/tree/master/3.2", - "openshift.io/support-url": "https://access.redhat.com" - } - }, - "message": "The following service(s) have been created in your project: ${DATABASE_SERVICE_NAME}.\n\n Password: ${REDIS_PASSWORD}\n Connection URL: redis://${DATABASE_SERVICE_NAME}:6379/\n\nFor more information about using this template, including OpenShift considerations, see https://github.com/sclorg/redis-container/blob/master/3.2.", - "labels": { - "template": "redis-persistent-template" - }, - "objects": [ - { - "kind": "Secret", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "template.openshift.io/expose-password": "{.data['database-password']}" - } - }, - "stringData" : { - "database-password" : "${REDIS_PASSWORD}" - } - }, - { - "kind": "Service", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "template.openshift.io/expose-uri": "redis://{.spec.clusterIP}:{.spec.ports[?(.name==\"redis\")].port}" - } - }, - "spec": { - "ports": [ - { - "name": "redis", - "protocol": "TCP", - "port": 6379, - "targetPort": 6379, - "nodePort": 0 - } - ], - "selector": { - "name": "${DATABASE_SERVICE_NAME}" - }, - "type": "ClusterIP", - "sessionAffinity": "None" - }, - "status": { - "loadBalancer": {} - } - }, - { - "kind": "PersistentVolumeClaim", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "volume.beta.kubernetes.io/storage-class": "${STORAGE_CLASS_NAME}" - } - }, - "spec": { - "accessModes": [ - "ReadWriteOnce" - ], - "resources": { - "requests": { - "storage": "${VOLUME_CAPACITY}" - } - } - } - }, - { - "kind": "DeploymentConfig", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "template.alpha.openshift.io/wait-for-ready": "true" - } - }, - "spec": { - "strategy": { - "type": "Recreate" - }, - 
"triggers": [ - { - "type": "ImageChange", - "imageChangeParams": { - "automatic": true, - "containerNames": [ - "redis" - ], - "from": { - "kind": "ImageStreamTag", - "name": "redis:${REDIS_VERSION}", - "namespace": "${NAMESPACE}" - }, - "lastTriggeredImage": "" - } - }, - { - "type": "ConfigChange" - } - ], - "replicas": 1, - "selector": { - "name": "${DATABASE_SERVICE_NAME}" - }, - "template": { - "metadata": { - "labels": { - "name": "${DATABASE_SERVICE_NAME}" - } - }, - "spec": { - "containers": [ - { - "name": "redis", - "image": " ", - "ports": [ - { - "containerPort": 6379, - "protocol": "TCP" - } - ], - "readinessProbe": { - "timeoutSeconds": 1, - "initialDelaySeconds": 5, - "exec": { - "command": [ "/bin/sh", "-i", "-c", "test \"$(redis-cli -h 127.0.0.1 -a $REDIS_PASSWORD ping)\" == \"PONG\""] - } - }, - "livenessProbe": { - "timeoutSeconds": 1, - "initialDelaySeconds": 30, - "tcpSocket": { - "port": 6379 - } - }, - "env": [ - { - "name": "REDIS_PASSWORD", - "valueFrom": { - "secretKeyRef" : { - "name" : "${DATABASE_SERVICE_NAME}", - "key" : "database-password" - } - } - } - ], - "resources": { - "limits": { - "memory": "${MEMORY_LIMIT}" - } - }, - "volumeMounts": [ - { - "name": "${DATABASE_SERVICE_NAME}-data", - "mountPath": "/var/lib/redis/data" - } - ], - "terminationMessagePath": "/dev/termination-log", - "imagePullPolicy": "IfNotPresent", - "capabilities": {}, - "securityContext": { - "capabilities": {}, - "privileged": false - } - } - ], - "volumes": [ - { - "name": "${DATABASE_SERVICE_NAME}-data", - "persistentVolumeClaim": { - "claimName": "${DATABASE_SERVICE_NAME}" - } - } - ], - "restartPolicy": "Always", - "dnsPolicy": "ClusterFirst" - } - } - }, - "status": {} - } - ], - "parameters": [ - { - "name": "MEMORY_LIMIT", - "displayName": "Memory Limit", - "description": "Maximum amount of memory the container can use.", - "value": "512Mi", - "required": true - }, - { - "name": "NAMESPACE", - "displayName": "Namespace", - "description": "The OpenShift Namespace where the ImageStream resides.", - "value": "openshift" - }, - { - "name": "DATABASE_SERVICE_NAME", - "displayName": "Database Service Name", - "description": "The name of the OpenShift Service exposed for the database.", - "value": "redis", - "required": true - }, - { - "name": "REDIS_PASSWORD", - "displayName": "Redis Connection Password", - "description": "Password for the Redis connection user.", - "generate": "expression", - "from": "[a-zA-Z0-9]{16}", - "required": true - }, - { - "name": "VOLUME_CAPACITY", - "displayName": "Volume Capacity", - "description": "Volume space available for data, e.g. 512Mi, 2Gi.", - "value": "1Gi", - "required": true - }, - { - "name": "STORAGE_CLASS_NAME", - "displayName": "Storage Class", - "description": "Storage Class for Persistent Volume", - "value": "openebs-standard", - "required": true - }, - { - "name": "REDIS_VERSION", - "displayName": "Version of Redis Image", - "description": "Version of Redis image to be used (3.2 or latest).", - "value": "3.2", - "required": true - } - ] -} diff --git a/k8s/openshift/examples/v3.8/db-templates/README.md b/k8s/openshift/examples/v3.8/db-templates/README.md deleted file mode 100644 index 87438cf14b..0000000000 --- a/k8s/openshift/examples/v3.8/db-templates/README.md +++ /dev/null @@ -1,53 +0,0 @@ -OpenShift 3 Database Examples -============================= - -This directory contains example JSON templates to deploy databases in OpenShift -on OpenEBS volumes. 
They can be used to immediately instantiate a database and -expose it as a service in the current project, or to add a template that can be -later used from the Web Console or the CLI. - -The examples can also be tweaked to create new templates. - - -## Usage - -### Instantiating a new database service - -Use these instructions if you want to quickly deploy a new database service in -your current project. Instantiate a new database service with this command: - - $ oc new-app /path/to/template.json - -Replace `/path/to/template.json` with an appropriate path, that can be either a -local path or an URL. Example: - - $ oc new-app https://raw.githubusercontent.com/openebs/openebs/master/k8s/openshift/examples/db-templates/openebs-mongodb-persistent-template.json - -The parameters listed in the output above can be tweaked by specifying values in -the command line with the `-p` option: - - $ oc new-app examples/db-templates/openebs-mongodb-persistent-template.json -p DATABASE_SERVICE_NAME=mydb -p MONGODB_USER=default -p STORAGE_CLASS_NAME=openebs-default - -### Deleting a new database service - -Use these instructions when you need to completely delete the app and persistent -volumes in your current project. Separately delete your database service and -persistentvolumeclaim using the commands below: - - $ oc delete all -l app=openebs-mongodb-persistent - -Use "oc get pvc" command to find your persistentvolumeclaim name: - - $ oc get pvc - -Delete the pvc using the correct name from the output of above command: - - $ oc delete pvc mongodb - $ oc delete secret mongodb - -## More information - -The usage of each supported database image is further documented in the links -below: - -- [MongoDB](https://docs.openshift.org/latest/using_images/db_images/mongodb.html) diff --git a/k8s/openshift/examples/v3.8/db-templates/openebs-mongodb-persistent-template.json b/k8s/openshift/examples/v3.8/db-templates/openebs-mongodb-persistent-template.json deleted file mode 100644 index cefcf3c129..0000000000 --- a/k8s/openshift/examples/v3.8/db-templates/openebs-mongodb-persistent-template.json +++ /dev/null @@ -1,306 +0,0 @@ -{ - "kind": "Template", - "apiVersion": "v1", - "metadata": { - "name": "openebs-mongodb-persistent", - "annotations": { - "openshift.io/display-name": "OpenEBS MongoDB (Persistent)", - "description": "MongoDB database service, with persistent storage from OpenEBS. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/mongodb-container/blob/master/3.2/README.md.\n\nNOTE: Scaling to more than one replica is not supported. You must have persistent volumes available in your cluster to use this template.", - "iconClass": "icon-mongodb", - "tags": "database,mongodb", - "openshift.io/long-description": "This template provides a standalone MongoDB server with a database created. The database is stored on persistent storage provided by OpenEBS. 
The database name, username, and password are chosen via parameters when provisioning this service.", - "openshift.io/provider-display-name": "Red Hat, Inc.", - "openshift.io/documentation-url": "https://docs.openshift.org/latest/using_images/db_images/mongodb.html", - "openshift.io/support-url": "https://access.redhat.com" - } - }, - "message": "The following service(s) have been created in your project: ${DATABASE_SERVICE_NAME}.\n\n Username: ${MONGODB_USER}\n Password: ${MONGODB_PASSWORD}\n Database Name: ${MONGODB_DATABASE}\n Connection URL: mongodb://${MONGODB_USER}:${MONGODB_PASSWORD}@${DATABASE_SERVICE_NAME}/${MONGODB_DATABASE}\n\nFor more information about using this template, including OpenShift considerations, see https://github.com/sclorg/mongodb-container/blob/master/3.2/README.md.", - "labels": { - "template": "mongodb-persistent-template" - }, - "objects": [ - { - "kind": "Secret", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "template.openshift.io/expose-username": "{.data['database-user']}", - "template.openshift.io/expose-password": "{.data['database-password']}", - "template.openshift.io/expose-admin_password": "{.data['database-admin-password']}", - "template.openshift.io/expose-database_name": "{.data['database-name']}" - } - }, - "stringData" : { - "database-user" : "${MONGODB_USER}", - "database-password" : "${MONGODB_PASSWORD}", - "database-admin-password" : "${MONGODB_ADMIN_PASSWORD}", - "database-name" : "${MONGODB_DATABASE}" - } - }, - { - "kind": "Service", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "template.openshift.io/expose-uri": "mongodb://{.spec.clusterIP}:{.spec.ports[?(.name==\"mongo\")].port}" - } - }, - "spec": { - "ports": [ - { - "name": "mongo", - "protocol": "TCP", - "port": 27017, - "targetPort": 27017, - "nodePort": 0 - } - ], - "selector": { - "name": "${DATABASE_SERVICE_NAME}" - }, - "type": "ClusterIP", - "sessionAffinity": "None" - }, - "status": { - "loadBalancer": {} - } - }, - { - "kind": "PersistentVolumeClaim", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "volume.beta.kubernetes.io/storage-class": "${STORAGE_CLASS_NAME}" - } - }, - "spec": { - "accessModes": [ - "ReadWriteOnce" - ], - "resources": { - "requests": { - "storage": "${VOLUME_CAPACITY}" - } - } - } - }, - { - "kind": "DeploymentConfig", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "template.alpha.openshift.io/wait-for-ready": "true" - } - }, - "spec": { - "strategy": { - "type": "Recreate" - }, - "triggers": [ - { - "type": "ImageChange", - "imageChangeParams": { - "automatic": true, - "containerNames": [ - "mongodb" - ], - "from": { - "kind": "ImageStreamTag", - "name": "mongodb:${MONGODB_VERSION}", - "namespace": "${NAMESPACE}" - }, - "lastTriggeredImage": "" - } - }, - { - "type": "ConfigChange" - } - ], - "replicas": 1, - "selector": { - "name": "${DATABASE_SERVICE_NAME}" - }, - "template": { - "metadata": { - "labels": { - "name": "${DATABASE_SERVICE_NAME}" - } - }, - "spec": { - "containers": [ - { - "name": "mongodb", - "image": " ", - "ports": [ - { - "containerPort": 27017, - "protocol": "TCP" - } - ], - "readinessProbe": { - "timeoutSeconds": 1, - "initialDelaySeconds": 3, - "exec": { - "command": [ "/bin/sh", "-i", "-c", "mongo 127.0.0.1:27017/$MONGODB_DATABASE -u $MONGODB_USER -p $MONGODB_PASSWORD --eval=\"quit()\""] - } - }, - "livenessProbe": { - 
"timeoutSeconds": 1, - "initialDelaySeconds": 30, - "tcpSocket": { - "port": 27017 - } - }, - "env": [ - { - "name": "MONGODB_USER", - "valueFrom": { - "secretKeyRef" : { - "name" : "${DATABASE_SERVICE_NAME}", - "key" : "database-user" - } - } - }, - { - "name": "MONGODB_PASSWORD", - "valueFrom": { - "secretKeyRef" : { - "name" : "${DATABASE_SERVICE_NAME}", - "key" : "database-password" - } - } - }, - { - "name": "MONGODB_ADMIN_PASSWORD", - "valueFrom": { - "secretKeyRef" : { - "name" : "${DATABASE_SERVICE_NAME}", - "key" : "database-admin-password" - } - } - }, - { - "name": "MONGODB_DATABASE", - "valueFrom": { - "secretKeyRef" : { - "name" : "${DATABASE_SERVICE_NAME}", - "key" : "database-name" - } - } - } - ], - "resources": { - "limits": { - "memory": "${MEMORY_LIMIT}" - } - }, - "volumeMounts": [ - { - "name": "${DATABASE_SERVICE_NAME}-data", - "mountPath": "/var/lib/mongodb/data" - } - ], - "terminationMessagePath": "/dev/termination-log", - "imagePullPolicy": "IfNotPresent", - "capabilities": {}, - "securityContext": { - "capabilities": {}, - "privileged": false - } - } - ], - "volumes": [ - { - "name": "${DATABASE_SERVICE_NAME}-data", - "persistentVolumeClaim": { - "claimName": "${DATABASE_SERVICE_NAME}" - } - } - ], - "restartPolicy": "Always", - "dnsPolicy": "ClusterFirst" - } - } - }, - "status": {} - } - ], - "parameters": [ - { - "name": "MEMORY_LIMIT", - "displayName": "Memory Limit", - "description": "Maximum amount of memory the container can use.", - "value": "512Mi", - "required": true - }, - { - "name": "NAMESPACE", - "displayName": "Namespace", - "description": "The OpenShift Namespace where the ImageStream resides.", - "value": "openshift" - }, - { - "name": "DATABASE_SERVICE_NAME", - "displayName": "Database Service Name", - "description": "The name of the OpenShift Service exposed for the database.", - "value": "mongodb", - "required": true - }, - { - "name": "MONGODB_USER", - "displayName": "MongoDB Connection Username", - "description": "Username for MongoDB user that will be used for accessing the database.", - "generate": "expression", - "from": "user[A-Z0-9]{3}", - "required": true - }, - { - "name": "MONGODB_PASSWORD", - "displayName": "MongoDB Connection Password", - "description": "Password for the MongoDB connection user.", - "generate": "expression", - "from": "[a-zA-Z0-9]{16}", - "required": true - }, - { - "name": "MONGODB_DATABASE", - "displayName": "MongoDB Database Name", - "description": "Name of the MongoDB database accessed.", - "value": "sampledb", - "required": true - }, - { - "name": "MONGODB_ADMIN_PASSWORD", - "displayName": "MongoDB Admin Password", - "description": "Password for the database admin user.", - "generate": "expression", - "from": "[a-zA-Z0-9]{16}", - "required": true - }, - { - "name": "VOLUME_CAPACITY", - "displayName": "Volume Capacity", - "description": "Volume space available for data, e.g. 
512Mi, 2Gi.", - "value": "1Gi", - "required": true - }, - { - "name": "STORAGE_CLASS_NAME", - "displayName": "Storage Class", - "description": "Storage Class for Persistent Volume", - "value": "openebs-standard", - "required": true - }, - { - "name": "MONGODB_VERSION", - "displayName": "Version of MongoDB Image", - "description": "Version of MongoDB image to be used (2.4, 2.6, 3.2 or latest).", - "value": "3.2", - "required": true - } - ] -} diff --git a/k8s/openshift/examples/v3.8/db-templates/openebs-redis-persistent-template.json b/k8s/openshift/examples/v3.8/db-templates/openebs-redis-persistent-template.json deleted file mode 100644 index e659bc07f2..0000000000 --- a/k8s/openshift/examples/v3.8/db-templates/openebs-redis-persistent-template.json +++ /dev/null @@ -1,250 +0,0 @@ -{ - "kind": "Template", - "apiVersion": "v1", - "metadata": { - "name": "openebs-redis-persistent", - "annotations": { - "openshift.io/display-name": "OpenEBS Redis (Persistent)", - "description": "Redis in-memory data structure store, with persistent storage from OpenEBS. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/redis-container/blob/master/3.2.\n\nNOTE: You must have persistent volumes available in your cluster to use this template.", - "iconClass": "icon-redis", - "tags": "database,redis", - "openshift.io/long-description": "This template provides a standalone Redis server. The data is stored on persistent storage provided by OpenEBS.", - "openshift.io/provider-display-name": "Red Hat, Inc.", - "openshift.io/documentation-url": "https://github.com/sclorg/redis-container/tree/master/3.2", - "openshift.io/support-url": "https://access.redhat.com" - } - }, - "message": "The following service(s) have been created in your project: ${DATABASE_SERVICE_NAME}.\n\n Password: ${REDIS_PASSWORD}\n Connection URL: redis://${DATABASE_SERVICE_NAME}:6379/\n\nFor more information about using this template, including OpenShift considerations, see https://github.com/sclorg/redis-container/blob/master/3.2.", - "labels": { - "template": "redis-persistent-template" - }, - "objects": [ - { - "kind": "Secret", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "template.openshift.io/expose-password": "{.data['database-password']}" - } - }, - "stringData" : { - "database-password" : "${REDIS_PASSWORD}" - } - }, - { - "kind": "Service", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "template.openshift.io/expose-uri": "redis://{.spec.clusterIP}:{.spec.ports[?(.name==\"redis\")].port}" - } - }, - "spec": { - "ports": [ - { - "name": "redis", - "protocol": "TCP", - "port": 6379, - "targetPort": 6379, - "nodePort": 0 - } - ], - "selector": { - "name": "${DATABASE_SERVICE_NAME}" - }, - "type": "ClusterIP", - "sessionAffinity": "None" - }, - "status": { - "loadBalancer": {} - } - }, - { - "kind": "PersistentVolumeClaim", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "volume.beta.kubernetes.io/storage-class": "${STORAGE_CLASS_NAME}" - } - }, - "spec": { - "accessModes": [ - "ReadWriteOnce" - ], - "resources": { - "requests": { - "storage": "${VOLUME_CAPACITY}" - } - } - } - }, - { - "kind": "DeploymentConfig", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "template.alpha.openshift.io/wait-for-ready": "true" - } - }, - "spec": { - "strategy": { - "type": "Recreate" - }, - 
"triggers": [ - { - "type": "ImageChange", - "imageChangeParams": { - "automatic": true, - "containerNames": [ - "redis" - ], - "from": { - "kind": "ImageStreamTag", - "name": "redis:${REDIS_VERSION}", - "namespace": "${NAMESPACE}" - }, - "lastTriggeredImage": "" - } - }, - { - "type": "ConfigChange" - } - ], - "replicas": 1, - "selector": { - "name": "${DATABASE_SERVICE_NAME}" - }, - "template": { - "metadata": { - "labels": { - "name": "${DATABASE_SERVICE_NAME}" - } - }, - "spec": { - "containers": [ - { - "name": "redis", - "image": " ", - "ports": [ - { - "containerPort": 6379, - "protocol": "TCP" - } - ], - "readinessProbe": { - "timeoutSeconds": 1, - "initialDelaySeconds": 5, - "exec": { - "command": [ "/bin/sh", "-i", "-c", "test \"$(redis-cli -h 127.0.0.1 -a $REDIS_PASSWORD ping)\" == \"PONG\""] - } - }, - "livenessProbe": { - "timeoutSeconds": 1, - "initialDelaySeconds": 30, - "tcpSocket": { - "port": 6379 - } - }, - "env": [ - { - "name": "REDIS_PASSWORD", - "valueFrom": { - "secretKeyRef" : { - "name" : "${DATABASE_SERVICE_NAME}", - "key" : "database-password" - } - } - } - ], - "resources": { - "limits": { - "memory": "${MEMORY_LIMIT}" - } - }, - "volumeMounts": [ - { - "name": "${DATABASE_SERVICE_NAME}-data", - "mountPath": "/var/lib/redis/data" - } - ], - "terminationMessagePath": "/dev/termination-log", - "imagePullPolicy": "IfNotPresent", - "capabilities": {}, - "securityContext": { - "capabilities": {}, - "privileged": false - } - } - ], - "volumes": [ - { - "name": "${DATABASE_SERVICE_NAME}-data", - "persistentVolumeClaim": { - "claimName": "${DATABASE_SERVICE_NAME}" - } - } - ], - "restartPolicy": "Always", - "dnsPolicy": "ClusterFirst" - } - } - }, - "status": {} - } - ], - "parameters": [ - { - "name": "MEMORY_LIMIT", - "displayName": "Memory Limit", - "description": "Maximum amount of memory the container can use.", - "value": "512Mi", - "required": true - }, - { - "name": "NAMESPACE", - "displayName": "Namespace", - "description": "The OpenShift Namespace where the ImageStream resides.", - "value": "openshift" - }, - { - "name": "DATABASE_SERVICE_NAME", - "displayName": "Database Service Name", - "description": "The name of the OpenShift Service exposed for the database.", - "value": "redis", - "required": true - }, - { - "name": "REDIS_PASSWORD", - "displayName": "Redis Connection Password", - "description": "Password for the Redis connection user.", - "generate": "expression", - "from": "[a-zA-Z0-9]{16}", - "required": true - }, - { - "name": "VOLUME_CAPACITY", - "displayName": "Volume Capacity", - "description": "Volume space available for data, e.g. 512Mi, 2Gi.", - "value": "1Gi", - "required": true - }, - { - "name": "STORAGE_CLASS_NAME", - "displayName": "Storage Class", - "description": "Storage Class for Persistent Volume", - "value": "openebs-standard", - "required": true - }, - { - "name": "REDIS_VERSION", - "displayName": "Version of Redis Image", - "description": "Version of Redis image to be used (3.2 or latest).", - "value": "3.2", - "required": true - } - ] -} diff --git a/k8s/openshift/examples/v3.9/db-templates/README.md b/k8s/openshift/examples/v3.9/db-templates/README.md deleted file mode 100644 index 87438cf14b..0000000000 --- a/k8s/openshift/examples/v3.9/db-templates/README.md +++ /dev/null @@ -1,53 +0,0 @@ -OpenShift 3 Database Examples -============================= - -This directory contains example JSON templates to deploy databases in OpenShift -on OpenEBS volumes. 
They can be used to immediately instantiate a database and -expose it as a service in the current project, or to add a template that can be -later used from the Web Console or the CLI. - -The examples can also be tweaked to create new templates. - - -## Usage - -### Instantiating a new database service - -Use these instructions if you want to quickly deploy a new database service in -your current project. Instantiate a new database service with this command: - - $ oc new-app /path/to/template.json - -Replace `/path/to/template.json` with an appropriate path, that can be either a -local path or an URL. Example: - - $ oc new-app https://raw.githubusercontent.com/openebs/openebs/master/k8s/openshift/examples/db-templates/openebs-mongodb-persistent-template.json - -The parameters listed in the output above can be tweaked by specifying values in -the command line with the `-p` option: - - $ oc new-app examples/db-templates/openebs-mongodb-persistent-template.json -p DATABASE_SERVICE_NAME=mydb -p MONGODB_USER=default -p STORAGE_CLASS_NAME=openebs-default - -### Deleting a new database service - -Use these instructions when you need to completely delete the app and persistent -volumes in your current project. Separately delete your database service and -persistentvolumeclaim using the commands below: - - $ oc delete all -l app=openebs-mongodb-persistent - -Use "oc get pvc" command to find your persistentvolumeclaim name: - - $ oc get pvc - -Delete the pvc using the correct name from the output of above command: - - $ oc delete pvc mongodb - $ oc delete secret mongodb - -## More information - -The usage of each supported database image is further documented in the links -below: - -- [MongoDB](https://docs.openshift.org/latest/using_images/db_images/mongodb.html) diff --git a/k8s/openshift/examples/v3.9/db-templates/openebs-mongodb-persistent-template.json b/k8s/openshift/examples/v3.9/db-templates/openebs-mongodb-persistent-template.json deleted file mode 100644 index cefcf3c129..0000000000 --- a/k8s/openshift/examples/v3.9/db-templates/openebs-mongodb-persistent-template.json +++ /dev/null @@ -1,306 +0,0 @@ -{ - "kind": "Template", - "apiVersion": "v1", - "metadata": { - "name": "openebs-mongodb-persistent", - "annotations": { - "openshift.io/display-name": "OpenEBS MongoDB (Persistent)", - "description": "MongoDB database service, with persistent storage from OpenEBS. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/mongodb-container/blob/master/3.2/README.md.\n\nNOTE: Scaling to more than one replica is not supported. You must have persistent volumes available in your cluster to use this template.", - "iconClass": "icon-mongodb", - "tags": "database,mongodb", - "openshift.io/long-description": "This template provides a standalone MongoDB server with a database created. The database is stored on persistent storage provided by OpenEBS. 
The database name, username, and password are chosen via parameters when provisioning this service.", - "openshift.io/provider-display-name": "Red Hat, Inc.", - "openshift.io/documentation-url": "https://docs.openshift.org/latest/using_images/db_images/mongodb.html", - "openshift.io/support-url": "https://access.redhat.com" - } - }, - "message": "The following service(s) have been created in your project: ${DATABASE_SERVICE_NAME}.\n\n Username: ${MONGODB_USER}\n Password: ${MONGODB_PASSWORD}\n Database Name: ${MONGODB_DATABASE}\n Connection URL: mongodb://${MONGODB_USER}:${MONGODB_PASSWORD}@${DATABASE_SERVICE_NAME}/${MONGODB_DATABASE}\n\nFor more information about using this template, including OpenShift considerations, see https://github.com/sclorg/mongodb-container/blob/master/3.2/README.md.", - "labels": { - "template": "mongodb-persistent-template" - }, - "objects": [ - { - "kind": "Secret", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "template.openshift.io/expose-username": "{.data['database-user']}", - "template.openshift.io/expose-password": "{.data['database-password']}", - "template.openshift.io/expose-admin_password": "{.data['database-admin-password']}", - "template.openshift.io/expose-database_name": "{.data['database-name']}" - } - }, - "stringData" : { - "database-user" : "${MONGODB_USER}", - "database-password" : "${MONGODB_PASSWORD}", - "database-admin-password" : "${MONGODB_ADMIN_PASSWORD}", - "database-name" : "${MONGODB_DATABASE}" - } - }, - { - "kind": "Service", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "template.openshift.io/expose-uri": "mongodb://{.spec.clusterIP}:{.spec.ports[?(.name==\"mongo\")].port}" - } - }, - "spec": { - "ports": [ - { - "name": "mongo", - "protocol": "TCP", - "port": 27017, - "targetPort": 27017, - "nodePort": 0 - } - ], - "selector": { - "name": "${DATABASE_SERVICE_NAME}" - }, - "type": "ClusterIP", - "sessionAffinity": "None" - }, - "status": { - "loadBalancer": {} - } - }, - { - "kind": "PersistentVolumeClaim", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "volume.beta.kubernetes.io/storage-class": "${STORAGE_CLASS_NAME}" - } - }, - "spec": { - "accessModes": [ - "ReadWriteOnce" - ], - "resources": { - "requests": { - "storage": "${VOLUME_CAPACITY}" - } - } - } - }, - { - "kind": "DeploymentConfig", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "template.alpha.openshift.io/wait-for-ready": "true" - } - }, - "spec": { - "strategy": { - "type": "Recreate" - }, - "triggers": [ - { - "type": "ImageChange", - "imageChangeParams": { - "automatic": true, - "containerNames": [ - "mongodb" - ], - "from": { - "kind": "ImageStreamTag", - "name": "mongodb:${MONGODB_VERSION}", - "namespace": "${NAMESPACE}" - }, - "lastTriggeredImage": "" - } - }, - { - "type": "ConfigChange" - } - ], - "replicas": 1, - "selector": { - "name": "${DATABASE_SERVICE_NAME}" - }, - "template": { - "metadata": { - "labels": { - "name": "${DATABASE_SERVICE_NAME}" - } - }, - "spec": { - "containers": [ - { - "name": "mongodb", - "image": " ", - "ports": [ - { - "containerPort": 27017, - "protocol": "TCP" - } - ], - "readinessProbe": { - "timeoutSeconds": 1, - "initialDelaySeconds": 3, - "exec": { - "command": [ "/bin/sh", "-i", "-c", "mongo 127.0.0.1:27017/$MONGODB_DATABASE -u $MONGODB_USER -p $MONGODB_PASSWORD --eval=\"quit()\""] - } - }, - "livenessProbe": { - 
"timeoutSeconds": 1, - "initialDelaySeconds": 30, - "tcpSocket": { - "port": 27017 - } - }, - "env": [ - { - "name": "MONGODB_USER", - "valueFrom": { - "secretKeyRef" : { - "name" : "${DATABASE_SERVICE_NAME}", - "key" : "database-user" - } - } - }, - { - "name": "MONGODB_PASSWORD", - "valueFrom": { - "secretKeyRef" : { - "name" : "${DATABASE_SERVICE_NAME}", - "key" : "database-password" - } - } - }, - { - "name": "MONGODB_ADMIN_PASSWORD", - "valueFrom": { - "secretKeyRef" : { - "name" : "${DATABASE_SERVICE_NAME}", - "key" : "database-admin-password" - } - } - }, - { - "name": "MONGODB_DATABASE", - "valueFrom": { - "secretKeyRef" : { - "name" : "${DATABASE_SERVICE_NAME}", - "key" : "database-name" - } - } - } - ], - "resources": { - "limits": { - "memory": "${MEMORY_LIMIT}" - } - }, - "volumeMounts": [ - { - "name": "${DATABASE_SERVICE_NAME}-data", - "mountPath": "/var/lib/mongodb/data" - } - ], - "terminationMessagePath": "/dev/termination-log", - "imagePullPolicy": "IfNotPresent", - "capabilities": {}, - "securityContext": { - "capabilities": {}, - "privileged": false - } - } - ], - "volumes": [ - { - "name": "${DATABASE_SERVICE_NAME}-data", - "persistentVolumeClaim": { - "claimName": "${DATABASE_SERVICE_NAME}" - } - } - ], - "restartPolicy": "Always", - "dnsPolicy": "ClusterFirst" - } - } - }, - "status": {} - } - ], - "parameters": [ - { - "name": "MEMORY_LIMIT", - "displayName": "Memory Limit", - "description": "Maximum amount of memory the container can use.", - "value": "512Mi", - "required": true - }, - { - "name": "NAMESPACE", - "displayName": "Namespace", - "description": "The OpenShift Namespace where the ImageStream resides.", - "value": "openshift" - }, - { - "name": "DATABASE_SERVICE_NAME", - "displayName": "Database Service Name", - "description": "The name of the OpenShift Service exposed for the database.", - "value": "mongodb", - "required": true - }, - { - "name": "MONGODB_USER", - "displayName": "MongoDB Connection Username", - "description": "Username for MongoDB user that will be used for accessing the database.", - "generate": "expression", - "from": "user[A-Z0-9]{3}", - "required": true - }, - { - "name": "MONGODB_PASSWORD", - "displayName": "MongoDB Connection Password", - "description": "Password for the MongoDB connection user.", - "generate": "expression", - "from": "[a-zA-Z0-9]{16}", - "required": true - }, - { - "name": "MONGODB_DATABASE", - "displayName": "MongoDB Database Name", - "description": "Name of the MongoDB database accessed.", - "value": "sampledb", - "required": true - }, - { - "name": "MONGODB_ADMIN_PASSWORD", - "displayName": "MongoDB Admin Password", - "description": "Password for the database admin user.", - "generate": "expression", - "from": "[a-zA-Z0-9]{16}", - "required": true - }, - { - "name": "VOLUME_CAPACITY", - "displayName": "Volume Capacity", - "description": "Volume space available for data, e.g. 
512Mi, 2Gi.", - "value": "1Gi", - "required": true - }, - { - "name": "STORAGE_CLASS_NAME", - "displayName": "Storage Class", - "description": "Storage Class for Persistent Volume", - "value": "openebs-standard", - "required": true - }, - { - "name": "MONGODB_VERSION", - "displayName": "Version of MongoDB Image", - "description": "Version of MongoDB image to be used (2.4, 2.6, 3.2 or latest).", - "value": "3.2", - "required": true - } - ] -} diff --git a/k8s/openshift/examples/v3.9/db-templates/openebs-redis-persistent-template.json b/k8s/openshift/examples/v3.9/db-templates/openebs-redis-persistent-template.json deleted file mode 100644 index e659bc07f2..0000000000 --- a/k8s/openshift/examples/v3.9/db-templates/openebs-redis-persistent-template.json +++ /dev/null @@ -1,250 +0,0 @@ -{ - "kind": "Template", - "apiVersion": "v1", - "metadata": { - "name": "openebs-redis-persistent", - "annotations": { - "openshift.io/display-name": "OpenEBS Redis (Persistent)", - "description": "Redis in-memory data structure store, with persistent storage from OpenEBS. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/redis-container/blob/master/3.2.\n\nNOTE: You must have persistent volumes available in your cluster to use this template.", - "iconClass": "icon-redis", - "tags": "database,redis", - "openshift.io/long-description": "This template provides a standalone Redis server. The data is stored on persistent storage provided by OpenEBS.", - "openshift.io/provider-display-name": "Red Hat, Inc.", - "openshift.io/documentation-url": "https://github.com/sclorg/redis-container/tree/master/3.2", - "openshift.io/support-url": "https://access.redhat.com" - } - }, - "message": "The following service(s) have been created in your project: ${DATABASE_SERVICE_NAME}.\n\n Password: ${REDIS_PASSWORD}\n Connection URL: redis://${DATABASE_SERVICE_NAME}:6379/\n\nFor more information about using this template, including OpenShift considerations, see https://github.com/sclorg/redis-container/blob/master/3.2.", - "labels": { - "template": "redis-persistent-template" - }, - "objects": [ - { - "kind": "Secret", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "template.openshift.io/expose-password": "{.data['database-password']}" - } - }, - "stringData" : { - "database-password" : "${REDIS_PASSWORD}" - } - }, - { - "kind": "Service", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "template.openshift.io/expose-uri": "redis://{.spec.clusterIP}:{.spec.ports[?(.name==\"redis\")].port}" - } - }, - "spec": { - "ports": [ - { - "name": "redis", - "protocol": "TCP", - "port": 6379, - "targetPort": 6379, - "nodePort": 0 - } - ], - "selector": { - "name": "${DATABASE_SERVICE_NAME}" - }, - "type": "ClusterIP", - "sessionAffinity": "None" - }, - "status": { - "loadBalancer": {} - } - }, - { - "kind": "PersistentVolumeClaim", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "volume.beta.kubernetes.io/storage-class": "${STORAGE_CLASS_NAME}" - } - }, - "spec": { - "accessModes": [ - "ReadWriteOnce" - ], - "resources": { - "requests": { - "storage": "${VOLUME_CAPACITY}" - } - } - } - }, - { - "kind": "DeploymentConfig", - "apiVersion": "v1", - "metadata": { - "name": "${DATABASE_SERVICE_NAME}", - "annotations": { - "template.alpha.openshift.io/wait-for-ready": "true" - } - }, - "spec": { - "strategy": { - "type": "Recreate" - }, - 
"triggers": [ - { - "type": "ImageChange", - "imageChangeParams": { - "automatic": true, - "containerNames": [ - "redis" - ], - "from": { - "kind": "ImageStreamTag", - "name": "redis:${REDIS_VERSION}", - "namespace": "${NAMESPACE}" - }, - "lastTriggeredImage": "" - } - }, - { - "type": "ConfigChange" - } - ], - "replicas": 1, - "selector": { - "name": "${DATABASE_SERVICE_NAME}" - }, - "template": { - "metadata": { - "labels": { - "name": "${DATABASE_SERVICE_NAME}" - } - }, - "spec": { - "containers": [ - { - "name": "redis", - "image": " ", - "ports": [ - { - "containerPort": 6379, - "protocol": "TCP" - } - ], - "readinessProbe": { - "timeoutSeconds": 1, - "initialDelaySeconds": 5, - "exec": { - "command": [ "/bin/sh", "-i", "-c", "test \"$(redis-cli -h 127.0.0.1 -a $REDIS_PASSWORD ping)\" == \"PONG\""] - } - }, - "livenessProbe": { - "timeoutSeconds": 1, - "initialDelaySeconds": 30, - "tcpSocket": { - "port": 6379 - } - }, - "env": [ - { - "name": "REDIS_PASSWORD", - "valueFrom": { - "secretKeyRef" : { - "name" : "${DATABASE_SERVICE_NAME}", - "key" : "database-password" - } - } - } - ], - "resources": { - "limits": { - "memory": "${MEMORY_LIMIT}" - } - }, - "volumeMounts": [ - { - "name": "${DATABASE_SERVICE_NAME}-data", - "mountPath": "/var/lib/redis/data" - } - ], - "terminationMessagePath": "/dev/termination-log", - "imagePullPolicy": "IfNotPresent", - "capabilities": {}, - "securityContext": { - "capabilities": {}, - "privileged": false - } - } - ], - "volumes": [ - { - "name": "${DATABASE_SERVICE_NAME}-data", - "persistentVolumeClaim": { - "claimName": "${DATABASE_SERVICE_NAME}" - } - } - ], - "restartPolicy": "Always", - "dnsPolicy": "ClusterFirst" - } - } - }, - "status": {} - } - ], - "parameters": [ - { - "name": "MEMORY_LIMIT", - "displayName": "Memory Limit", - "description": "Maximum amount of memory the container can use.", - "value": "512Mi", - "required": true - }, - { - "name": "NAMESPACE", - "displayName": "Namespace", - "description": "The OpenShift Namespace where the ImageStream resides.", - "value": "openshift" - }, - { - "name": "DATABASE_SERVICE_NAME", - "displayName": "Database Service Name", - "description": "The name of the OpenShift Service exposed for the database.", - "value": "redis", - "required": true - }, - { - "name": "REDIS_PASSWORD", - "displayName": "Redis Connection Password", - "description": "Password for the Redis connection user.", - "generate": "expression", - "from": "[a-zA-Z0-9]{16}", - "required": true - }, - { - "name": "VOLUME_CAPACITY", - "displayName": "Volume Capacity", - "description": "Volume space available for data, e.g. 
512Mi, 2Gi.", - "value": "1Gi", - "required": true - }, - { - "name": "STORAGE_CLASS_NAME", - "displayName": "Storage Class", - "description": "Storage Class for Persistent Volume", - "value": "openebs-standard", - "required": true - }, - { - "name": "REDIS_VERSION", - "displayName": "Version of Redis Image", - "description": "Version of Redis image to be used (3.2 or latest).", - "value": "3.2", - "required": true - } - ] -} diff --git a/k8s/openshift/minishift/README.md b/k8s/openshift/minishift/README.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/k8s/sample-loki-templates.md b/k8s/sample-loki-templates.md deleted file mode 100644 index 11fe858bb8..0000000000 --- a/k8s/sample-loki-templates.md +++ /dev/null @@ -1,1039 +0,0 @@ -## Sample Template Specifications for Grafana Loki - -``` -[debug] Created tunnel using local port: '41237' - -[debug] SERVER: "127.0.0.1:41237" - -[debug] Fetched loki/loki-stack to /root/.helm/cache/archive/loki-stack-0.16.0.tgz - -Release "loki" does not exist. Installing it now. -[debug] CHART PATH: /root/.helm/cache/archive/loki-stack-0.16.0.tgz - -NAME: loki -REVISION: 1 -RELEASED: Wed Aug 21 11:52:27 2019 -CHART: loki-stack-0.16.0 -USER-SUPPLIED VALUES: -{} - -COMPUTED VALUES: -grafana: - enabled: false - image: - tag: 6.3.0-beta2 - sidecar: - datasources: - enabled: true -loki: - affinity: {} - annotations: {} - config: - auth_enabled: false - chunk_store_config: - max_look_back_period: 0 - ingester: - chunk_block_size: 262144 - chunk_idle_period: 15m - lifecycler: - ring: - kvstore: - store: inmemory - replication_factor: 1 - limits_config: - enforce_metric_name: false - reject_old_samples: true - reject_old_samples_max_age: 168h - schema_config: - configs: - - from: "2018-04-15" - index: - period: 168h - prefix: index_ - object_store: filesystem - schema: v9 - store: boltdb - server: - http_listen_port: 3100 - storage_config: - boltdb: - directory: /data/loki/index - filesystem: - directory: /data/loki/chunks - table_manager: - retention_deletes_enabled: false - retention_period: 0 - enabled: true - extraArgs: {} - global: {} - image: - pullPolicy: IfNotPresent - repository: grafana/loki - tag: v0.3.0 - livenessProbe: - httpGet: - path: /ready - port: http-metrics - initialDelaySeconds: 45 - networkPolicy: - enabled: false - nodeSelector: {} - persistence: - accessModes: - - ReadWriteOnce - annotations: {} - enabled: false - size: 10Gi - storageClassName: default - podAnnotations: - prometheus.io/port: http-metrics - prometheus.io/scrape: "true" - podDisruptionBudget: {} - podLabels: {} - podManagementPolicy: OrderedReady - rbac: - create: true - pspEnabled: true - readinessProbe: - httpGet: - path: /ready - port: http-metrics - initialDelaySeconds: 45 - replicas: 1 - resources: {} - securityContext: - fsGroup: 10001 - runAsGroup: 10001 - runAsNonRoot: true - runAsUser: 10001 - service: - annotations: {} - labels: {} - nodePort: null - port: 3100 - type: ClusterIP - serviceAccount: - create: true - name: null - serviceMonitor: - enabled: false - interval: "" - terminationGracePeriodSeconds: 30 - tolerations: [] - tracing: - jaegerAgentHost: null - updateStrategy: - type: RollingUpdate -prometheus: - enabled: false - server: - fullnameOverride: prometheus-server -promtail: - affinity: {} - annotations: {} - config: - client: - backoff_config: - maxbackoff: 5s - maxretries: 5 - minbackoff: 100ms - batchsize: 102400 - batchwait: 1s - external_labels: {} - timeout: 10s - positions: - filename: /run/promtail/positions.yaml - server: - 
http_listen_port: 3101 - target_config: - sync_period: 10s - deploymentStrategy: RollingUpdate - enabled: true - global: {} - image: - pullPolicy: IfNotPresent - repository: grafana/promtail - tag: v0.3.0 - livenessProbe: {} - loki: - serviceName: "" - servicePort: 3100 - serviceScheme: http - nameOverride: promtail - nodeSelector: {} - pipelineStages: - - docker: {} - podAnnotations: - prometheus.io/port: http-metrics - prometheus.io/scrape: "true" - podLabels: {} - rbac: - create: true - pspEnabled: true - readinessProbe: - failureThreshold: 5 - httpGet: - path: /ready - port: http-metrics - initialDelaySeconds: 10 - periodSeconds: 10 - successThreshold: 1 - timeoutSeconds: 1 - resources: {} - scrapeConfigs: [] - securityContext: - readOnlyRootFilesystem: true - runAsGroup: 0 - runAsUser: 0 - serviceAccount: - create: true - name: null - serviceMonitor: - enabled: false - interval: "" - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/master - volumeMounts: - - mountPath: /var/lib/docker/containers - name: docker - readOnly: true - - mountPath: /var/log/pods - name: pods - readOnly: true - volumes: - - hostPath: - path: /var/lib/docker/containers - name: docker - - hostPath: - path: /var/log/pods - name: pods - -HOOKS: ---- -# loki-loki-stack-test -apiVersion: v1 -kind: Pod -metadata: - annotations: - "helm.sh/hook": test-success - labels: - app: loki-stack - chart: loki-stack-0.16.0 - release: loki - heritage: Tiller - name: loki-loki-stack-test -spec: - containers: - - name: test - image: bats/bats:v1.1.0 - args: - - /var/lib/loki/test.sh - env: - - name: LOKI_SERVICE - value: loki - - name: LOKI_PORT - value: "3100" - volumeMounts: - - name: tests - mountPath: /var/lib/loki - restartPolicy: Never - volumes: - - name: tests - configMap: - name: loki-loki-stack-test -MANIFEST: - ---- -# Source: loki-stack/charts/loki/templates/podsecuritypolicy.yaml -apiVersion: policy/v1beta1 -kind: PodSecurityPolicy -metadata: - name: loki - namespace: openebs - labels: - app: loki - chart: loki-0.14.0 - heritage: Tiller - release: loki -spec: - privileged: false - allowPrivilegeEscalation: false - volumes: - - 'configMap' - - 'emptyDir' - - 'persistentVolumeClaim' - - 'secret' - hostNetwork: false - hostIPC: false - hostPID: false - runAsUser: - rule: 'MustRunAsNonRoot' - seLinux: - rule: 'RunAsAny' - supplementalGroups: - rule: 'MustRunAs' - ranges: - - min: 1 - max: 65535 - fsGroup: - rule: 'MustRunAs' - ranges: - - min: 1 - max: 65535 - readOnlyRootFilesystem: true - requiredDropCapabilities: - - ALL ---- -# Source: loki-stack/charts/promtail/templates/podsecuritypolicy.yaml -apiVersion: policy/v1beta1 -kind: PodSecurityPolicy -metadata: - name: loki-promtail - namespace: openebs - labels: - app: promtail - chart: promtail-0.12.0 - heritage: Tiller - release: loki -spec: - privileged: false - allowPrivilegeEscalation: false - volumes: - - 'secret' - - 'configMap' - - 'hostPath' - hostNetwork: false - hostIPC: false - hostPID: false - runAsUser: - rule: 'RunAsAny' - seLinux: - rule: 'RunAsAny' - supplementalGroups: - rule: 'RunAsAny' - fsGroup: - rule: 'RunAsAny' - readOnlyRootFilesystem: true - requiredDropCapabilities: - - ALL ---- -# Source: loki-stack/charts/loki/templates/secret.yaml -apiVersion: v1 -kind: Secret -metadata: - name: loki - namespace: openebs - labels: - app: loki - chart: loki-0.14.0 - release: loki - heritage: Tiller -data: - loki.yaml: 
YXV0aF9lbmFibGVkOiBmYWxzZQpjaHVua19zdG9yZV9jb25maWc6CiAgbWF4X2xvb2tfYmFja19wZXJpb2Q6IDAKaW5nZXN0ZXI6CiAgY2h1bmtfYmxvY2tfc2l6ZTogMjYyMTQ0CiAgY2h1bmtfaWRsZV9wZXJpb2Q6IDE1bQogIGxpZmVjeWNsZXI6CiAgICByaW5nOgogICAgICBrdnN0b3JlOgogICAgICAgIHN0b3JlOiBpbm1lbW9yeQogICAgICByZXBsaWNhdGlvbl9mYWN0b3I6IDEKbGltaXRzX2NvbmZpZzoKICBlbmZvcmNlX21ldHJpY19uYW1lOiBmYWxzZQogIHJlamVjdF9vbGRfc2FtcGxlczogdHJ1ZQogIHJlamVjdF9vbGRfc2FtcGxlc19tYXhfYWdlOiAxNjhoCnNjaGVtYV9jb25maWc6CiAgY29uZmlnczoKICAtIGZyb206ICIyMDE4LTA0LTE1IgogICAgaW5kZXg6CiAgICAgIHBlcmlvZDogMTY4aAogICAgICBwcmVmaXg6IGluZGV4XwogICAgb2JqZWN0X3N0b3JlOiBmaWxlc3lzdGVtCiAgICBzY2hlbWE6IHY5CiAgICBzdG9yZTogYm9sdGRiCnNlcnZlcjoKICBodHRwX2xpc3Rlbl9wb3J0OiAzMTAwCnN0b3JhZ2VfY29uZmlnOgogIGJvbHRkYjoKICAgIGRpcmVjdG9yeTogL2RhdGEvbG9raS9pbmRleAogIGZpbGVzeXN0ZW06CiAgICBkaXJlY3Rvcnk6IC9kYXRhL2xva2kvY2h1bmtzCnRhYmxlX21hbmFnZXI6CiAgcmV0ZW50aW9uX2RlbGV0ZXNfZW5hYmxlZDogZmFsc2UKICByZXRlbnRpb25fcGVyaW9kOiAwCg== ---- -# Source: loki-stack/charts/promtail/templates/configmap.yaml -apiVersion: v1 -kind: ConfigMap -metadata: - name: loki-promtail - namespace: openebs - labels: - app: promtail - chart: promtail-0.12.0 - release: loki - heritage: Tiller -data: - promtail.yaml: | - client: - backoff_config: - maxbackoff: 5s - maxretries: 5 - minbackoff: 100ms - batchsize: 102400 - batchwait: 1s - external_labels: {} - timeout: 10s - positions: - filename: /run/promtail/positions.yaml - server: - http_listen_port: 3101 - target_config: - sync_period: 10s - - scrape_configs: - - job_name: kubernetes-pods-name - pipeline_stages: - - docker: {} - - kubernetes_sd_configs: - - role: pod - relabel_configs: - - source_labels: - - __meta_kubernetes_pod_label_name - target_label: __service__ - - source_labels: - - __meta_kubernetes_pod_node_name - target_label: __host__ - - action: drop - regex: ^$ - source_labels: - - __service__ - - action: labelmap - regex: __meta_kubernetes_pod_label_(.+) - - action: replace - replacement: $1 - separator: / - source_labels: - - __meta_kubernetes_namespace - - __service__ - target_label: job - - action: replace - source_labels: - - __meta_kubernetes_namespace - target_label: namespace - - action: replace - source_labels: - - __meta_kubernetes_pod_name - target_label: instance - - action: replace - source_labels: - - __meta_kubernetes_pod_container_name - target_label: container_name - - replacement: /var/log/pods/*$1/*.log - separator: / - source_labels: - - __meta_kubernetes_pod_uid - - __meta_kubernetes_pod_container_name - target_label: __path__ - - job_name: kubernetes-pods-app - pipeline_stages: - - docker: {} - - kubernetes_sd_configs: - - role: pod - relabel_configs: - - action: drop - regex: .+ - source_labels: - - __meta_kubernetes_pod_label_name - - source_labels: - - __meta_kubernetes_pod_label_app - target_label: __service__ - - source_labels: - - __meta_kubernetes_pod_node_name - target_label: __host__ - - action: drop - regex: ^$ - source_labels: - - __service__ - - action: labelmap - regex: __meta_kubernetes_pod_label_(.+) - - action: replace - replacement: $1 - separator: / - source_labels: - - __meta_kubernetes_namespace - - __service__ - target_label: job - - action: replace - source_labels: - - __meta_kubernetes_namespace - target_label: namespace - - action: replace - source_labels: - - __meta_kubernetes_pod_name - target_label: instance - - action: replace - source_labels: - - __meta_kubernetes_pod_container_name - target_label: container_name - - replacement: /var/log/pods/*$1/*.log - separator: / - source_labels: - - 
__meta_kubernetes_pod_uid - - __meta_kubernetes_pod_container_name - target_label: __path__ - - job_name: kubernetes-pods-direct-controllers - pipeline_stages: - - docker: {} - - kubernetes_sd_configs: - - role: pod - relabel_configs: - - action: drop - regex: .+ - separator: '' - source_labels: - - __meta_kubernetes_pod_label_name - - __meta_kubernetes_pod_label_app - - action: drop - regex: ^([0-9a-z-.]+)(-[0-9a-f]{8,10})$ - source_labels: - - __meta_kubernetes_pod_controller_name - - source_labels: - - __meta_kubernetes_pod_controller_name - target_label: __service__ - - source_labels: - - __meta_kubernetes_pod_node_name - target_label: __host__ - - action: drop - regex: ^$ - source_labels: - - __service__ - - action: labelmap - regex: __meta_kubernetes_pod_label_(.+) - - action: replace - replacement: $1 - separator: / - source_labels: - - __meta_kubernetes_namespace - - __service__ - target_label: job - - action: replace - source_labels: - - __meta_kubernetes_namespace - target_label: namespace - - action: replace - source_labels: - - __meta_kubernetes_pod_name - target_label: instance - - action: replace - source_labels: - - __meta_kubernetes_pod_container_name - target_label: container_name - - replacement: /var/log/pods/*$1/*.log - separator: / - source_labels: - - __meta_kubernetes_pod_uid - - __meta_kubernetes_pod_container_name - target_label: __path__ - - job_name: kubernetes-pods-indirect-controller - pipeline_stages: - - docker: {} - - kubernetes_sd_configs: - - role: pod - relabel_configs: - - action: drop - regex: .+ - separator: '' - source_labels: - - __meta_kubernetes_pod_label_name - - __meta_kubernetes_pod_label_app - - action: keep - regex: ^([0-9a-z-.]+)(-[0-9a-f]{8,10})$ - source_labels: - - __meta_kubernetes_pod_controller_name - - action: replace - regex: ^([0-9a-z-.]+)(-[0-9a-f]{8,10})$ - source_labels: - - __meta_kubernetes_pod_controller_name - target_label: __service__ - - source_labels: - - __meta_kubernetes_pod_node_name - target_label: __host__ - - action: drop - regex: ^$ - source_labels: - - __service__ - - action: labelmap - regex: __meta_kubernetes_pod_label_(.+) - - action: replace - replacement: $1 - separator: / - source_labels: - - __meta_kubernetes_namespace - - __service__ - target_label: job - - action: replace - source_labels: - - __meta_kubernetes_namespace - target_label: namespace - - action: replace - source_labels: - - __meta_kubernetes_pod_name - target_label: instance - - action: replace - source_labels: - - __meta_kubernetes_pod_container_name - target_label: container_name - - replacement: /var/log/pods/*$1/*.log - separator: / - source_labels: - - __meta_kubernetes_pod_uid - - __meta_kubernetes_pod_container_name - target_label: __path__ - - job_name: kubernetes-pods-static - pipeline_stages: - - docker: {} - - kubernetes_sd_configs: - - role: pod - relabel_configs: - - action: drop - regex: ^$ - source_labels: - - __meta_kubernetes_pod_annotation_kubernetes_io_config_mirror - - action: replace - source_labels: - - __meta_kubernetes_pod_label_component - target_label: __service__ - - source_labels: - - __meta_kubernetes_pod_node_name - target_label: __host__ - - action: drop - regex: ^$ - source_labels: - - __service__ - - action: labelmap - regex: __meta_kubernetes_pod_label_(.+) - - action: replace - replacement: $1 - separator: / - source_labels: - - __meta_kubernetes_namespace - - __service__ - target_label: job - - action: replace - source_labels: - - __meta_kubernetes_namespace - target_label: namespace - - action: replace - 
source_labels: - - __meta_kubernetes_pod_name - target_label: instance - - action: replace - source_labels: - - __meta_kubernetes_pod_container_name - target_label: container_name - - replacement: /var/log/pods/*$1/*.log - separator: / - source_labels: - - __meta_kubernetes_pod_annotation_kubernetes_io_config_mirror - - __meta_kubernetes_pod_container_name - target_label: __path__ ---- -# Source: loki-stack/templates/tests/loki-test-configmap.yaml -apiVersion: v1 -kind: ConfigMap -metadata: - name: loki-loki-stack-test - labels: - app: loki-stack - chart: loki-stack-0.16.0 - release: loki - heritage: Tiller -data: - test.sh: | - #!/usr/bin/env bash - - LOKI_URI="http://${LOKI_SERVICE}:${LOKI_PORT}" - - function setup() { - apk add -u curl jq - until (curl -s ${LOKI_URI}/api/prom/label/app/values | jq -e '.values[] | select(. == "loki")'); do - sleep 1 - done - } - - @test "Has labels" { - curl -s ${LOKI_URI}/api/prom/label | \ - jq -e '.values[] | select(. == "app")' - } - - @test "Query log entry" { - curl -sG ${LOKI_URI}/api/prom/query?limit=10 --data-urlencode 'query={app="loki"}' | \ - jq -e '.streams[].entries | length >= 1' - } - - @test "Push log entry" { - local timestamp=$(date -Iseconds -u | sed 's/UTC/.000000000+00:00/') - local data=$(jq -n --arg timestamp "${timestamp}" '{"streams": [{"labels": "{app=\"loki-test\"}", "entries": [{"ts": $timestamp, "line": "foobar"}]}]}') - - curl -s -X POST -H "Content-Type: application/json" ${LOKI_URI}/api/prom/push -d "${data}" - - curl -sG ${LOKI_URI}/api/prom/query?limit=1 --data-urlencode 'query={app="loki-test"}' | \ - jq -e '.streams[].entries[].line == "foobar"' - } ---- -# Source: loki-stack/charts/loki/templates/serviceaccount.yaml -apiVersion: v1 -kind: ServiceAccount -metadata: - labels: - app: loki - chart: loki-0.14.0 - heritage: Tiller - release: loki - name: loki - namespace: openebs ---- -# Source: loki-stack/charts/promtail/templates/serviceaccount.yaml -apiVersion: v1 -kind: ServiceAccount -metadata: - labels: - app: promtail - chart: promtail-0.12.0 - heritage: Tiller - release: loki - name: loki-promtail - namespace: openebs ---- -# Source: loki-stack/charts/promtail/templates/clusterrole.yaml -kind: ClusterRole -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - labels: - app: promtail - chart: promtail-0.12.0 - release: loki - heritage: Tiller - name: loki-promtail-clusterrole - namespace: openebs -rules: -- apiGroups: [""] # "" indicates the core API group - resources: - - nodes - - nodes/proxy - - services - - endpoints - - pods - verbs: ["get", "watch", "list"] ---- -# Source: loki-stack/charts/promtail/templates/clusterrolebinding.yaml -kind: ClusterRoleBinding -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: loki-promtail-clusterrolebinding - labels: - app: promtail - chart: promtail-0.12.0 - release: loki - heritage: Tiller -subjects: - - kind: ServiceAccount - name: loki-promtail - namespace: openebs -roleRef: - kind: ClusterRole - name: loki-promtail-clusterrole - apiGroup: rbac.authorization.k8s.io ---- -# Source: loki-stack/charts/loki/templates/role.yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: Role -metadata: - name: loki - namespace: openebs - labels: - app: loki - chart: loki-0.14.0 - heritage: Tiller - release: loki -rules: -- apiGroups: ['extensions'] - resources: ['podsecuritypolicies'] - verbs: ['use'] - resourceNames: [loki] ---- -# Source: loki-stack/charts/promtail/templates/role.yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: Role -metadata: - name: loki-promtail - 
namespace: openebs - labels: - app: promtail - chart: promtail-0.12.0 - heritage: Tiller - release: loki -rules: -- apiGroups: ['extensions'] - resources: ['podsecuritypolicies'] - verbs: ['use'] - resourceNames: [loki-promtail] ---- -# Source: loki-stack/charts/loki/templates/rolebinding.yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: RoleBinding -metadata: - name: loki - namespace: openebs - labels: - app: loki - chart: loki-0.14.0 - heritage: Tiller - release: loki -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: loki -subjects: -- kind: ServiceAccount - name: loki ---- -# Source: loki-stack/charts/promtail/templates/rolebinding.yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: RoleBinding -metadata: - name: loki-promtail - namespace: openebs - labels: - app: promtail - chart: promtail-0.12.0 - heritage: Tiller - release: loki -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: loki-promtail -subjects: -- kind: ServiceAccount - name: loki-promtail ---- -# Source: loki-stack/charts/loki/templates/service-headless.yaml -apiVersion: v1 -kind: Service -metadata: - name: loki-headless - namespace: openebs - labels: - app: loki - chart: loki-0.14.0 - release: loki - heritage: Tiller -spec: - clusterIP: None - ports: - - port: 3100 - protocol: TCP - name: http-metrics - targetPort: http-metrics - selector: - app: loki - release: loki ---- -# Source: loki-stack/charts/loki/templates/service.yaml -apiVersion: v1 -kind: Service -metadata: - name: loki - namespace: openebs - labels: - app: loki - chart: loki-0.14.0 - release: loki - heritage: Tiller - annotations: - {} - -spec: - type: ClusterIP - ports: - - port: 3100 - protocol: TCP - name: http-metrics - targetPort: http-metrics - selector: - app: loki - release: loki ---- -# Source: loki-stack/charts/promtail/templates/daemonset.yaml -apiVersion: apps/v1 -kind: DaemonSet -metadata: - name: loki-promtail - namespace: openebs - labels: - app: promtail - chart: promtail-0.12.0 - release: loki - heritage: Tiller - annotations: - {} - -spec: - selector: - matchLabels: - app: promtail - release: loki - updateStrategy: - type: RollingUpdate - template: - metadata: - labels: - app: promtail - release: loki - annotations: - checksum/config: d63942530bd9af428b5110aab9356055b1f53b133ecf32972a5d3ccb561b2df9 - prometheus.io/port: http-metrics - prometheus.io/scrape: "true" - - spec: - serviceAccountName: loki-promtail - containers: - - name: promtail - image: "grafana/promtail:v0.3.0" - imagePullPolicy: IfNotPresent - args: - - "-config.file=/etc/promtail/promtail.yaml" - - "-client.url=http://loki:3100/api/prom/push" - volumeMounts: - - name: config - mountPath: /etc/promtail - - name: run - mountPath: /run/promtail - - mountPath: /var/lib/docker/containers - name: docker - readOnly: true - - mountPath: /var/log/pods - name: pods - readOnly: true - - env: - - name: HOSTNAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - ports: - - containerPort: 3101 - name: http-metrics - securityContext: - readOnlyRootFilesystem: true - runAsGroup: 0 - runAsUser: 0 - - readinessProbe: - failureThreshold: 5 - httpGet: - path: /ready - port: http-metrics - initialDelaySeconds: 10 - periodSeconds: 10 - successThreshold: 1 - timeoutSeconds: 1 - - resources: - {} - - nodeSelector: - {} - - affinity: - {} - - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/master - - volumes: - - name: config - configMap: - name: loki-promtail - - name: run - hostPath: - path: /run/promtail - - hostPath: - path: 
/var/lib/docker/containers - name: docker - - hostPath: - path: /var/log/pods - name: pods ---- -# Source: loki-stack/charts/loki/templates/statefulset.yaml -apiVersion: apps/v1 -kind: StatefulSet -metadata: - name: loki - namespace: openebs - labels: - app: loki - chart: loki-0.14.0 - release: loki - heritage: Tiller - annotations: - {} - -spec: - podManagementPolicy: OrderedReady - replicas: 1 - selector: - matchLabels: - app: loki - release: loki - serviceName: loki-headless - updateStrategy: - type: RollingUpdate - - template: - metadata: - labels: - app: loki - name: loki - release: loki - annotations: - checksum/config: 79e481cd6dd118d637642a93d50ce8ad63f19edcb04a47e06485cfc544ff7105 - prometheus.io/port: http-metrics - prometheus.io/scrape: "true" - - spec: - serviceAccountName: loki - securityContext: - fsGroup: 10001 - runAsGroup: 10001 - runAsNonRoot: true - runAsUser: 10001 - - containers: - - name: loki - image: "grafana/loki:v0.3.0" - imagePullPolicy: IfNotPresent - args: - - "-config.file=/etc/loki/loki.yaml" - volumeMounts: - - name: config - mountPath: /etc/loki - - name: storage - mountPath: "/data" - subPath: - ports: - - name: http-metrics - containerPort: 3100 - protocol: TCP - livenessProbe: - httpGet: - path: /ready - port: http-metrics - initialDelaySeconds: 45 - - readinessProbe: - httpGet: - path: /ready - port: http-metrics - initialDelaySeconds: 45 - - resources: - {} - - securityContext: - readOnlyRootFilesystem: true - env: - nodeSelector: - {} - - affinity: - {} - - tolerations: - [] - - terminationGracePeriodSeconds: 30 - volumes: - - name: config - secret: - secretName: loki - - name: storage - emptyDir: {} -``` diff --git a/k8s/sample-pv-yamls/blockdeviceclaim.yaml b/k8s/sample-pv-yamls/blockdeviceclaim.yaml deleted file mode 100644 index 7cd50b61d1..0000000000 --- a/k8s/sample-pv-yamls/blockdeviceclaim.yaml +++ /dev/null @@ -1,18 +0,0 @@ -apiVersion: openebs.io/v1alpha1 -kind: BlockDeviceClaim -metadata: - name: sparse-blockdeviceclaim - namespace: openebs -spec: - ## driveType is a type of the disk attached to the node - ## example values: sparse, HDD, SSD - driveType: "sparse" - ## blockDeviceName should be specified with block device name - ## if driveType is sparse - blockDeviceName: "sparse-1234" - ## hostName is the name of the node where block device is available - ## value should be provided if driveType is other than sparse - hostName: "" - requirements: - requests: - capacity: 10Gi diff --git a/k8s/sample-pv-yamls/cspc/cspc-sparse-single.yaml b/k8s/sample-pv-yamls/cspc/cspc-sparse-single.yaml deleted file mode 100644 index 636ca45349..0000000000 --- a/k8s/sample-pv-yamls/cspc/cspc-sparse-single.yaml +++ /dev/null @@ -1,12 +0,0 @@ -apiVersion: openebs.io/v1alpha1 -kind: CStorPoolCluster -metadata: - name: sparse-cluster-auto -spec: - name: sparse-cluster-auto - type: sparse - maxPools: 1 - poolSpec: - poolType: striped - cacheFile: /var/openebs/sparse/sparse-claim-auto.cache - overProvisioning: false diff --git a/k8s/sample-pv-yamls/cspc/pvc-sparse-claim-cstor.yaml b/k8s/sample-pv-yamls/cspc/pvc-sparse-claim-cstor.yaml deleted file mode 100644 index 80e3009308..0000000000 --- a/k8s/sample-pv-yamls/cspc/pvc-sparse-claim-cstor.yaml +++ /dev/null @@ -1,34 +0,0 @@ ---- -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: openebs-cstor-sparse-auto - annotations: - openebs.io/cas-type: cstor - cas.openebs.io/config: | - - name: CStorPoolCluster - value: "sparse-cluster-auto" - - name: ReplicaCount - value: "1" - #- name: 
TargetResourceLimits - # value: |- - # memory: 1Gi - # cpu: 200m - #- name: AuxResourceLimits - # value: |- - # memory: 0.5Gi - # cpu: 50m -provisioner: openebs.io/provisioner-iscsi ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: cstor-vol1-1r-claim -spec: - storageClassName: openebs-cstor-sparse-auto - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 4G - diff --git a/k8s/sample-pv-yamls/pvc-cstor-sc-sparse-limit-resources.yaml b/k8s/sample-pv-yamls/pvc-cstor-sc-sparse-limit-resources.yaml deleted file mode 100644 index 72c66dc661..0000000000 --- a/k8s/sample-pv-yamls/pvc-cstor-sc-sparse-limit-resources.yaml +++ /dev/null @@ -1,32 +0,0 @@ ---- -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: openebs-cstor-sparse-limits - annotations: - openebs.io/cas-type: cstor - cas.openebs.io/config: | - - name: StoragePoolClaim - value: "cstor-sparse-pool" - - name: TargetResourceLimits - value: |- - memory: 1Gi - cpu: 200m - - name: AuxResourceLimits - value: |- - memory: 0.5Gi - cpu: 50m -provisioner: openebs.io/provisioner-iscsi ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: demo-cstor-sc-sparse-limits-claim -spec: - storageClassName: openebs-cstor-sparse-limits - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 4G - diff --git a/k8s/sample-pv-yamls/pvc-cstor-sc-sparse-ns-default-1r.yaml b/k8s/sample-pv-yamls/pvc-cstor-sc-sparse-ns-default-1r.yaml deleted file mode 100644 index b38115ab82..0000000000 --- a/k8s/sample-pv-yamls/pvc-cstor-sc-sparse-ns-default-1r.yaml +++ /dev/null @@ -1,15 +0,0 @@ -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: cstor-sc-sparse-ns-default - annotations: - cas.openebs.io/config: | - - name: ReplicaCount - value: "1" -spec: - storageClassName: openebs-cstor-sparse - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 4G diff --git a/k8s/sample-pv-yamls/pvc-cstor-sc-sparse-ns-default.yaml b/k8s/sample-pv-yamls/pvc-cstor-sc-sparse-ns-default.yaml deleted file mode 100644 index 60191d7cff..0000000000 --- a/k8s/sample-pv-yamls/pvc-cstor-sc-sparse-ns-default.yaml +++ /dev/null @@ -1,11 +0,0 @@ -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: cstor-sc-sparse-ns-default -spec: - storageClassName: openebs-cstor-sparse - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 4G diff --git a/k8s/sample-pv-yamls/pvc-jiva-sc-1r-raa.yaml b/k8s/sample-pv-yamls/pvc-jiva-sc-1r-raa.yaml deleted file mode 100644 index 280ea028fb..0000000000 --- a/k8s/sample-pv-yamls/pvc-jiva-sc-1r-raa.yaml +++ /dev/null @@ -1,39 +0,0 @@ ---- -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: jiva-1r-raa - annotations: - openebs.io/cas-type: jiva - cas.openebs.io/config: | - - name: ReplicaCount - value: "1" -provisioner: openebs.io/provisioner-iscsi ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: jiva-vol1-1r-raa-claim - labels: - openebs.io/replica-anti-affinity: application-name -spec: - storageClassName: jiva-1r-raa - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 4G ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: jiva-vol2-1r-raa-claim - labels: - openebs.io/replica-anti-affinity: application-name -spec: - storageClassName: jiva-1r-raa - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 4G diff --git a/k8s/sample-pv-yamls/pvc-jiva-sc-1r.yaml b/k8s/sample-pv-yamls/pvc-jiva-sc-1r.yaml deleted file mode 100644 index 64fcedbe97..0000000000 --- 
a/k8s/sample-pv-yamls/pvc-jiva-sc-1r.yaml +++ /dev/null @@ -1,24 +0,0 @@ ---- -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: jiva-1r - annotations: - openebs.io/cas-type: jiva - cas.openebs.io/config: | - - name: ReplicaCount - value: "1" -provisioner: openebs.io/provisioner-iscsi ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: jiva-vol1-1r-claim -spec: - storageClassName: jiva-1r - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 4G - diff --git a/k8s/sample-pv-yamls/pvc-jiva-sc-beta-1r.yaml b/k8s/sample-pv-yamls/pvc-jiva-sc-beta-1r.yaml deleted file mode 100644 index 562cba0a29..0000000000 --- a/k8s/sample-pv-yamls/pvc-jiva-sc-beta-1r.yaml +++ /dev/null @@ -1,25 +0,0 @@ ---- -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: jiva-beta-sc - annotations: - openebs.io/cas-type: jiva - cas.openebs.io/config: | - - name: ReplicaCount - value: "1" -provisioner: openebs.io/provisioner-iscsi ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: jiva-beta-sc-claim - annotations: - volume.beta.kubernetes.io/storage-class: jiva-beta-sc -spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 5G - diff --git a/k8s/sample-pv-yamls/pvc-jiva-sc-disable-scrub.yaml b/k8s/sample-pv-yamls/pvc-jiva-sc-disable-scrub.yaml deleted file mode 100644 index 68c9dc52ed..0000000000 --- a/k8s/sample-pv-yamls/pvc-jiva-sc-disable-scrub.yaml +++ /dev/null @@ -1,14 +0,0 @@ ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: jiva-disable-scrub-claim -spec: - storageClassName: jiva-sjr-disabled - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 4G ---- - diff --git a/k8s/sample-pv-yamls/pvc-jiva-sc-nodeselector.yaml b/k8s/sample-pv-yamls/pvc-jiva-sc-nodeselector.yaml deleted file mode 100644 index baa2af3cbe..0000000000 --- a/k8s/sample-pv-yamls/pvc-jiva-sc-nodeselector.yaml +++ /dev/null @@ -1,30 +0,0 @@ ---- -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: openebs-jiva-nodeselector - annotations: - openebs.io/cas-type: jiva - cas.openebs.io/config: | - - name: ReplicaCount - value: "1" - - name: ReplicaNodeSelector - value: |- - nodetype: storage - - name: TargetNodeSelector - value: |- - nodetype: app -provisioner: openebs.io/provisioner-iscsi ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: demo-jiva-replica-pinned -spec: - storageClassName: openebs-jiva-nodeselector - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 4G - diff --git a/k8s/sample-pv-yamls/pvc-jiva-sc-openebs-jiva-default.yaml b/k8s/sample-pv-yamls/pvc-jiva-sc-openebs-jiva-default.yaml deleted file mode 100644 index f781909921..0000000000 --- a/k8s/sample-pv-yamls/pvc-jiva-sc-openebs-jiva-default.yaml +++ /dev/null @@ -1,14 +0,0 @@ ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: jiva-default-vol -spec: - storageClassName: openebs-jiva-default - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 4G ---- - diff --git a/k8s/sample-pv-yamls/pvc-jiva-sc-raa-az.yaml b/k8s/sample-pv-yamls/pvc-jiva-sc-raa-az.yaml deleted file mode 100644 index f759b36cae..0000000000 --- a/k8s/sample-pv-yamls/pvc-jiva-sc-raa-az.yaml +++ /dev/null @@ -1,37 +0,0 @@ ---- -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: jiva-raa-az - annotations: - openebs.io/cas-type: jiva - cas.openebs.io/config: | - - name: ReplicaCount - value: "3" - - name: ReplicaAntiAffinityTopoKey - value: failure-domain.beta.kubernetes.io/zone 
-provisioner: openebs.io/provisioner-iscsi ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: jiva-vol1-raa-az-claim -spec: - storageClassName: jiva-raa-az - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 4G ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: jiva-vol2-raa-az-claim -spec: - storageClassName: jiva-raa-az - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 4G diff --git a/k8s/sample-pv-yamls/pvc-jiva-sc-raa.yaml b/k8s/sample-pv-yamls/pvc-jiva-sc-raa.yaml deleted file mode 100644 index 99514ce673..0000000000 --- a/k8s/sample-pv-yamls/pvc-jiva-sc-raa.yaml +++ /dev/null @@ -1,29 +0,0 @@ ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: jiva-vol1-raa-claim - labels: - openebs.io/replica-anti-affinity: application-name -spec: - storageClassName: openebs-jiva-default - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 4G ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: jiva-vol2-raa-claim - labels: - openebs.io/replica-anti-affinity: application-name -spec: - storageClassName: openebs-jiva-default - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 4G ---- diff --git a/k8s/sample-pv-yamls/pvc-jiva-sc-standard-limit-resources.yaml b/k8s/sample-pv-yamls/pvc-jiva-sc-standard-limit-resources.yaml deleted file mode 100644 index b1fcb3c8bc..0000000000 --- a/k8s/sample-pv-yamls/pvc-jiva-sc-standard-limit-resources.yaml +++ /dev/null @@ -1,36 +0,0 @@ ---- -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: openebs-jiva-1r-limits - annotations: - openebs.io/cas-type: jiva - cas.openebs.io/config: | - - name: ReplicaCount - value: "1" - - name: TargetResourceLimits - value: |- - memory: 1Gi - cpu: 100m - - name: AuxResourceLimits - value: |- - memory: 0.5Gi - cpu: 50m - - name: ReplicaResourceLimits - value: |- - memory: 2Gi - cpu: 200m -provisioner: openebs.io/provisioner-iscsi ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: demo-jiva-1r-limits-claim -spec: - storageClassName: openebs-jiva-1r-limits - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 4G - diff --git a/k8s/sample-pv-yamls/pvc-jiva-sc-standard-ns-default.yaml b/k8s/sample-pv-yamls/pvc-jiva-sc-standard-ns-default.yaml deleted file mode 100644 index 09535cbdd2..0000000000 --- a/k8s/sample-pv-yamls/pvc-jiva-sc-standard-ns-default.yaml +++ /dev/null @@ -1,12 +0,0 @@ -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: jiva-sc-standard-ns-default -spec: - storageClassName: openebs-jiva-default - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 4G - diff --git a/k8s/sample-pv-yamls/pvc-jiva-sc-standard-ns-test.yaml b/k8s/sample-pv-yamls/pvc-jiva-sc-standard-ns-test.yaml deleted file mode 100644 index b0b0b21a63..0000000000 --- a/k8s/sample-pv-yamls/pvc-jiva-sc-standard-ns-test.yaml +++ /dev/null @@ -1,19 +0,0 @@ ---- -apiVersion: v1 -kind: Namespace -metadata: - name: test ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - namespace: test - name: jiva-sc-standard-ns-test -spec: - storageClassName: openebs-jiva-default - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 4G ---- diff --git a/k8s/sample-pv-yamls/pvc-sparse-claim-cstor.yaml b/k8s/sample-pv-yamls/pvc-sparse-claim-cstor.yaml deleted file mode 100644 index baf6ab2ee4..0000000000 --- a/k8s/sample-pv-yamls/pvc-sparse-claim-cstor.yaml +++ /dev/null @@ -1,34 +0,0 @@ ---- -apiVersion: storage.k8s.io/v1 -kind: StorageClass 
-metadata: - name: openebs-cstor-sparse-auto - annotations: - openebs.io/cas-type: cstor - cas.openebs.io/config: | - - name: StoragePoolClaim - value: "sparse-claim-auto" - - name: ReplicaCount - value: "1" - #- name: TargetResourceLimits - # value: |- - # memory: 1Gi - # cpu: 200m - #- name: AuxResourceLimits - # value: |- - # memory: 0.5Gi - # cpu: 50m -provisioner: openebs.io/provisioner-iscsi ---- -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: cstor-vol1-1r-claim -spec: - storageClassName: openebs-cstor-sparse-auto - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 4G - diff --git a/k8s/sample-pv-yamls/pvc-standard-cstor-disk.yaml b/k8s/sample-pv-yamls/pvc-standard-cstor-disk.yaml deleted file mode 100644 index f4a86dfb7b..0000000000 --- a/k8s/sample-pv-yamls/pvc-standard-cstor-disk.yaml +++ /dev/null @@ -1,12 +0,0 @@ -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: demo-cstor-disk-vol1-claim -spec: - storageClassName: openebs-cstor-disk - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 4G ---- diff --git a/k8s/sample-pv-yamls/sc-jiva-1r.yaml b/k8s/sample-pv-yamls/sc-jiva-1r.yaml deleted file mode 100644 index 6134de00b2..0000000000 --- a/k8s/sample-pv-yamls/sc-jiva-1r.yaml +++ /dev/null @@ -1,12 +0,0 @@ ---- -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: jiva-1r - annotations: - openebs.io/cas-type: jiva - cas.openebs.io/config: | - - name: ReplicaCount - value: "1" -provisioner: openebs.io/provisioner-iscsi ---- diff --git a/k8s/sample-pv-yamls/sc-jiva-disable-scrub.yaml b/k8s/sample-pv-yamls/sc-jiva-disable-scrub.yaml deleted file mode 100644 index 7e680f2e4f..0000000000 --- a/k8s/sample-pv-yamls/sc-jiva-disable-scrub.yaml +++ /dev/null @@ -1,13 +0,0 @@ ---- -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: jiva-sjr-disabled - annotations: - openebs.io/cas-type: jiva - cas.openebs.io/config: | - - name: RetainReplicaData - enabled: true -provisioner: openebs.io/provisioner-iscsi ---- - diff --git a/k8s/sample-pv-yamls/sc-localpv-custom-hostpath.yaml b/k8s/sample-pv-yamls/sc-localpv-custom-hostpath.yaml deleted file mode 100644 index 5d5f2aec86..0000000000 --- a/k8s/sample-pv-yamls/sc-localpv-custom-hostpath.yaml +++ /dev/null @@ -1,23 +0,0 @@ ---- -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: openebs-custom-hostpath - annotations: - #Define a new CAS Type called `local` - #which indicates that Data is stored - #directly onto hostpath. 
The hostpath can be: - #- device (as block or mounted path) - #- hostpath (sub directory on OS or mounted path) - openebs.io/cas-type: local - cas.openebs.io/config: | - #- name: StorageType - # value: "storage-device" - - name: StorageType - value: "hostpath" - - name: BasePath - value: "/var/openebs-hp" -provisioner: openebs.io/local -volumeBindingMode: WaitForFirstConsumer -reclaimPolicy: Delete ---- diff --git a/k8s/sample-pv-yamls/spc-cstor-disk-type.yaml b/k8s/sample-pv-yamls/spc-cstor-disk-type.yaml deleted file mode 100644 index 45c2434a38..0000000000 --- a/k8s/sample-pv-yamls/spc-cstor-disk-type.yaml +++ /dev/null @@ -1,23 +0,0 @@ ---- -apiVersion: openebs.io/v1alpha1 -kind: StoragePoolClaim -metadata: - name: cstor-disk -spec: - name: cstor-disk - type: disk - maxPools: 3 - poolSpec: - poolType: striped ---- -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: openebs-cstor-disk - annotations: - openebs.io/cas-type: cstor - cas.openebs.io/config: | - - name: StoragePoolClaim - value: "cstor-disk" -provisioner: openebs.io/provisioner-iscsi ---- diff --git a/k8s/sample-pv-yamls/spc-cstor-sparse.yaml b/k8s/sample-pv-yamls/spc-cstor-sparse.yaml deleted file mode 100644 index 6f0fe6d28a..0000000000 --- a/k8s/sample-pv-yamls/spc-cstor-sparse.yaml +++ /dev/null @@ -1,36 +0,0 @@ ---- -apiVersion: openebs.io/v1alpha1 -kind: StoragePoolClaim -metadata: - name: sparse-claim-auto - annotations: - cas.openebs.io/config: | - - name: PoolResourceRequests - value: |- - memory: 1Gi - cpu: 100m - - name: PoolResourceLimits - value: |- - memory: 2Gi - - name: AuxResourceLimits - value: |- - memory: 0.5Gi - cpu: 50m -spec: - name: sparse-claim-auto - type: sparse - maxPools: 3 - poolSpec: - poolType: striped ---- -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: openebs-cstor-sparse - annotations: - openebs.io/cas-type: cstor - cas.openebs.io/config: | - - name: StoragePoolClaim - value: "sparse-claim-auto" -provisioner: openebs.io/provisioner-iscsi ---- diff --git a/k8s/sample-pv-yamls/spc-sparse-single.yaml b/k8s/sample-pv-yamls/spc-sparse-single.yaml deleted file mode 100644 index 29c2654b9d..0000000000 --- a/k8s/sample-pv-yamls/spc-sparse-single.yaml +++ /dev/null @@ -1,15 +0,0 @@ ---- -apiVersion: openebs.io/v1alpha1 -kind: StoragePoolClaim -metadata: - name: sparse-claim-auto -spec: - name: sparse-claim-auto - type: sparse - maxPools: 1 - minPools: 1 - poolSpec: - poolType: striped - cacheFile: /var/openebs/sparse/sparse-claim-auto.cache - thickProvisioning: false - roThresholdLimit: 80 diff --git a/k8s/upgrades/0.4.0-0.5.0/README.md b/k8s/upgrades/0.4.0-0.5.0/README.md deleted file mode 100644 index 310b5e1ed2..0000000000 --- a/k8s/upgrades/0.4.0-0.5.0/README.md +++ /dev/null @@ -1,189 +0,0 @@ -# UPGRADE FROM OPENEBS 0.4.0 TO 0.5.0 - -- *OpenEBS Operator : Refers to maya-apiserver & openebs-provisioner along w/ respective services, service a/c, roles, rolebindings* -- *OpenEBS Volume: The Jiva controller & replica pods* -- *All steps described in this document need to be performed on the Kubernetes master* -- *The same steps can be used to upgrade OpenEBS from 0.4.0 to 0.5.1* - -### STEP-1 : CORDON ALL NODES WHICH DO NOT HOST OPENEBS VOLUME REPLICAS - -Perform ```kubectl cordon ``` on all nodes that don't have the openebs volume replicas - -**Notes** : This is to ensure that the replicas are not rescheduled elsewhere(other nodes) upon upgrade and "stick" to the same -nodes. 
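To make this step concrete, here is a hedged sketch of how the cordon could be applied; the node names are hypothetical, and the `-rep-` pod-name match reflects the Jiva replica deployments shown later in this guide:

```
# Find the nodes that currently host Jiva replica pods
# (replica pods carry "-rep-" in their names).
kubectl get pods -o wide | grep -- '-rep-'

# Cordon every node that does NOT appear above, so replicas cannot be
# rescheduled away from their data during the upgrade (names illustrative).
kubectl cordon node-4
kubectl cordon node-5
```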
Cordoning is done to maintain data gravity: the data is kept on the host disks, and we prefer to avoid multiple full copies/syncs of the data onto newer nodes. Subsequent releases will have logic to ensure that the replicas come up on the same nodes without this manual step.

### STEP-2 : OBTAIN YAML SPECIFICATIONS FROM OPENEBS 0.5.0 RELEASE

Obtain the specifications from https://github.com/openebs/openebs/releases/tag/v0.5.0

### STEP-3: UPGRADE TO THE 0.5.0 OPENEBS OPERATOR

```
test@Master:~$ kubectl apply -f k8s/openebs-operator.yaml
serviceaccount "openebs-maya-operator" configured
clusterrole "openebs-maya-operator" configured
clusterrolebinding "openebs-maya-operator" configured
deployment "maya-apiserver" configured
service "maya-apiserver-service" configured
deployment "openebs-provisioner" configured
customresourcedefinition "storagepoolclaims.openebs.io" created
customresourcedefinition "storagepools.openebs.io" created
storageclass "openebs-standard" created
```

**Notes** : This step upgrades the operator deployments to the 0.5.0 images, and also:

- Sets up the pre-requisites for volume monitoring
- Creates a new OpenEBS storage class called openebs-standard with: vol-size=5G, storage-replica-count=2, storagepool=default, monitoring=True

The above storage class can be used as a template to create new ones with the desired properties.

### STEP-4: CREATE THE OPENEBS MONITORING DEPLOYMENTS (Prometheus & Grafana)

While this is an optional step, it is recommended to use the monitoring framework to track storage metrics on the OpenEBS volumes.

```
test@Master:~$ kubectl apply -f k8s/openebs-monitoring-pg.yaml
configmap "openebs-prometheus-tunables" created
configmap "openebs-prometheus-config" created
deployment "openebs-prometheus" created
service "openebs-prometheus-service" created
service "openebs-grafana" created
deployment "openebs-grafana" created
```

Verify that the monitoring pods are created and that the operator pods are in the Running state. Together these constitute the OpenEBS control plane in 0.5.0.

```
test@Master:~$ kubectl get pods
NAME                                                             READY   STATUS    RESTARTS   AGE
maya-apiserver-2288016177-lzctj                                  1/1     Running   0          1m
openebs-grafana-2789105701-0rw6v                                 1/1     Running   0          14s
openebs-prometheus-4109589487-4bngb                              1/1     Running   0          14s
openebs-provisioner-2835097941-5fcxh                             1/1     Running   0          1m
percona-2503451898-5k9xw                                         1/1     Running   0          7m
pvc-8cc9c06c-ea22-11e7-9112-000c298ff5fc-ctrl-3477661062-t0pg9   1/1     Running   0          7m
pvc-8cc9c06c-ea22-11e7-9112-000c298ff5fc-rep-3163680705-4d7x2    1/1     Running   0          7m
pvc-8cc9c06c-ea22-11e7-9112-000c298ff5fc-rep-3163680705-lbgpc    1/1     Running   0          7m

test@Master:~$ kubectl get svc
NAME                                                CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
kubernetes                                          10.96.0.1        <none>        443/TCP             24h
maya-apiserver-service                              10.102.159.226   <none>        5656/TCP            9m
openebs-grafana                                     10.101.147.181   <none>        3000:32515/TCP      45s
openebs-prometheus-service                          10.106.180.138   <none>        80:32514/TCP        45s
percona-mysql                                       10.100.189.43    <none>        3306/TCP            7m
pvc-8cc9c06c-ea22-11e7-9112-000c298ff5fc-ctrl-svc   10.111.2.96      <none>        3260/TCP,9501/TCP   7m
```

**Notes** : This also creates a default Prometheus ConfigMap, which can be upgraded if needed. The Prometheus and Grafana services are available on node ports 32514 and 32515, respectively.
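If the node ports assigned in your cluster differ, a quick way to confirm them is a jsonpath query against the monitoring services; this is a convenience sketch, assuming the service names created above:

```
# Print the node ports assigned to Grafana and Prometheus.
kubectl get svc openebs-grafana -o jsonpath='{.spec.ports[0].nodePort}'; echo
kubectl get svc openebs-prometheus-service -o jsonpath='{.spec.ports[0].nodePort}'; echo
```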
### STEP-5: UPDATE OPENEBS VOLUME (CONTROLLER AND REPLICA) DEPLOYMENTS

Obtain the name of the OpenEBS PersistentVolume (PV) that has to be updated:

```
test@Master:~$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS    REASON   AGE
pvc-8cc9c06c-ea22-11e7-9112-000c298ff5fc   5G         RWO            Delete           Bound    default/demo-vol1-claim   openebs-basic
```

Run the script oebs_update.sh, passing the PV name as the argument:

```
test@Master:~$ ./oebs_update.sh pvc-8cc9c06c-ea22-11e7-9112-000c298ff5fc
deployment "pvc-8cc9c06c-ea22-11e7-9112-000c298ff5fc-rep" patched
deployment "pvc-8cc9c06c-ea22-11e7-9112-000c298ff5fc-ctrl" patched
replicaset "pvc-8cc9c06c-ea22-11e7-9112-000c298ff5fc-ctrl-59df76689f" deleted
```
**Notes** : This script fills the replica and controller patch files with the appropriate container names derived from the PV, and patches the volume deployments using the ```kubectl patch deployment``` command. In each case, it verifies that the new images have been rolled out successfully, using ```kubectl rollout status deployment```, before proceeding to the next step. After patching, it also deletes the orphaned replicaset of the controller deployment as a workaround for this issue: https://github.com/openebs/openebs/issues/1201

Verify that the volume controller and replica pods are running post upgrade:

```
test@Master:~$ kubectl get pods
NAME                                                             READY   STATUS    RESTARTS   AGE
maya-apiserver-2288016177-lzctj                                  1/1     Running   0          3m
openebs-grafana-2789105701-0rw6v                                 1/1     Running   0          2m
openebs-prometheus-4109589487-4bngb                              1/1     Running   0          2m
openebs-provisioner-2835097941-5fcxh                             1/1     Running   0          3m
percona-2503451898-5k9xw                                         1/1     Running   0          9m
pvc-8cc9c06c-ea22-11e7-9112-000c298ff5fc-ctrl-6489864889-ml2zw   2/2     Running   0          10s
pvc-8cc9c06c-ea22-11e7-9112-000c298ff5fc-rep-6b9f46bc6b-4vjkf    1/1     Running   0          20s
pvc-8cc9c06c-ea22-11e7-9112-000c298ff5fc-rep-6b9f46bc6b-hvc8b    1/1     Running   0          20s
```

### STEP-6: VERIFY THAT ALL THE REPLICAS ARE REGISTERED AND ARE IN RW MODE

Execute the following REST query, providing the controller pod IP or service IP, to obtain the replica status.

**Notes** :

- Get the pod/service IP using `kubectl get pods -o wide` or `kubectl get svc`, respectively
- Install the jq package on the Kubernetes node/master where the following command is executed

```
test@Master:~$ curl http://10.47.0.5:9501/v1/replicas | jq
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   971  100   971    0     0   419k      0 --:--:-- --:--:-- --:--:--  419k
{
  "createTypes": {
    "replica": "http://10.47.0.5:9501/v1/replicas"
  },
  "data": [
    {
      "actions": {
        "preparerebuild": "http://10.47.0.5:9501/v1/replicas/dGNwOi8vMTAuNDcuMC4zOjk1MDI=?action=preparerebuild",
        "verifyrebuild": "http://10.47.0.5:9501/v1/replicas/dGNwOi8vMTAuNDcuMC4zOjk1MDI=?action=verifyrebuild"
      },
      "address": "tcp://10.47.0.3:9502",
      "id": "dGNwOi8vMTAuNDcuMC4zOjk1MDI=",
      "links": {
        "self": "http://10.47.0.5:9501/v1/replicas/dGNwOi8vMTAuNDcuMC4zOjk1MDI="
      },
      "mode": "RW",
      "type": "replica"
    },
    {
      "actions": {
        "preparerebuild": "http://10.47.0.5:9501/v1/replicas/dGNwOi8vMTAuNDQuMC41Ojk1MDI=?action=preparerebuild",
        "verifyrebuild": "http://10.47.0.5:9501/v1/replicas/dGNwOi8vMTAuNDQuMC41Ojk1MDI=?action=verifyrebuild"
      },
      "address": "tcp://10.44.0.5:9502",
      "id":
"dGNwOi8vMTAuNDQuMC41Ojk1MDI=", - "links": { - "self": "http://10.47.0.5:9501/v1/replicas/dGNwOi8vMTAuNDQuMC41Ojk1MDI=" - }, - "mode": "RW", - "type": "replica" - } - ], - "links": { - "self": "http://10.47.0.5:9501/v1/replicas" - }, - "resourceType": "replica", - "type": "collection" -} -``` - -### STEP-7: CONFIGURE GRAFANA TO MONITOR VOLUME METRICS - -Perform the following actions if Step-4 was executed. - -- Access the grafana dashboard at `http://*NodeIP*:32515` -- Add the prometheus data source by giving URL as `http://*NodeIP*:32514` -- Once data source is validated, import the dashboard JSON from : - https://raw.githubusercontent.com/openebs/openebs/master/k8s/openebs-pg-dashboard.json -- Access the volume stats by selecting the volume name (pvc-*) in the OpenEBS Volume dashboard - -**Note** : For new applications select a newly created storage-class that has monitoring enabled to automatically start viewing metrics diff --git a/k8s/upgrades/0.4.0-0.5.0/controller.patch.tpl.yml b/k8s/upgrades/0.4.0-0.5.0/controller.patch.tpl.yml deleted file mode 100644 index 9eaebed0bd..0000000000 --- a/k8s/upgrades/0.4.0-0.5.0/controller.patch.tpl.yml +++ /dev/null @@ -1,44 +0,0 @@ -{ - "spec": { - "selector": { - "matchLabels": { - "monitoring": "volume_exporter_prometheus" - } - }, - "template": { - "metadata": { - "labels": { - "monitoring": "volume_exporter_prometheus" - } - }, - "spec": { - "containers":[ - { - "name": "pvc--ctrl-con", - "image": "openebs/jiva:0.5.0" - }, - { - "args": [ - "-c=http://127.0.0.1:9501" - ], - "command": [ - "maya-volume-exporter" - ], - "image": "openebs/m-exporter:0.5.0", - "imagePullPolicy": "IfNotPresent", - "name": "maya-volume-exporter", - "ports": [ - { - "containerPort": 9500, - "protocol": "TCP" - } - ], - "resources": {}, - "terminationMessagePath": "/dev/termination-log", - "terminationMessagePolicy": "File" - } - ] - } - } - } -} diff --git a/k8s/upgrades/0.4.0-0.5.0/oebs_update.sh b/k8s/upgrades/0.4.0-0.5.0/oebs_update.sh deleted file mode 100755 index 0d19174460..0000000000 --- a/k8s/upgrades/0.4.0-0.5.0/oebs_update.sh +++ /dev/null @@ -1,68 +0,0 @@ -#!/usr/bin/env bash - -################################################################ -# STEP: Get Persistent Volume (PV) name as argument # -# # -# NOTES: Obtain the pv to upgrade via "kubectl get pv" # -################################################################ - -pv=$1 - -################################################################ -# STEP: Generate deploy, replicaset and container names from PV# -# # -# NOTES: Ex: If PV="pvc-cec8e86d-0bcc-11e8-be1c-000c298ff5fc", # -# # -# ctrl-dep: pvc-cec8e86d-0bcc-11e8-be1c-000c298ff5fc-ctrl # -# ctrl-cont: pvc-cec8e86d-0bcc-11e8-be1c-000c298ff5fc-ctrl-con # -################################################################ - -c_dep=$(echo $pv-ctrl); c_name=$(echo $c_dep-con) -r_dep=$(echo $pv-rep); r_name=$(echo $r_dep-con) - -c_rs=$(kubectl get rs -o name | grep $c_dep | cut -d '/' -f 2) - -################################################################ -# STEP: Update patch files with appropriate container names # -# # -# NOTES: Placeholder "pvc--ctrl/rep-con in the # -# patch files are replaced with container names derived from # -# the PV in the previous step # -################################################################ - -sed -i "s/pvc[^ \"]*/$r_name/g" replica.patch.tpl.yml -sed -i "s/pvc[^ \"]*/$c_name/g" controller.patch.tpl.yml - -################################################################ -# STEP: Patch OpenEBS volume deployments 
(controller, replica)                                          #
#                                                              #
# NOTES: Strategic merge patch is used to update the volume,   #
# with rollout status verification                             #
################################################################

#### PATCH JIVA REPLICA DEPLOYMENT ####
kubectl patch deployment $r_dep -p "$(cat replica.patch.tpl.yml)"
rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: $rc"; exit; fi

rollout_status=$(kubectl rollout status deployment/$r_dep)
rc=$?; if [[ ($rc -ne 0) || !($rollout_status =~ "successfully rolled out") ]];
then echo "ERROR: $rc"; exit; fi

#### PATCH CONTROLLER DEPLOYMENT ####
kubectl patch deployment $c_dep -p "$(cat controller.patch.tpl.yml)"
rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: $rc"; exit; fi

rollout_status=$(kubectl rollout status deployment/$c_dep)
rc=$?; if [[ ($rc -ne 0) || !($rollout_status =~ "successfully rolled out") ]];
then echo "ERROR: $rc"; exit; fi

################################################################
# STEP: Remove Stale Controller Replicaset                     #
#                                                              #
# NOTES: This step is applicable upon label selector updates,  #
# where the deployment creates orphaned replicasets            #
################################################################
kubectl delete rs $c_rs

diff --git a/k8s/upgrades/0.4.0-0.5.0/replica.patch.tpl.yml b/k8s/upgrades/0.4.0-0.5.0/replica.patch.tpl.yml deleted file mode 100644 index 487246e3a6..0000000000 --- a/k8s/upgrades/0.4.0-0.5.0/replica.patch.tpl.yml +++ /dev/null @@ -1,14 +0,0 @@
{
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "pvc--rep-con",
            "image": "openebs/jiva:0.5.0"
          }
        ]
      }
    }
  }
}
diff --git a/k8s/upgrades/0.5.0-0.5.1/README.md b/k8s/upgrades/0.5.0-0.5.1/README.md deleted file mode 100644 index 15cc126548..0000000000 --- a/k8s/upgrades/0.5.0-0.5.1/README.md +++ /dev/null @@ -1,19 +0,0 @@

# UPGRADE FROM OPENEBS 0.5.0 TO 0.5.1

Follow the steps suggested in the [README](https://github.com/openebs/openebs/blob/master/k8s/upgrades/0.4.0-0.5.0/README.md) for upgrading OpenEBS from 0.4.0 to 0.5.0, with the following minor changes:

- Step #2 : Obtain the specifications from https://github.com/openebs/openebs/releases/tag/v0.5.1

- Step #4 : Monitoring is supported from 0.5.0 onwards. These pods may be created on an as-needed basis if they are not already running

- Step #5 : The script oebs_update.sh that updates the volume deployments should be:

  - Copied into the 0.5.0-0.5.1 folder, OR updated with relative paths pointing to the appropriate patch files
  - Updated by commenting out the step that deletes the stale controller replicaset, as that issue does not apply when upgrading from 0.5.0 to 0.5.1

- Step #7 : Refer to Step #4
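In practice, Step #5 amounts to re-running the same patch-and-verify sequence against the 0.5.1 patch files shown below. A minimal sketch of that core loop, assuming the PV name is passed as the first argument exactly as in oebs_update.sh:

```
pv=$1   # PV name, obtained via "kubectl get pv"

# Patch the replica and controller deployments to the 0.5.1 images and
# verify each rollout before moving on; the stale-replicaset deletion
# step is skipped, as it is not needed for 0.5.0 -> 0.5.1.
kubectl patch deployment "$pv-rep" -p "$(cat replica.patch.tpl.yml)"
kubectl rollout status "deployment/$pv-rep"

kubectl patch deployment "$pv-ctrl" -p "$(cat controller.patch.tpl.yml)"
kubectl rollout status "deployment/$pv-ctrl"
```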
diff --git a/k8s/upgrades/0.5.0-0.5.1/controller.patch.tpl.yml b/k8s/upgrades/0.5.0-0.5.1/controller.patch.tpl.yml deleted file mode 100644 index b975d631a7..0000000000 --- a/k8s/upgrades/0.5.0-0.5.1/controller.patch.tpl.yml +++ /dev/null @@ -1,14 +0,0 @@
{
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "pvc--ctrl-con",
            "image": "openebs/jiva:0.5.1"
          }
        ]
      }
    }
  }
}
diff --git a/k8s/upgrades/0.5.0-0.5.1/replica.patch.tpl.yml b/k8s/upgrades/0.5.0-0.5.1/replica.patch.tpl.yml deleted file mode 100644 index 4f86966cd3..0000000000 --- a/k8s/upgrades/0.5.0-0.5.1/replica.patch.tpl.yml +++ /dev/null @@ -1,14 +0,0 @@
{
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "pvc--rep-con",
            "image": "openebs/jiva:0.5.1"
          }
        ]
      }
    }
  }
}
diff --git a/k8s/upgrades/0.5.x-0.6.0/README.md b/k8s/upgrades/0.5.x-0.6.0/README.md deleted file mode 100644 index c0ea08f2c0..0000000000 --- a/k8s/upgrades/0.5.x-0.6.0/README.md +++ /dev/null @@ -1,103 +0,0 @@

# UPGRADE FROM OPENEBS 0.5.3+ TO 0.6.0

## Overview

This document describes the steps for upgrading OpenEBS from 0.5.3 or 0.5.4 to 0.6.0. The upgrade of OpenEBS is a two-step process.
- *Step 1* - Upgrade the OpenEBS Operator
- *Step 2* - Upgrade the OpenEBS Volumes that were created with the older OpenEBS Operator (0.5.3 or 0.5.4)

### Terminology
- *OpenEBS Operator : Refers to maya-apiserver & openebs-provisioner along with their respective services, service a/c, roles, rolebindings*
- *OpenEBS Volume: The Jiva controller & replica pods*
- *All steps described in this document need to be performed on the Kubernetes master or from a machine that has access to the Kubernetes master*

## Step 1: Upgrade the OpenEBS Operator

OpenEBS installation is very flexible and highly configurable. It can be installed using the default openebs-operator.yaml file with default settings, or via a customized openebs-operator.yaml. One of the key features added in OpenEBS 0.6 is the option of selecting the nodes on which replicas will be installed. To enable this feature, you will need to label the nodes using `kubectl label nodes ...` and then customize the default openebs-operator.yaml to include the label in the `REPLICA_NODE_SELECTOR_LABEL`. Note that this node-selector feature helps if you have a K8s cluster of more than 3 nodes and you would like to restrict the volume replicas to a subset of 3 nodes.

Upgrade steps for the OpenEBS Operator depend on the way OpenEBS was installed; select one of the following:

### Install/Upgrade using kubectl (using openebs-operator.yaml)

**The sample steps below will work if you have installed openebs without modifying the default values**

```
# Delete the older operator and storage classes. With OpenEBS 0.6, all the
# components are installed in the namespace `openebs`, as opposed to the
# `default` namespace in earlier releases. Before upgrading to 0.6, delete
# the older version and then apply the newer versions.
kubectl delete -f https://raw.githubusercontent.com/openebs/openebs/v0.5/k8s/openebs-operator.yaml
kubectl delete -f https://raw.githubusercontent.com/openebs/openebs/v0.5/k8s/openebs-storageclasses.yaml
# Wait for the objects to be deleted; you can check using `kubectl get deploy`.

# Install the 0.6 operator and storage classes.
kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/v0.6/k8s/openebs-operator.yaml
kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/v0.6/k8s/openebs-storageclasses.yaml
```
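Because the 0.5 objects are deleted asynchronously, it can help to wait until they are actually gone before applying the 0.6 YAMLs. A small hedged sketch, assuming the default deployment names and the `default` namespace used by the 0.5 operator:

```
# Block until the old 0.5 operator deployments have been removed.
while kubectl get deploy maya-apiserver >/dev/null 2>&1 || \
      kubectl get deploy openebs-provisioner >/dev/null 2>&1; do
  echo "waiting for the 0.5 operator objects to be deleted..."
  sleep 5
done
```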
-kubectl delete -f https://raw.githubusercontent.com/openebs/openebs/v0.5/k8s/openebs-operator.yaml
-kubectl delete -f https://raw.githubusercontent.com/openebs/openebs/v0.5/k8s/openebs-storageclasses.yaml
-#Wait for the objects to be deleted. You can check using `kubectl get deploy`
-
-#Install the 0.6 operator and storage classes.
-kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/v0.6/k8s/openebs-operator.yaml
-kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/v0.6/k8s/openebs-storageclasses.yaml
-```
-
-### Install/Upgrade using helm chart (using stable/openebs, openebs-charts repo, etc.)
-
-**The sample steps below will work if you have installed openebs with the default values provided by the stable/openebs helm chart.**
-
-- Run `helm ls` to get the release name of openebs.
-- Upgrade using `helm upgrade <release-name> -f https://openebs.github.io/charts/helm-values-0.6.0.yaml stable/openebs`
-
-### Using customized operator YAML or helm chart
-As a first step, you must update your custom helm chart or YAML with the 0.6 release tags and the changes made in the values/templates.
-
-You can use the following as references to learn about the changes in 0.6:
-- stable/openebs [PR#6768](https://github.com/helm/charts/pull/6768) or
-- openebs-charts [PR#1646](https://github.com/openebs/openebs/pull/1646)
-
-After updating the YAML or helm chart or helm chart values, you can use the above procedures to upgrade the OpenEBS Operator.
-
-## Step 2: Upgrade the OpenEBS Volumes
-
-Even after the OpenEBS Operator has been upgraded to 0.6, the volumes will continue to work with 0.5.3 or 0.5.4. Each of the volumes should be upgraded (one at a time) to 0.6, using the steps provided below.
-
-*Note: There has been a change in the way OpenEBS Controller Pods communicate with the Replica Pods. It is therefore recommended to schedule a downtime for the application using the OpenEBS PV while performing this upgrade. Also, make sure you have taken a backup of the data before starting the upgrade procedure below.*
-
-In 0.5.x releases, when a replica is shut down, it gets rescheduled to another available node in the cluster and starts copying the data from the other replicas. This behaviour is undesirable during an upgrade, since the rolling upgrade would create new replicas. To pin the replicas to the nodes where the data is already present, 0.6 uses nodeSelector and tolerations to make sure replicas are not moved by node or pod delete operations.
-
-As part of the upgrade, we therefore recommend that you label the nodes where the replica pods are scheduled, as follows:
-```
-kubectl label nodes gke-kmova-helm-default-pool-d8b227cc-6wqr "openebs-pv"="openebs-storage"
-```
-Note that the key `openebs-pv` is fixed; however, you can use any value in place of `openebs-storage`. This value is taken as a parameter by the upgrade script below.
-
-Repeat the above labelling step for all the nodes where replicas are scheduled. The assumption is that all the replicas of a PV are scheduled on the same set of 3 nodes.
-
-Limitations:
-- cases where there is a mix of PVs with 1 and 3 replicas are not handled
-- scenarios like PV1 replicas on nodes n1, n2, n3 whereas PV2 replicas are on nodes n2, n3, n4 are not handled
-- this is a preliminary script intended only for volumes where the data has been backed up
-- please have the following link handy in case the volume gets into read-only during upgrade - https://docs.openebs.io/docs/next/readonlyvolumes.html -- automatic rollback option is not provided. To rollback, you need to update the controller, exporter and replica pod images to the previous version -- in the process of running the below steps, if you run into issues, you can always reach us on slack - -### Download the upgrade scripts - -Either `git clone` or download the following files to your work directory. -https://github.com/openebs/openebs/tree/master/k8s/upgrades/0.5.x-0.6.0 -- `patch-strategy-recreate.json` -- `replica.patch.tpl.yml` -- `controller.patch.tpl.yml` -- `oebs_update.sh` - -### Select the PV that needs to be upgraded. - -``` -kubectl get pv -``` - -``` -NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE -pvc-48fb36a2-947f-11e8-b1f3-42010a800004 5G RWO Delete Bound percona-test/demo-vol1-claim openebs-percona 8m -``` - -### Upgrade the PV that needs to be upgraded. - -``` -./oebs_update.sh pvc-48fb36a2-947f-11e8-b1f3-42010a800004 openebs-storage -``` - diff --git a/k8s/upgrades/0.5.x-0.6.0/controller.patch.tpl.json b/k8s/upgrades/0.5.x-0.6.0/controller.patch.tpl.json deleted file mode 100644 index 6456da5484..0000000000 --- a/k8s/upgrades/0.5.x-0.6.0/controller.patch.tpl.json +++ /dev/null @@ -1,71 +0,0 @@ -{ - "metadata": { - "labels": { - "openebs/volume-provisioner": "jiva", - "pvc": "@pvc-name" - } - }, - "spec": { - "selector": { - "matchLabels": { - "openebs/volume-provisioner": "jiva", - "pvc": "@pvc-name" - } - }, - "template": { - "metadata": { - "labels": { - "openebs/volume-provisioner": "jiva", - "pvc": "@pvc-name" - } - }, - "spec": { - "containers":[ - { - "name": "@c_name", - "image": "openebs/jiva:0.6.0", - "env":[ - { - "name": "REPLICATION_FACTOR", - "value": "@rep_count" - } - ] - }, - { - "name": "maya-volume-exporter", - "command": [ - "maya-exporter" - ], - "image": "openebs/m-exporter:0.6.0" - } - ], - "tolerations": [ - { - "effect": "NoExecute", - "key": "node.alpha.kubernetes.io/notReady", - "operator": "Exists", - "tolerationSeconds": 0 - }, - { - "effect": "NoExecute", - "key": "node.alpha.kubernetes.io/unreachable", - "operator": "Exists", - "tolerationSeconds": 0 - }, - { - "effect": "NoExecute", - "key": "node.kubernetes.io/not-ready", - "operator": "Exists", - "tolerationSeconds": 0 - }, - { - "effect": "NoExecute", - "key": "node.kubernetes.io/unreachable", - "operator": "Exists", - "tolerationSeconds": 0 - } - ] - } - } - } -} diff --git a/k8s/upgrades/0.5.x-0.6.0/oebs_update.sh b/k8s/upgrades/0.5.x-0.6.0/oebs_update.sh deleted file mode 100755 index de112d4db1..0000000000 --- a/k8s/upgrades/0.5.x-0.6.0/oebs_update.sh +++ /dev/null @@ -1,139 +0,0 @@ -#!/usr/bin/env bash - -################################################################ -# STEP: Get Persistent Volume (PV) name as argument # -# # -# NOTES: Obtain the pv to upgrade via "kubectl get pv" # -################################################################ - -if [ "$#" -ne 2 ]; then - echo - echo "Usage:" - echo - echo "$0 " - echo - echo " Get the PV name using: kubectl get pv" - echo " Label applied to the nodes where replicas of" - echo " this PV are present. 
Get the nodes by running:" - echo " kubectl get pods --all-namespaces -o wide | grep " - exit 1 -fi - -pv=$1 -replica_node_label=$2 - -pvc=`kubectl get pv $pv -o jsonpath="{.spec.claimRef.name}"` -ns=`kubectl get pv $pv -o jsonpath="{.spec.claimRef.namespace}"` - -################################################################ -# STEP: Generate deploy, replicaset and container names from PV# -# # -# NOTES: Ex: If PV="pvc-cec8e86d-0bcc-11e8-be1c-000c298ff5fc", # -# # -# ctrl-dep: pvc-cec8e86d-0bcc-11e8-be1c-000c298ff5fc-ctrl # -# ctrl-cont: pvc-cec8e86d-0bcc-11e8-be1c-000c298ff5fc-ctrl-con # -################################################################ - -c_dep=$(echo $pv-ctrl); c_name=$(echo $c_dep-con) -r_dep=$(echo $pv-rep); r_name=$(echo $r_dep-con) - -# Get the number of replicas configured. -# This field is currently not used, but can add additional validations -# based on the nodes and expected number of replicas -rep_count=`kubectl get deploy $r_dep --namespace $ns -o jsonpath="{.spec.replicas}"` - -# Get the list of nodes where replica pods are running, delimited by ':' -rep_nodenames=`kubectl get pods -n $ns $rep_labels \ - -l "vsm=$pv" -l "openebs/replica=jiva-replica" \ - -o jsonpath="{range .items[*]}{@.spec.nodeName}:{end}"` - -echo "Checking if the node with replica pod has been labeled with $replica_node_label" -for rep_node in `echo $rep_nodenames | tr ":" " "`; do - nl="";nl=`kubectl get nodes $rep_node -o jsonpath="{.metadata.labels.openebs-pv-$pv}"` - if [ -z "$nl" ]; - then - echo "Labeling $rep_node"; - kubectl label node $rep_node "openebs-pv-${pv}=$replica_node_label" - fi -done - - -echo "Patching Replica Deployment upgrade strategy as recreate" -kubectl patch deployment --namespace $ns --type json $r_dep -p "$(cat patch-strategy-recreate.json)" -rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: $rc"; exit; fi - -echo "Patching Controller Deployment upgrade strategy as recreate" -kubectl patch deployment --namespace $ns --type json $c_dep -p "$(cat patch-strategy-recreate.json)" -rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: $rc"; exit; fi - -# Fetch the older controller and replica - ReplicaSet objects which need to be -# deleted before upgrading. If not deleted, the new pods will be stuck in -# creating state - due to affinity rules. 
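# Editor's note (a sketch): grep on the deployment name can match more than one
# ReplicaSet when several revisions exist. A stricter alternative, which the
# later 0.7.0-0.8.0 scripts in this repo use, selects by ownerReferences:
#   kubectl get rs -n $ns \
#     -o jsonpath="{range .items[?(@.metadata.ownerReferences[0].name=='$c_dep')]}{@.metadata.name}{end}"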
-c_rs=$(kubectl get rs -o name --namespace $ns | grep $c_dep | cut -d '/' -f 2) -r_rs=$(kubectl get rs -o name --namespace $ns | grep $r_dep | cut -d '/' -f 2) - -################################################################ -# STEP: Update patch files with appropriate container names # -# # -# NOTES: Placeholder "pvc--ctrl/rep-con in the # -# patch files are replaced with container names derived from # -# the PV in the previous step # -################################################################ - -sed "s/@pvc-name[^ \"]*/$pvc/g" replica.patch.tpl.json > replica.patch.tpl.json.0 -sed "s/@replica_node_label[^ \"]*/$replica_node_label/g" replica.patch.tpl.json.0 > replica.patch.tpl.json.1 -sed "s/@pv-name[^ \"]*/$pv/g" replica.patch.tpl.json.1 > replica.patch.tpl.json.2 -sed "s/@r_name[^ \"]*/$r_name/g" replica.patch.tpl.json.2 > replica.patch.json - -sed "s/@pvc-name[^ \"]*/$pvc/g" controller.patch.tpl.json > controller.patch.tpl.json.0 -sed "s/@c_name[^ \"]*/$c_name/g" controller.patch.tpl.json.0 > controller.patch.tpl.json.1 -sed "s/@rep_count[^ \"]*/$rep_count/g" controller.patch.tpl.json.1 > controller.patch.json - -################################################################ -# STEP: Patch OpenEBS volume deployments (controller, replica) # -# # -# NOTES: Strategic merge patch is used to update the volume w/ # -# rollout status verification # -################################################################ - -# PATCH JIVA REPLICA DEPLOYMENT #### -echo "Upgrading Replica Deployment to 0.6" -kubectl patch deployment --namespace $ns $r_dep -p "$(cat replica.patch.json)" -rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: $rc"; exit; fi - -kubectl delete rs $r_rs --namespace $ns - -rollout_status=$(kubectl rollout status --namespace $ns deployment/$r_dep) -rc=$?; if [[ ($rc -ne 0) || !($rollout_status =~ "successfully rolled out") ]]; -then echo "ERROR: $rc"; exit; fi - -#### PATCH CONTROLLER DEPLOYMENT #### -echo "Upgrading Controller Deployment to 0.6" -kubectl patch deployment --namespace $ns $c_dep -p "$(cat controller.patch.json)" -rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: $rc"; exit; fi - -kubectl delete rs $c_rs --namespace $ns - -rollout_status=$(kubectl rollout status --namespace $ns deployment/$c_dep) -rc=$?; if [[ ($rc -ne 0) || !($rollout_status =~ "successfully rolled out") ]]; -then echo "ERROR: $rc"; exit; fi - -################################################################ -# STEP: Remove Stale Controller Replicaset # -# # -# NOTES: This step is applicable upon label selector updates, # -# where the deployment creates orphaned replicasets # -################################################################ - -echo "Clearing temporary files" -rm replica.patch.tpl.json.0 -rm replica.patch.tpl.json.1 -rm replica.patch.tpl.json.2 -rm replica.patch.json -rm controller.patch.tpl.json.0 -rm controller.patch.tpl.json.1 -rm controller.patch.json - -echo "Successfully upgraded $pv to 0.6. Please run your application checks." 
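# Editor's sketch (optional verification, not part of the original script):
# confirm both deployments now reference the 0.6.0 images before trusting
# the success message above.
#   kubectl get deploy $c_dep $r_dep -n $ns \
#     -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.containers[*].image}{"\n"}{end}'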
-exit 0 - diff --git a/k8s/upgrades/0.5.x-0.6.0/patch-strategy-recreate.json b/k8s/upgrades/0.5.x-0.6.0/patch-strategy-recreate.json deleted file mode 100644 index 8c6c5c60af..0000000000 --- a/k8s/upgrades/0.5.x-0.6.0/patch-strategy-recreate.json +++ /dev/null @@ -1,4 +0,0 @@ -[ - { "op": "remove", "path": "/spec/strategy/rollingUpdate" }, - { "op": "replace", "path": "/spec/strategy/type", "value": "Recreate" } -] diff --git a/k8s/upgrades/0.5.x-0.6.0/replica.patch.tpl.json b/k8s/upgrades/0.5.x-0.6.0/replica.patch.tpl.json deleted file mode 100644 index 843f1526dc..0000000000 --- a/k8s/upgrades/0.5.x-0.6.0/replica.patch.tpl.json +++ /dev/null @@ -1,87 +0,0 @@ -{ - "metadata": { - "labels": { - "openebs/volume-provisioner": "jiva", - "pvc": "@pvc-name" - } - }, - "spec": { - "selector": { - "matchLabels": { - "openebs/volume-provisioner": "jiva", - "pvc": "@pvc-name" - } - }, - "template": { - "metadata": { - "labels": { - "openebs/volume-provisioner": "jiva", - "pvc": "@pvc-name" - } - }, - "spec": { - "containers":[ - { - "name": "@r_name", - "image": "openebs/jiva:0.6.0" - } - ], - "nodeSelector": { - "openebs-pv-@pv-name": "@replica_node_label" - }, - "tolerations": [ - { - "effect": "NoExecute", - "key": "node.alpha.kubernetes.io/notReady", - "operator": "Exists" - }, - { - "effect": "NoExecute", - "key": "node.alpha.kubernetes.io/unreachable", - "operator": "Exists" - }, - { - "effect": "NoExecute", - "key": "node.kubernetes.io/not-ready", - "operator": "Exists" - }, - { - "effect": "NoExecute", - "key": "node.kubernetes.io/unreachable", - "operator": "Exists" - }, - { - "effect": "NoExecute", - "key": "node.kubernetes.io/out-of-disk", - "operator": "Exists" - }, - { - "effect": "NoExecute", - "key": "node.kubernetes.io/memory-pressure", - "operator": "Exists" - }, - { - "effect": "NoExecute", - "key": "node.kubernetes.io/disk-pressure", - "operator": "Exists" - }, - { - "effect": "NoExecute", - "key": "node.kubernetes.io/network-unavailable", - "operator": "Exists" - }, - { - "effect": "NoExecute", - "key": "node.kubernetes.io/unschedulable", - "operator": "Exists" - }, - { - "effect": "NoExecute", - "key": "node.cloudprovider.kubernetes.io/uninitialized", - "operator": "Exists" - } - ] - } - } - } -} diff --git a/k8s/upgrades/0.6.0-0.7.0/README.md b/k8s/upgrades/0.6.0-0.7.0/README.md deleted file mode 100644 index d05a81308f..0000000000 --- a/k8s/upgrades/0.6.0-0.7.0/README.md +++ /dev/null @@ -1,165 +0,0 @@ -# UPGRADE FROM OPENEBS 0.6.0 TO 0.7.x - -## Overview - -This document describes the steps for upgrading OpenEBS from 0.6.0 to 0.7.x - -The upgrade of OpenEBS is a two step process: -- *Step 1* - Upgrade the OpenEBS Operator -- *Step 2* - Upgrade the OpenEBS Volumes from previous versions (0.6.0, 0.5.x) - -### Terminology -- *OpenEBS Operator : Refers to maya-apiserver & openebs-provisioner along w/ respective services, service a/c, roles, rolebindings* -- *OpenEBS Volume: The Jiva controller(aka target) & replica pods* - -## Prerequisites - -*All steps described in this document need to be performed on the Kubernetes master or from a machine that has access to Kubernetes master* - -### Download the upgrade scripts - -You can either `git clone` or download the upgrade scripts. 
-
-```
-mkdir upgrade-openebs
-cd upgrade-openebs
-git clone https://github.com/openebs/openebs.git
-cd openebs/k8s/upgrades/0.6.0-0.7.0/
-```
-
-Or
-
-Download the following files to your work directory from https://github.com/openebs/openebs/tree/master/k8s/upgrades/0.6.0-0.7.0
-- `patch-strategy-recreate.json`
-- `jiva-replica-patch.tpl.json`
-- `jiva-target-patch.tpl.json`
-- `jiva-target-svc-patch.tpl.json`
-- `target-patch-remove-labels.json`
-- `target-svc-patch-remove-labels.json`
-- `replica-patch-remove-labels.json`
-- `sc.patch.tpl.yaml`
-- `upgrade_sc.sh`
-- `oebs_update.sh`
-- `pre_upgrade.sh`
-
-### Breaking Changes in 0.7.x
-
-#### Default Jiva Storage Pool
-OpenEBS 0.7.0 auto-installs a default Jiva Storage Pool and a default Storage Class, named `default` and `openebs-jiva-default` respectively. If you have a storage pool named `default` created in an earlier version, you will have to re-apply your Storage Pool after the upgrade is completed.
-
-Before upgrading the OpenEBS Operator, check whether you are using a storage pool named `default`, which would conflict with the default jiva pool installed with OpenEBS 0.7.0:
-```
-./pre_upgrade.sh <openebs-namespace>
-```
-
-#### Storage Classes
-OpenEBS supports specifying Storage Policies in Storage Classes. The way storage policies are specified has changed in 0.7.x: the policies have to be specified under metadata annotations instead of parameters.
-
-For example, if your storage class looks like this in 0.6.0:
-```
-apiVersion: storage.k8s.io/v1
-kind: StorageClass
-metadata:
-  name: openebs-mongodb
-provisioner: openebs.io/provisioner-iscsi
-parameters:
-  openebs.io/storage-pool: "default"
-  openebs.io/jiva-replica-count: "3"
-  openebs.io/volume-monitor: "true"
-  openebs.io/capacity: 5G
-  openebs.io/fstype: "xfs"
-```
-
-There is no need to mention the volume-monitor and capacity with 0.7.0. The remaining policies, like storage pool, replica count and fstype, should be specified as follows:
-```
-apiVersion: storage.k8s.io/v1
-kind: StorageClass
-metadata:
-  name: openebs-mongodb
-  annotations:
-    cas.openebs.io/config: |
-      - name: ReplicaCount
-        value: "3"
-      - name: StoragePool
-        value: default
-      - name: FSType
-        value: "xfs"
-provisioner: openebs.io/provisioner-iscsi
-```
-
-To make these edits to your Storage Class YAMLs, delete the Storage Classes and add them back. A delete and re-apply is required since updates to Storage Class parameters are not possible.
-
-If you are using `ext4` for FSType, you can use the following script to upgrade your StorageClasses:
-```
-./upgrade_sc.sh
-```
-
-Alternatively, you can skip this step and re-apply your StorageClasses as per the 0.7.0 volume policy specification.
-
-**Important Note: StorageClasses have to be updated prior to provisioning any new volumes with 0.7.0.**
-
-## Step 1: Upgrade the OpenEBS Operator
-
-### Upgrading OpenEBS Operator CRDs and Deployments
-
-The upgrade steps vary depending on the way OpenEBS was installed. Select one of the following:
-
-#### Install/Upgrade using kubectl (using openebs-operator.yaml)
-
-**The sample steps below will work if you have installed openebs without modifying the default values in openebs-operator.yaml. If you have customized it for your cluster, you will have to download the 0.7.0 openebs-operator.yaml and customize it again.**
-
-```
-#If upgrading from 0.5.x, delete the older operator first.
-# Starting with OpenEBS 0.6, all the components are installed in the namespace `openebs`
-# as opposed to the `default` namespace in earlier releases.
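# Editor's sketch (hypothetical): confirm where the existing components live
# before deleting; a 0.5.x install places them in the default namespace:
#   kubectl get deploy --all-namespaces | grep -E 'maya-apiserver|openebs-provisioner'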
-kubectl delete -f https://raw.githubusercontent.com/openebs/openebs/v0.5/k8s/openebs-operator.yaml
-#Wait for the objects to be deleted. You can check using `kubectl get deploy`
-
-#Upgrade to the 0.7 OpenEBS Operator
-kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.7.1.yaml
-```
-
-#### Install/Upgrade using helm chart (using stable/openebs, openebs-charts repo, etc.)
-
-**The sample steps below will work if you have installed openebs with the default values provided by the stable/openebs helm chart.**
-
-- Run `helm ls` to get the release name of openebs.
-- Upgrade using `helm upgrade <release-name> -f https://openebs.github.io/charts/helm-values-0.7.1.yaml stable/openebs`
-
-#### Using customized operator YAML or helm chart
-As a first step, you must update your custom helm chart or YAML with the 0.7 release tags and the changes made in the values/templates.
-
-You can use the following as a reference to learn about the changes in 0.7:
-- openebs-charts [PR#1878](https://github.com/openebs/openebs/pull/1878)
-
-After updating the YAML or helm chart or helm chart values, you can use the above procedures to upgrade the OpenEBS Operator.
-
-## Step 2: Upgrade the OpenEBS Volumes
-
-Even after the OpenEBS Operator has been upgraded to 0.7, the volumes will continue to work with older versions. Each of the volumes should be upgraded (one at a time) to 0.7, using the steps provided below.
-
-*Note: Upgrade functionality is still under active development. It is highly recommended to schedule a downtime for the application using the OpenEBS PV while performing this upgrade. Also, make sure you have taken a backup of the data before starting the upgrade procedure below.*
-
-Limitations:
-- this is a preliminary script intended only for volumes where the data has been backed up
-- keep the following link handy in case the volume becomes read-only during the upgrade: https://docs.openebs.io/docs/next/readonlyvolumes.html
-- an automatic rollback option is not provided; to roll back, you need to update the controller, exporter and replica pod images to the previous version
-- if you run into issues while running the steps below, you can always reach us on slack
-
-
-```
-kubectl get pv
-```
-
-```
-NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                          STORAGECLASS      REASON    AGE
-pvc-48fb36a2-947f-11e8-b1f3-42010a800004   5G         RWO            Delete           Bound     percona-test/demo-vol1-claim   openebs-percona             8m
-```
-
-### Upgrade the PV that needs to be upgraded.
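The script derives the names of the objects it patches from the PV name, using the `<pv>-ctrl` and `<pv>-rep` naming convention visible in the scripts below. A small sketch (using the example PV from the listing above) to preview what will be patched before running the upgrade:

```
pv=pvc-48fb36a2-947f-11e8-b1f3-42010a800004
ns=$(kubectl get pv "$pv" -o jsonpath="{.spec.claimRef.namespace}")
kubectl get deploy -n "$ns" "${pv}-ctrl" "${pv}-rep"
```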
- -``` -./oebs_update.sh pvc-48fb36a2-947f-11e8-b1f3-42010a800004 openebs-storage -``` - diff --git a/k8s/upgrades/0.6.0-0.7.0/jiva-replica-patch.tpl.json b/k8s/upgrades/0.6.0-0.7.0/jiva-replica-patch.tpl.json deleted file mode 100644 index 734c8fd7b3..0000000000 --- a/k8s/upgrades/0.6.0-0.7.0/jiva-replica-patch.tpl.json +++ /dev/null @@ -1,113 +0,0 @@ -{ - "metadata": { - "annotations": { - "openebs.io/capacity": "TODO", - "openebs.io/storage-pool": "TODO" - }, - "labels": { - "openebs.io/cas-type": "jiva", - "openebs.io/persistent-volume": "@pv-name", - "openebs.io/persistent-volume-claim": "@pvc-name", - "openebs.io/replica": "jiva-replica", - "openebs.io/storage-engine-type": "jiva", - "vsm": "deprecated" - } - }, - "spec": { - "selector": { - "matchLabels": { - "openebs.io/persistent-volume": "@pv-name", - "openebs.io/replica": "jiva-replica", - "vsm": "deprecated" - } - }, - "template": { - "metadata": { - "labels": { - "openebs.io/persistent-volume": "@pv-name", - "openebs.io/persistent-volume-claim": "@pvc-name", - "openebs.io/replica": "jiva-replica", - "vsm": "deprecated" - } - }, - "spec": { - "containers":[ - { - "name": "@r_name", - "image": "quay.io/openebs/jiva:0.7.2" - } - ], - "nodeSelector": { - "openebs-pv-@pv-name": "@replica_node_label" - }, - "affinity": { - "podAntiAffinity": { - "requiredDuringSchedulingIgnoredDuringExecution" : [ - { - "labelSelector": { - "matchLabels": { - "openebs.io/replica": "jiva-replica", - "openebs.io/persistent-volume": "@pv-name" - } - }, - "topologyKey": "kubernetes.io/hostname" - } - ] - } - }, - "tolerations": [ - { - "effect": "NoExecute", - "key": "node.alpha.kubernetes.io/notReady", - "operator": "Exists" - }, - { - "effect": "NoExecute", - "key": "node.alpha.kubernetes.io/unreachable", - "operator": "Exists" - }, - { - "effect": "NoExecute", - "key": "node.kubernetes.io/not-ready", - "operator": "Exists" - }, - { - "effect": "NoExecute", - "key": "node.kubernetes.io/unreachable", - "operator": "Exists" - }, - { - "effect": "NoExecute", - "key": "node.kubernetes.io/out-of-disk", - "operator": "Exists" - }, - { - "effect": "NoExecute", - "key": "node.kubernetes.io/memory-pressure", - "operator": "Exists" - }, - { - "effect": "NoExecute", - "key": "node.kubernetes.io/disk-pressure", - "operator": "Exists" - }, - { - "effect": "NoExecute", - "key": "node.kubernetes.io/network-unavailable", - "operator": "Exists" - }, - { - "effect": "NoExecute", - "key": "node.kubernetes.io/unschedulable", - "operator": "Exists" - }, - { - "effect": "NoExecute", - "key": "node.cloudprovider.kubernetes.io/uninitialized", - "operator": "Exists" - } - ] - } - } - } -} diff --git a/k8s/upgrades/0.6.0-0.7.0/jiva-target-patch.tpl.json b/k8s/upgrades/0.6.0-0.7.0/jiva-target-patch.tpl.json deleted file mode 100644 index 55b48b1305..0000000000 --- a/k8s/upgrades/0.6.0-0.7.0/jiva-target-patch.tpl.json +++ /dev/null @@ -1,84 +0,0 @@ -{ - "metadata": { - "annotations": { - "openebs.io/fs-type": "ext4", - "openebs.io/lun": "0", - "openebs.io/volume-monitor": "true", - "openebs.io/volume-type": "jiva" - }, - "labels": { - "openebs.io/cas-type": "jiva", - "openebs.io/storage-engine-type": "jiva", - "openebs.io/controller": "jiva-controller", - "openebs.io/persistent-volume": "@pv-name", - "openebs.io/persistent-volume-claim": "@pvc-name", - "vsm": "deprecated" - } - }, - "spec": { - "selector": { - "matchLabels": { - "openebs.io/controller": "jiva-controller", - "openebs.io/persistent-volume": "@pv-name", - "vsm": "deprecated" - } - }, - "template": { - "metadata": { 
- "labels": { - "openebs.io/controller": "jiva-controller", - "openebs.io/persistent-volume": "@pv-name", - "openebs.io/persistent-volume-claim": "@pvc-name", - "vsm": "deprecated" - } - }, - "spec": { - "containers":[ - { - "name": "@c_name", - "image": "quay.io/openebs/jiva:0.7.2", - "env":[ - { - "name": "REPLICATION_FACTOR", - "value": "@rep_count" - } - ] - }, - { - "name": "maya-volume-exporter", - "command": [ - "maya-exporter" - ], - "image": "quay.io/openebs/m-exporter:0.7.2" - } - ], - "tolerations": [ - { - "effect": "NoExecute", - "key": "node.alpha.kubernetes.io/notReady", - "operator": "Exists", - "tolerationSeconds": 0 - }, - { - "effect": "NoExecute", - "key": "node.alpha.kubernetes.io/unreachable", - "operator": "Exists", - "tolerationSeconds": 0 - }, - { - "effect": "NoExecute", - "key": "node.kubernetes.io/not-ready", - "operator": "Exists", - "tolerationSeconds": 0 - }, - { - "effect": "NoExecute", - "key": "node.kubernetes.io/unreachable", - "operator": "Exists", - "tolerationSeconds": 0 - } - ] - } - } - } -} diff --git a/k8s/upgrades/0.6.0-0.7.0/jiva-target-svc-patch.tpl.json b/k8s/upgrades/0.6.0-0.7.0/jiva-target-svc-patch.tpl.json deleted file mode 100644 index ab13503506..0000000000 --- a/k8s/upgrades/0.6.0-0.7.0/jiva-target-svc-patch.tpl.json +++ /dev/null @@ -1,18 +0,0 @@ -{ - "metadata": { - "labels": { - "openebs.io/cas-type": "jiva", - "openebs.io/storage-engine-type": "jiva", - "openebs.io/controller-service": "jiva-controller-svc", - "openebs.io/persistent-volume": "@pv-name", - "openebs.io/persistent-volume-claim": "@pvc-name" - } - }, - "spec": { - "selector": { - "openebs.io/controller": "jiva-controller", - "openebs.io/persistent-volume": "@pv-name", - "vsm": "deprecated" - } - } -} diff --git a/k8s/upgrades/0.6.0-0.7.0/oebs_update.sh b/k8s/upgrades/0.6.0-0.7.0/oebs_update.sh deleted file mode 100755 index 5f6077d584..0000000000 --- a/k8s/upgrades/0.6.0-0.7.0/oebs_update.sh +++ /dev/null @@ -1,197 +0,0 @@ -#!/usr/bin/env bash - -################################################################ -# STEP: Get Persistent Volume (PV) name as argument # -# # -# NOTES: Obtain the pv to upgrade via "kubectl get pv" # -################################################################ - -function usage() { - echo - echo "Usage:" - echo - echo "$0 " - echo - echo " Get the PV name using: kubectl get pv" - exit 1 -} - -function setDeploymentRecreateStrategy() { - ns=$1 - dn=$2 - currStrategy=`kubectl get deploy -n $ns $dn -o jsonpath="{.spec.strategy.type}"` - - if [ $currStrategy = "RollingUpdate" ]; then - kubectl patch deployment --namespace $ns --type json $dn -p "$(cat patch-strategy-recreate.json)" - rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: $rc"; exit; fi - echo "Deployment upgrade strategy set as recreate" - else - echo "Deployment upgrade strategy was already set as recreate" - fi -} - - -if [ "$#" -ne 1 ]; then - usage -fi - -pv=$1 -replica_node_label="openebs-jiva" - -pvc=`kubectl get pv $pv -o jsonpath="{.spec.claimRef.name}"` -ns=`kubectl get pv $pv -o jsonpath="{.spec.claimRef.namespace}"` - -################################################################ -# STEP: Generate deploy, replicaset and container names from PV# -# # -# NOTES: Ex: If PV="pvc-cec8e86d-0bcc-11e8-be1c-000c298ff5fc", # -# # -# ctrl-dep: pvc-cec8e86d-0bcc-11e8-be1c-000c298ff5fc-ctrl # -# ctrl-cont: pvc-cec8e86d-0bcc-11e8-be1c-000c298ff5fc-ctrl-con # -################################################################ - -c_dep=$(echo $pv-ctrl); c_name=$(echo $c_dep-con) -r_dep=$(echo 
$pv-rep); r_name=$(echo $r_dep-con) -c_svc=$(echo $c_dep-svc) - -# Get the number of replicas configured. -# This field is currently not used, but can add additional validations -# based on the nodes and expected number of replicas -rep_count=`kubectl get deploy $r_dep --namespace $ns -o jsonpath="{.spec.replicas}"` - -# Get the list of nodes where replica pods are running, delimited by ':' -rep_nodenames=`kubectl get pods -n $ns $rep_labels \ - -l "vsm=$pv" -l "openebs/replica=jiva-replica" \ - -o jsonpath="{range .items[*]}{@.spec.nodeName}:{end}"` - -echo "Checking if the node with replica pod has been labeled with $replica_node_label" -for rep_node in `echo $rep_nodenames | tr ":" " "`; do - nl="";nl=`kubectl get nodes $rep_node -o jsonpath="{.metadata.labels.openebs-pv-$pv}"` - if [ -z "$nl" ]; - then - echo "Labeling $rep_node"; - kubectl label node $rep_node "openebs-pv-${pv}=$replica_node_label" - fi -done - - -echo "Patching Replica Deployment upgrade strategy as recreate" -setDeploymentRecreateStrategy $ns $r_dep - -echo "Patching Target Deployment upgrade strategy as recreate" -setDeploymentRecreateStrategy $ns $c_dep - -# Fetch the older target and replica - ReplicaSet objects which need to be -# deleted before upgrading. If not deleted, the new pods will be stuck in -# creating state - due to affinity rules. -c_rs=$(kubectl get rs -o name --namespace $ns | grep $c_dep | cut -d '/' -f 2) -r_rs=$(kubectl get rs -o name --namespace $ns | grep $r_dep | cut -d '/' -f 2) - -################################################################ -# STEP: Update patch files with appropriate container names # -# # -# NOTES: Placeholder "pvc--ctrl/rep-con in the # -# patch files are replaced with container names derived from # -# the PV in the previous step # -################################################################ - -sed "s/@pvc-name[^ \"]*/$pvc/g" jiva-replica-patch.tpl.json > jiva-replica-patch.tpl.json.0 -sed "s/@replica_node_label[^ \"]*/$replica_node_label/g" jiva-replica-patch.tpl.json.0 > jiva-replica-patch.tpl.json.1 -sed "s/@pv-name[^ \"]*/$pv/g" jiva-replica-patch.tpl.json.1 > jiva-replica-patch.tpl.json.2 -sed "s/@r_name[^ \"]*/$r_name/g" jiva-replica-patch.tpl.json.2 > jiva-replica-patch.json - -sed "s/@pvc-name[^ \"]*/$pvc/g" jiva-target-patch.tpl.json > jiva-target-patch.tpl.json.0 -sed "s/@c_name[^ \"]*/$c_name/g" jiva-target-patch.tpl.json.0 > jiva-target-patch.tpl.json.1 -sed "s/@pv-name[^ \"]*/$pv/g" jiva-target-patch.tpl.json.1 > jiva-target-patch.tpl.json.2 -sed "s/@rep_count[^ \"]*/$rep_count/g" jiva-target-patch.tpl.json.2 > jiva-target-patch.json - -sed "s/@pvc-name[^ \"]*/$pvc/g" jiva-target-svc-patch.tpl.json > jiva-target-svc-patch.tpl.json.0 -sed "s/@pv-name[^ \"]*/$pv/g" jiva-target-svc-patch.tpl.json.0 > jiva-target-svc-patch.json - -################################################################ -# STEP: Patch OpenEBS volume deployments (jiva-target, jiva-replica) # -# # -# NOTES: Strategic merge patch is used to update the volume w/ # -# rollout status verification # -################################################################ - -# PATCH JIVA REPLICA DEPLOYMENT #### -echo "Upgrading Replica Deployment to 0.7" -kubectl patch deployment --namespace $ns $r_dep -p "$(cat jiva-replica-patch.json)" -rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: $rc"; exit; fi - -kubectl delete rs $r_rs --namespace $ns - -rollout_status=$(kubectl rollout status --namespace $ns deployment/$r_dep) -rc=$?; if [[ ($rc -ne 0) || !($rollout_status =~ "successfully rolled out") 
]]; -then echo "ERROR: $rc"; exit; fi - -#### PATCH TARGET DEPLOYMENT #### -echo "Upgrading Target Deployment to 0.7" -kubectl patch deployment --namespace $ns $c_dep -p "$(cat jiva-target-patch.json)" -rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: $rc"; exit; fi - -kubectl delete rs $c_rs --namespace $ns - -rollout_status=$(kubectl rollout status --namespace $ns deployment/$c_dep) -rc=$?; if [[ ($rc -ne 0) || !($rollout_status =~ "successfully rolled out") ]]; -then echo "ERROR: $rc"; exit; fi - -#### PATCH TARGET SERVICE #### -echo "Upgrading Target Service to 0.7" -kubectl patch service --namespace $ns $c_svc -p "$(cat jiva-target-svc-patch.json)" -rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: $rc"; exit; fi - -### REMOVE DEPRECATED LABELS - -kubectl patch service --namespace $ns $c_svc --type json -p "$(cat target-svc-patch-remove-labels.json)" -rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: $rc"; exit; fi -kubectl label svc --namespace $ns $c_svc "vsm-" -kubectl label svc --namespace $ns $c_svc "openebs/controller-service-" - -# Fetch the older target and replica - ReplicaSet objects which need to be -# deleted before upgrading. If not deleted, the new pods will be stuck in -# creating state - due to affinity rules. -c_rs=$(kubectl get rs -o name --namespace $ns | grep $c_dep | cut -d '/' -f 2) -r_rs=$(kubectl get rs -o name --namespace $ns | grep $r_dep | cut -d '/' -f 2) - - -# PATCH JIVA REPLICA DEPLOYMENT #### -echo "Remove deprecated labels from Replica Deployment" -kubectl patch deployment --namespace $ns $r_dep --type json -p "$(cat replica-patch-remove-labels.json)" -rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: $rc"; exit; fi - -kubectl delete rs $r_rs --namespace $ns - -rollout_status=$(kubectl rollout status --namespace $ns deployment/$r_dep) -rc=$?; if [[ ($rc -ne 0) || !($rollout_status =~ "successfully rolled out") ]]; -then echo "ERROR: $rc"; exit; fi - -#### PATCH TARGET DEPLOYMENT #### -echo "Remove deprecated labels from Controller Deployment" -kubectl patch deployment --namespace $ns $c_dep --type json -p "$(cat target-patch-remove-labels.json)" -rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: $rc"; exit; fi - -kubectl delete rs $c_rs --namespace $ns - -rollout_status=$(kubectl rollout status --namespace $ns deployment/$c_dep) -rc=$?; if [[ ($rc -ne 0) || !($rollout_status =~ "successfully rolled out") ]]; -then echo "ERROR: $rc"; exit; fi - -kubectl annotate pv $pv openebs.io/cas-type=jiva - -echo "Clearing temporary files" -rm jiva-replica-patch.tpl.json.0 -rm jiva-replica-patch.tpl.json.1 -rm jiva-replica-patch.tpl.json.2 -rm jiva-replica-patch.json -rm jiva-target-patch.tpl.json.0 -rm jiva-target-patch.tpl.json.1 -rm jiva-target-patch.tpl.json.2 -rm jiva-target-patch.json -rm jiva-target-svc-patch.tpl.json.0 -rm jiva-target-svc-patch.json - -echo "Successfully upgraded $pv to 0.7. Please run your application checks." 
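# Editor's sketch (optional spot-check): after the label cleanup above, the
# deprecated "vsm" label should be gone and the cas-type annotation present:
#   kubectl get deploy $c_dep -n $ns -o jsonpath='{.metadata.labels}'
#   kubectl get pv $pv -o jsonpath='{.metadata.annotations.openebs\.io/cas-type}'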
-exit 0
-
diff --git a/k8s/upgrades/0.6.0-0.7.0/patch-strategy-recreate.json b/k8s/upgrades/0.6.0-0.7.0/patch-strategy-recreate.json
deleted file mode 100644
index 8c6c5c60af..0000000000
--- a/k8s/upgrades/0.6.0-0.7.0/patch-strategy-recreate.json
+++ /dev/null
@@ -1,4 +0,0 @@
-[
-  { "op": "remove", "path": "/spec/strategy/rollingUpdate" },
-  { "op": "replace", "path": "/spec/strategy/type", "value": "Recreate" }
-]
diff --git a/k8s/upgrades/0.6.0-0.7.0/pre_upgrade.sh b/k8s/upgrades/0.6.0-0.7.0/pre_upgrade.sh
deleted file mode 100755
index 0b77fb6cd2..0000000000
--- a/k8s/upgrades/0.6.0-0.7.0/pre_upgrade.sh
+++ /dev/null
@@ -1,75 +0,0 @@
-#!/usr/bin/env bash
-
-################################################################
-# STEP: Verify if upgrade needs to be performed                #
-#  Check the version of OpenEBS installed                      #
-#  Check if default jiva storage pool or storage class can     #
-#  conflict with the installed storage pool or class           #
-#  Check if there are any PVs that need to be upgraded         #
-#                                                              #
-################################################################
-
-function print_usage() {
-    echo
-    echo "Usage:"
-    echo
-    echo "$0 <openebs-namespace>"
-    echo
-    echo "  <openebs-namespace> Namespace where openebs control"
-    echo "  plane pods like maya-apiserver are installed."
-    exit 1
-}
-
-if [ "$#" -ne 1 ]; then
-    print_usage
-fi
-
-
-oens=$1
-
-
-echo
-VERSION_INSTALLED=`kubectl get deploy -n $oens -o yaml \
-    | grep m-apiserver | grep image: \
-    | awk -F ':' '{print $3}'`
-
-
-echo "Installed Version: $VERSION_INSTALLED"
-if [ -z $VERSION_INSTALLED ] || [ $VERSION_INSTALLED = "0*" ]; then
-    echo "Unable to determine installed openebs version"
-    print_usage
-elif test `echo $VERSION_INSTALLED | grep -c 0.6.` -eq 0; then
-    echo "Upgrade is supported only from 0.6.0"
-    exit 1
-fi
-
-
-echo
-kubectl get sp default 2>/dev/null
-rc=$?
-if [ $rc -eq 0 ]; then
-    POOL_PATH=`kubectl get sp default -o jsonpath='{.spec.path}'`
-    if [ $POOL_PATH = "/var/openebs" ]; then
-        echo "Found Jiva StoragePool named 'default' with path as /var/openebs"
-    else
-        echo "Found Jiva StoragePool named 'default' with customized path"
-        echo " After upgrading to 0.7.0, you will need to re-apply your StoragePool"
-        echo " or consider renaming the pool."
- exit 1 - fi -else - echo "Jiva StoragePool named 'default' was not found" -fi - -echo -OLDER_PVS=`kubectl get pods --all-namespaces -l openebs/controller | wc -l` -if [ -z $OLDER_PVS ] || [ $OLDER_PVS -lt 2 ]; then - echo "There are no PVs that need to be upgraded to 0.7.0" -else - echo "Found PVs that need to be upgraded to 0.7.0" -fi - -echo -exit 0 - - diff --git a/k8s/upgrades/0.6.0-0.7.0/replica-patch-remove-labels.json b/k8s/upgrades/0.6.0-0.7.0/replica-patch-remove-labels.json deleted file mode 100644 index 77b367d9d2..0000000000 --- a/k8s/upgrades/0.6.0-0.7.0/replica-patch-remove-labels.json +++ /dev/null @@ -1,8 +0,0 @@ -[ - { "op": "remove", "path": "/metadata/labels/openebs~1replica" }, - { "op": "remove", "path": "/metadata/labels/vsm" }, - { "op": "remove", "path": "/spec/selector/matchLabels/openebs~1replica" }, - { "op": "remove", "path": "/spec/selector/matchLabels/vsm" }, - { "op": "remove", "path": "/spec/template/metadata/labels/openebs~1replica" }, - { "op": "remove", "path": "/spec/template/metadata/labels/vsm" } -] diff --git a/k8s/upgrades/0.6.0-0.7.0/sc.patch.tpl.yaml b/k8s/upgrades/0.6.0-0.7.0/sc.patch.tpl.yaml deleted file mode 100644 index 9a16071de4..0000000000 --- a/k8s/upgrades/0.6.0-0.7.0/sc.patch.tpl.yaml +++ /dev/null @@ -1,12 +0,0 @@ -metadata: - labels: - openebs.io/cas-type: jiva - annotations: - openebs.io/cas-type: jiva - cas.openebs.io/config: | - - name: VolumeMonitor - enabled: @volume-monitoring - - name: ReplicaCount - value: @jiva-replica-count - - name: StoragePool - value: @storage-pool diff --git a/k8s/upgrades/0.6.0-0.7.0/target-patch-remove-labels.json b/k8s/upgrades/0.6.0-0.7.0/target-patch-remove-labels.json deleted file mode 100644 index e563876853..0000000000 --- a/k8s/upgrades/0.6.0-0.7.0/target-patch-remove-labels.json +++ /dev/null @@ -1,8 +0,0 @@ -[ - { "op": "remove", "path": "/metadata/labels/openebs~1controller" }, - { "op": "remove", "path": "/metadata/labels/vsm" }, - { "op": "remove", "path": "/spec/selector/matchLabels/openebs~1controller" }, - { "op": "remove", "path": "/spec/selector/matchLabels/vsm" }, - { "op": "remove", "path": "/spec/template/metadata/labels/openebs~1controller" }, - { "op": "remove", "path": "/spec/template/metadata/labels/vsm" } -] diff --git a/k8s/upgrades/0.6.0-0.7.0/target-svc-patch-remove-labels.json b/k8s/upgrades/0.6.0-0.7.0/target-svc-patch-remove-labels.json deleted file mode 100644 index 04dfbf663f..0000000000 --- a/k8s/upgrades/0.6.0-0.7.0/target-svc-patch-remove-labels.json +++ /dev/null @@ -1,4 +0,0 @@ -[ - { "op": "remove", "path": "/spec/selector/openebs~1controller" }, - { "op": "remove", "path": "/spec/selector/vsm" } -] diff --git a/k8s/upgrades/0.6.0-0.7.0/tests/setup-percona-with-0.6.sh b/k8s/upgrades/0.6.0-0.7.0/tests/setup-percona-with-0.6.sh deleted file mode 100755 index 73d2e8cb51..0000000000 --- a/k8s/upgrades/0.6.0-0.7.0/tests/setup-percona-with-0.6.sh +++ /dev/null @@ -1,24 +0,0 @@ -kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.6.0.yaml -kubectl apply -f https://openebs.github.io/charts/openebs-storageclasses-0.6.0.yaml - -echo "Waiting for m-apiserver to be ready" -JSONPATH='{range .items[0]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}'; -until kubectl get pods -n openebs -l name=maya-apiserver -o jsonpath="$JSONPATH" 2>&1 | grep -q "Ready=True"; -do - echo -n "." 
- sleep 2; -done -echo "" -kubectl get pods -n openebs - -echo "Launching percona" -kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/v0.6/k8s/demo/percona/percona-openebs-deployment.yaml - -JSONPATH='{range .items[0]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}'; -until kubectl get pods -o jsonpath="$JSONPATH" 2>&1 | grep -q "Ready=True"; -do - echo -n "." - sleep 2; -done -echo "" -kubectl get pods diff --git a/k8s/upgrades/0.6.0-0.7.0/tests/test-jiva-default-sp.yaml b/k8s/upgrades/0.6.0-0.7.0/tests/test-jiva-default-sp.yaml deleted file mode 100644 index 2f17f3a1c7..0000000000 --- a/k8s/upgrades/0.6.0-0.7.0/tests/test-jiva-default-sp.yaml +++ /dev/null @@ -1,7 +0,0 @@ -apiVersion: openebs.io/v1alpha1 -kind: StoragePool -metadata: - name: default - type: hostdir -spec: - path: "/mnt/openebs" diff --git a/k8s/upgrades/0.6.0-0.7.0/tests/upgrade-operator-0.7.0.sh b/k8s/upgrades/0.6.0-0.7.0/tests/upgrade-operator-0.7.0.sh deleted file mode 100755 index cb67828059..0000000000 --- a/k8s/upgrades/0.6.0-0.7.0/tests/upgrade-operator-0.7.0.sh +++ /dev/null @@ -1,12 +0,0 @@ - -kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.7.0.yaml - -echo "Waiting for default jiva pool to be ready" -until kubectl get sp -l openebs.io/version 2>&1 | grep -q "default"; -do - echo -n "." - sleep 2; -done -echo "" -kubectl get pods -n openebs - diff --git a/k8s/upgrades/0.6.0-0.7.0/upgrade_sc.sh b/k8s/upgrades/0.6.0-0.7.0/upgrade_sc.sh deleted file mode 100755 index e6d4e810a3..0000000000 --- a/k8s/upgrades/0.6.0-0.7.0/upgrade_sc.sh +++ /dev/null @@ -1,63 +0,0 @@ -#!/usr/bin/env bash - -############################################################################### -# STEP: Get Storage Classes # -############################################################################### - -# Get the list of storageclasses, delimited by ':' -sc_list=`kubectl get sc \ - -o jsonpath="{range .items[*]}{@.metadata.name}:{end}"` -rc=$?; -if [ $rc -ne 0 ]; -then - echo "ERROR: $rc"; - echo "Please ensure `kubectl` is installed and can access your cluster."; - exit; -fi - -echo "Check if openebs storage class parameters are moved to config annotation" -for sc in `echo $sc_list | tr ":" " "`; do - pt="";pt=`kubectl get sc $sc -o jsonpath="{.provisioner}"` - if [ "openebs.io/provisioner-iscsi" == "$pt" ]; - then - uc="";uc=`kubectl get sc $sc -o jsonpath="{.metadata.labels.openebs\.io/cas-type}"` - if [ ! 
-z "$uc" ]; then
-        echo "SC $sc already upgraded";
-        continue
-    fi
-
-    echo "Upgrading SC $sc";
-
-    replicas=`kubectl get sc $sc -o jsonpath="{.parameters.openebs\.io/jiva-replica-count}"`
-    pool=`kubectl get sc $sc -o jsonpath="{.parameters.openebs\.io/storage-pool}"`
-    monitoring=`kubectl get sc $sc -o jsonpath="{.parameters.openebs\.io/volume-monitor}"`
-
-    # Fall back to the defaults when a parameter was not set on the SC.
-    if [ -z "$replicas" ]; then replicas="3"; fi
-    sed "s/@jiva-replica-count[^ \"]*/$replicas/g" sc.patch.tpl.yaml > sc.patch.tpl.yaml.0
-
-    if [ -z "$pool" ]; then pool="default"; fi
-    sed "s/@storage-pool[^ \"]*/$pool/g" sc.patch.tpl.yaml.0 > sc.patch.tpl.yaml.1
-
-    if [ -z "$monitoring" ]; then monitoring="true"; fi
-    sed "s/@volume-monitor[^ \"]*/$monitoring/g" sc.patch.tpl.yaml.1 > sc.patch.yaml
-
-    echo " openebs.io/jiva-replica-count -> ReplicaCount  : $replicas"
-    echo " openebs.io/storage-pool       -> StoragePool   : $pool"
-    echo " openebs.io/volume-monitor     -> VolumeMonitor : $monitoring"
-
-    kubectl patch sc $sc -p "$(cat sc.patch.yaml)"
-    rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: $rc"; exit; fi
-
-    rm -rf sc.patch.tpl.yaml.0
-    rm -rf sc.patch.tpl.yaml.1
-    rm -rf sc.patch.yaml
-
-    #TODO
-    # Check if SC has other parameters and warn the user about patching them manually,
-    # or contact openebs dev.
-
-    echo "Successfully upgraded $sc to 0.7"
-  fi
-done
-
-
diff --git a/k8s/upgrades/0.7.0-0.8.0/README.md b/k8s/upgrades/0.7.0-0.8.0/README.md
deleted file mode 100644
index 0362a620e3..0000000000
--- a/k8s/upgrades/0.7.0-0.8.0/README.md
+++ /dev/null
@@ -1,128 +0,0 @@
-# UPGRADE FROM OPENEBS 0.7.0 TO 0.8.x
-
-## Overview
-
-This document describes the steps for upgrading OpenEBS from 0.7.0 to 0.8.x.
-
-The upgrade of OpenEBS is a two step process:
-- *Step 1* - Upgrade the OpenEBS Operator
-- *Step 2* - Upgrade the OpenEBS Volumes from previous versions (0.7.0)
-
-### Terminology
-- *OpenEBS Operator: Refers to maya-apiserver & openebs-provisioner along with the respective services, service accounts, roles and rolebindings*
-- *OpenEBS Volume: Storage Engine pods like the cStor or Jiva controller (aka target) & replica pods*
-
-## Prerequisites
-
-*All steps described in this document need to be performed on the Kubernetes master or from a machine that has access to the Kubernetes master*
-
-### Download the upgrade scripts
-
-The easiest way to get all the upgrade scripts is via git clone.
-
-```
-mkdir upgrade-openebs
-cd upgrade-openebs
-git clone https://github.com/openebs/openebs.git
-cd openebs/k8s/upgrades/0.7.0-0.8.0/
-```
-
-## Step 1: Upgrade the OpenEBS Operator
-
-### Upgrading OpenEBS Operator CRDs and Deployments
-
-The upgrade steps vary depending on the way OpenEBS was installed. Select one of the following:
-
-#### Install/Upgrade using kubectl (using openebs-operator.yaml)
-
-**The sample steps below will work if you have installed openebs without modifying the default values in openebs-operator.yaml. If you have customized it for your cluster, you will have to download the 0.8.0 openebs-operator.yaml and customize it again.**
-
-```
-# Starting with OpenEBS 0.6, all the components are installed in the namespace `openebs`
-# as opposed to the `default` namespace in earlier releases.
-# If upgrading from 0.5.x, delete the older operator first.
-#kubectl delete -f https://raw.githubusercontent.com/openebs/openebs/v0.5/k8s/openebs-operator.yaml
-# Wait for the objects to be deleted. You can check using `kubectl get deploy`
-
-#Upgrade to the 0.8 OpenEBS Operator
-kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.8.0.yaml
-```
-
-#### Install/Upgrade using helm chart (using stable/openebs, openebs-charts repo, etc.)
-
-**The sample steps below will work if you have installed openebs with the default values provided by the stable/openebs helm chart.**
-
-Before upgrading using helm, please review the default values available with the latest stable/openebs chart (https://raw.githubusercontent.com/helm/charts/master/stable/openebs/values.yaml).
-
-- If the default values seem appropriate, you can use the commands below to update OpenEBS. [More](https://hub.helm.sh/charts/stable/openebs) details about the specific chart version.
-  ```sh
-  $ helm upgrade <release-name> --reset-values stable/openebs --version 0.8.1
-  ```
-- If not, customize the values into your own copy (say custom-values.yaml) by copying the content from the default YAML above and editing the values to suit your environment. You can then upgrade using your custom values:
-  ```sh
-  $ helm upgrade <release-name> stable/openebs --version 0.8.1 -f custom-values.yaml
-  ```
-
-##### Note: 0.8.1 is the helm chart version that corresponds to the OpenEBS 0.8.0 version. All available openebs helm charts can be found at https://hub.kubeapps.com/charts/stable/openebs.
-
-#### Using customized operator YAML or helm chart
-As a first step, you must update your custom helm chart or YAML with the 0.8 release tags and the changes made in the values/templates.
-
-You can use the following as a reference to learn about the changes in 0.8:
-- openebs-charts [PR#2314](https://github.com/openebs/openebs/pull/2314)
-
-After updating the YAML or helm chart or helm chart values, you can use the above procedures to upgrade the OpenEBS Operator.
-
-## Step 2: Upgrade the OpenEBS Pools and Volumes
-
-Even after the OpenEBS Operator has been upgraded to 0.8, the cStor Storage Pools and volumes (both jiva and cStor) will continue to work with older versions. Use the following steps to upgrade the cStor Pools and Volumes.
-
-*Note: Upgrade functionality is still under active development. It is highly recommended to schedule a downtime for the application using the OpenEBS PV while performing this upgrade. Also, make sure you have taken a backup of the data before starting the below upgrade procedure.*
-
-Limitations:
-- this is a preliminary script intended only for volumes where the data has been backed up
-- keep the following link handy in case the volume becomes read-only during the upgrade: https://docs.openebs.io/docs/next/readonlyvolumes.html
-- an automatic rollback option is not provided.
To rollback, you need to update the controller, exporter and replica pod images to the previous version -- in the process of running the below steps, if you run into issues, you can always reach us on slack - - -### Upgrade the Jiva based OpenEBS PV - -Extract the PV name using `kubectl get pv` - -``` -NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE -pvc-48fb36a2-947f-11e8-b1f3-42010a800004 5G RWO Delete Bound percona-test/demo-vol1-claim openebs-percona 8m -``` - -``` -./jiva_volume_update.sh pvc-48fb36a2-947f-11e8-b1f3-42010a800004 -``` - -### Upgrade cStor Pools - -Extract the SPC name using `kubectl get spc` - -``` -NAME AGE -cstor-sparse-pool 24m -``` - -``` -./cstor_pool_update.sh cstor-sparse-pool openebs -``` - -### Upgrade cStor Volumes - -Extract the PV name using `kubectl get pv` - -``` -NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE -pvc-1085415d-f84c-11e8-aadf-42010a8000bb 5G RWO Delete Bound default/demo-cstor-sparse-vol1-claim openebs-cstor-sparse 22m -``` - -``` -./cstor_target_update.sh pvc-1085415d-f84c-11e8-aadf-42010a8000bb openebs -``` - diff --git a/k8s/upgrades/0.7.0-0.8.0/cstor-pool-patch.tpl.json b/k8s/upgrades/0.7.0-0.8.0/cstor-pool-patch.tpl.json deleted file mode 100644 index de3424ade6..0000000000 --- a/k8s/upgrades/0.7.0-0.8.0/cstor-pool-patch.tpl.json +++ /dev/null @@ -1,47 +0,0 @@ -{ - "metadata": { - "labels": { - "openebs.io/version": "0.8.0" - } - }, - "spec": { - "template": { - "spec": { - "containers":[ - { - "name": "cstor-pool", - "image": "quay.io/openebs/cstor-pool:0.8.0" - }, - { - "name": "cstor-pool-mgmt", - "image": "quay.io/openebs/cstor-pool-mgmt:0.8.0", - "env": [ - { - "name": "OPENEBS_IO_CSTOR_ID", - "value": "@csp_uuid" - }, - { - "name": "POD_NAME", - "valueFrom": { - "fieldRef": { - "apiVersion": "v1", - "fieldPath": "metadata.name" - } - } - }, - { - "name": "NAMESPACE", - "valueFrom": { - "fieldRef": { - "apiVersion": "v1", - "fieldPath": "metadata.namespace" - } - } - } - ] - } - ] - } - } - } -} diff --git a/k8s/upgrades/0.7.0-0.8.0/cstor-target-patch.tpl.json b/k8s/upgrades/0.7.0-0.8.0/cstor-target-patch.tpl.json deleted file mode 100644 index 0d62aa9101..0000000000 --- a/k8s/upgrades/0.7.0-0.8.0/cstor-target-patch.tpl.json +++ /dev/null @@ -1,58 +0,0 @@ -{ - "metadata": { - "labels": { - "openebs.io/version": "0.8.0" - } - }, - "spec": { - "template": { - "metadata": { - "annotations": { - "prometheus.io/path": "/metrics", - "prometheus.io/port": "9500", - "prometheus.io/scrape": "true" - } - }, - "spec": { - "containers": [ - { - "name": "cstor-istgt", - "image": "quay.io/openebs/cstor-istgt:0.8.0" - }, - { - "name": "maya-volume-exporter", - "image": "quay.io/openebs/m-exporter:0.8.0" - }, - { - "name": "cstor-volume-mgmt", - "image": "quay.io/openebs/cstor-volume-mgmt:0.8.0", - "env": [ - { - "name": "OPENEBS_IO_CSTOR_VOLUME_ID", - "value": "@cv_uuid" - }, - { - "name": "NODE_NAME", - "valueFrom": { - "fieldRef": { - "apiVersion": "v1", - "fieldPath": "spec.nodeName" - } - } - }, - { - "name": "POD_NAME", - "valueFrom": { - "fieldRef": { - "apiVersion": "v1", - "fieldPath": "metadata.name" - } - } - } - ] - } - ] - } - } - } -} \ No newline at end of file diff --git a/k8s/upgrades/0.7.0-0.8.0/cstor-target-svc-patch.json b/k8s/upgrades/0.7.0-0.8.0/cstor-target-svc-patch.json deleted file mode 100644 index b57a09de1d..0000000000 --- a/k8s/upgrades/0.7.0-0.8.0/cstor-target-svc-patch.json +++ /dev/null @@ -1,35 +0,0 @@ -{ - "metadata": { - "labels": { - 
"openebs.io/version": "0.8.0" - } - }, - "spec": { - "ports": [ - { - "name": "cstor-iscsi", - "port": 3260, - "protocol": "TCP", - "targetPort": 3260 - }, - { - "name": "mgmt", - "port": 6060, - "protocol": "TCP", - "targetPort": 6060 - }, - { - "name": "cstor-grpc", - "port": 7777, - "protocol": "TCP", - "targetPort": 7777 - }, - { - "name": "exporter", - "port": 9500, - "protocol": "TCP", - "targetPort": 9500 - } - ] - } -} \ No newline at end of file diff --git a/k8s/upgrades/0.7.0-0.8.0/cstor_pool_update.sh b/k8s/upgrades/0.7.0-0.8.0/cstor_pool_update.sh deleted file mode 100755 index b0c4ce1c01..0000000000 --- a/k8s/upgrades/0.7.0-0.8.0/cstor_pool_update.sh +++ /dev/null @@ -1,82 +0,0 @@ -#!/usr/bin/env bash - -################################################################ -# STEP: Get SPC name as argument # -# # -# NOTES: Obtain the pv to upgrade via "kubectl get pv" # -################################################################ - -function usage() { - echo - echo "Usage:" - echo - echo "$0 " - echo - echo " Get the SPC name using: kubectl get spc" - echo " Get the namespace where openebs" - echo " pods are installed" - exit 1 -} - -function setDeploymentRecreateStrategy() { - ns=$1 - dn=$2 - currStrategy=`kubectl get deploy -n $ns $dn -o jsonpath="{.spec.strategy.type}"` - - if [ $currStrategy = "RollingUpdate" ]; then - kubectl patch deployment --namespace $ns --type json $dn -p "$(cat patch-strategy-recreate.json)" - rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: $rc"; exit; fi - echo "Deployment upgrade strategy set as recreate" - else - echo "Deployment upgrade strategy was already set as recreate" - fi -} - - -if [ "$#" -ne 2 ]; then - usage -fi - -spc=$1 -ns=$2 - -# Get the list of pool deployments for given SPC, delimited by ':' -pool_deploys=`kubectl get deploy -n $ns \ - -l openebs.io/storage-pool-claim=$spc \ - -o jsonpath="{range .items[*]}{@.metadata.name}:{end}"` - -echo "Patching Pool Deployment upgrade strategy as recreate" -for pool_dep in `echo $pool_deploys | tr ":" " "`; do - setDeploymentRecreateStrategy $ns $pool_dep -done - - -echo "Patching Pool Deployment with new image" -for pool_dep in `echo $pool_deploys | tr ":" " "`; do - pool_rs=$(kubectl get rs -n openebs \ - -o jsonpath="{range .items[?(@.metadata.ownerReferences[0].name=='$pool_dep')]}{@.metadata.name}{end}") - echo "$pool_dep -> rs is $pool_rs" - - #fetch the csp_uuid - csp_uuid="";csp_uuid=`kubectl get csp -n $ns $pool_dep -o jsonpath="{.metadata.uid}"` - echo "$pool_dep -> csp uuid is $csp_uuid" - if [ -z "$csp_uuid" ]; - then - echo "Error: Unable to fetch csp uuid"; - exit 1 - fi - - sed "s/@csp_uuid[^ \"]*/$csp_uuid/g" cstor-pool-patch.tpl.json > cstor-pool-patch.json - - kubectl patch deployment --namespace $ns $pool_dep -p "$(cat cstor-pool-patch.json)" - rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: $rc"; exit; fi - rollout_status=$(kubectl rollout status --namespace $ns deployment/$pool_dep) - rc=$?; if [[ ($rc -ne 0) || !($rollout_status =~ "successfully rolled out") ]]; - then echo "ERROR: $rc"; exit; fi - kubectl delete rs $pool_rs --namespace $ns - rm cstor-pool-patch.json -done - -echo "Successfully upgraded $spc to 0.8. Please run your application checks." 
-exit 0
-
diff --git a/k8s/upgrades/0.7.0-0.8.0/cstor_target_update.sh b/k8s/upgrades/0.7.0-0.8.0/cstor_target_update.sh
deleted file mode 100755
index 9e325e52b6..0000000000
--- a/k8s/upgrades/0.7.0-0.8.0/cstor_target_update.sh
+++ /dev/null
@@ -1,87 +0,0 @@
-#!/usr/bin/env bash
-
-################################################################
-# STEP: Get PV name as argument                                #
-#                                                              #
-# NOTES: Obtain the pv to upgrade via "kubectl get pv"         #
-################################################################
-
-function usage() {
-    echo
-    echo "Usage:"
-    echo
-    echo "$0 <pv-name> <openebs-namespace>"
-    echo
-    echo "  <pv-name> Get the PV name using: kubectl get pv"
-    echo "  <openebs-namespace> Get the namespace where openebs"
-    echo "  pods are installed"
-    exit 1
-}
-
-function setDeploymentRecreateStrategy() {
-    ns=$1
-    dn=$2
-    currStrategy=`kubectl get deploy -n $ns $dn -o jsonpath="{.spec.strategy.type}"`
-
-    if [ $currStrategy = "RollingUpdate" ]; then
-        kubectl patch deployment --namespace $ns --type json $dn -p "$(cat patch-strategy-recreate.json)"
-        rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: $rc"; exit; fi
-        echo "Deployment upgrade strategy set as recreate"
-    else
-        echo "Deployment upgrade strategy was already set as recreate"
-    fi
-}
-
-
-if [ "$#" -ne 2 ]; then
-    usage
-fi
-
-pv=$1
-ns=$2
-
-# Get the target deployment for the given PV
-target_dep=$(kubectl get deploy -n $ns \
-    -l openebs.io/persistent-volume=$pv \
-    -o jsonpath="{range .items[*]}{@.metadata.name}{end}")
-
-echo "Patching Target Deployment ${target_dep}"
-
-setDeploymentRecreateStrategy $ns $target_dep
-
-target_rs=$(kubectl get rs -n $ns \
-    -l openebs.io/persistent-volume=$pv \
-    -o jsonpath="{range .items[?(@.metadata.ownerReferences[0].name=='$target_dep')]}{@.metadata.name}{end}")
-echo "$target_dep -> rs is $target_rs"
-
-# Fetch the cstor volume uid as cv_uuid
-cv_uuid="";cv_uuid=`kubectl get cstorvolume -n $ns $pv -o jsonpath="{.metadata.uid}"`
-echo "$target_dep -> cv uuid is $cv_uuid"
-if [ -z "$cv_uuid" ];
-then
-    echo "Error: Unable to fetch cv uuid";
-    exit 1
-fi
-
-sed "s/@cv_uuid[^ \"]*/$cv_uuid/g" cstor-target-patch.tpl.json > cstor-target-patch.json
-
-kubectl patch deployment --namespace $ns $target_dep -p "$(cat cstor-target-patch.json)"
-rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: $rc"; exit; fi
-rollout_status=$(kubectl rollout status --namespace $ns deployment/$target_dep)
-rc=$?; if [[ ($rc -ne 0) || !($rollout_status =~ "successfully rolled out") ]];
-then echo "ERROR: $rc"; exit; fi
-kubectl delete rs $target_rs --namespace $ns
-rm cstor-target-patch.json
-
-# Get the target service for the given PV
-target_svc=$(kubectl get service -n $ns \
-    -l openebs.io/persistent-volume=$pv \
-    -o jsonpath="{range .items[*]}{@.metadata.name}{end}")
-
-echo "Patching Target Service ${target_svc}"
-kubectl patch service --namespace $ns $target_svc -p "$(cat cstor-target-svc-patch.json)"
-rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: $rc"; exit; fi
-
-echo "Successfully upgraded $pv target to 0.8. Please run your application checks."
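# Editor's sketch (optional): the same persistent-volume label used above can
# verify the new target pod images.
#   kubectl get pods -n $ns -l openebs.io/persistent-volume=$pv \
#     -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'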
-exit 0
-
diff --git a/k8s/upgrades/0.7.0-0.8.0/jiva-replica-patch.tpl.json b/k8s/upgrades/0.7.0-0.8.0/jiva-replica-patch.tpl.json
deleted file mode 100644
index cf5a3f0080..0000000000
--- a/k8s/upgrades/0.7.0-0.8.0/jiva-replica-patch.tpl.json
+++ /dev/null
@@ -1,22 +0,0 @@
-{
-  "metadata": {
-    "labels": {
-      "openebs.io/version": "0.8.0"
-    }
-  },
-  "spec": {
-    "template": {
-      "spec": {
-        "containers": [
-          {
-            "name": "@r_name",
-            "image": "quay.io/openebs/jiva:0.8.0"
-          }
-        ],
-        "nodeSelector": {
-          "openebs-pv-@pv-name": "@replica_node_label"
-        }
-      }
-    }
-  }
-}
\ No newline at end of file
diff --git a/k8s/upgrades/0.7.0-0.8.0/jiva-target-patch.tpl.json b/k8s/upgrades/0.7.0-0.8.0/jiva-target-patch.tpl.json
deleted file mode 100644
index 37e4c7a4fc..0000000000
--- a/k8s/upgrades/0.7.0-0.8.0/jiva-target-patch.tpl.json
+++ /dev/null
@@ -1,30 +0,0 @@
-{
-  "metadata": {
-    "labels": {
-      "openebs.io/version": "0.8.0"
-    }
-  },
-  "spec": {
-    "template": {
-      "metadata": {
-        "annotations": {
-          "prometheus.io/path": "/metrics",
-          "prometheus.io/port": "9500",
-          "prometheus.io/scrape": "true"
-        }
-      },
-      "spec": {
-        "containers": [
-          {
-            "name": "@c_name",
-            "image": "quay.io/openebs/jiva:0.8.0"
-          },
-          {
-            "name": "maya-volume-exporter",
-            "image": "quay.io/openebs/m-exporter:0.8.0"
-          }
-        ]
-      }
-    }
-  }
-}
\ No newline at end of file
diff --git a/k8s/upgrades/0.7.0-0.8.0/jiva-target-svc-patch.json b/k8s/upgrades/0.7.0-0.8.0/jiva-target-svc-patch.json
deleted file mode 100644
index 91a304c54c..0000000000
--- a/k8s/upgrades/0.7.0-0.8.0/jiva-target-svc-patch.json
+++ /dev/null
@@ -1,29 +0,0 @@
-{
-  "metadata": {
-    "labels": {
-      "openebs.io/version": "0.8.0"
-    }
-  },
-  "spec": {
-    "ports": [
-      {
-        "name": "iscsi",
-        "port": 3260,
-        "protocol": "TCP",
-        "targetPort": 3260
-      },
-      {
-        "name": "api",
-        "port": 9501,
-        "protocol": "TCP",
-        "targetPort": 9501
-      },
-      {
-        "name": "exporter",
-        "port": 9500,
-        "protocol": "TCP",
-        "targetPort": 9500
-      }
-    ]
-  }
-}
\ No newline at end of file
diff --git a/k8s/upgrades/0.7.0-0.8.0/jiva_volume_update.sh b/k8s/upgrades/0.7.0-0.8.0/jiva_volume_update.sh
deleted file mode 100755
index e17431f866..0000000000
--- a/k8s/upgrades/0.7.0-0.8.0/jiva_volume_update.sh
+++ /dev/null
@@ -1,148 +0,0 @@
-#!/usr/bin/env bash
-
-################################################################
-# STEP: Get Persistent Volume (PV) name as argument            #
-#                                                              #
-# NOTES: Obtain the pv to upgrade via "kubectl get pv"         #
-################################################################
-
-function usage() {
-    echo
-    echo "Usage:"
-    echo
-    echo "$0 <pv-name>"
-    echo
-    echo "  <pv-name> Get the PV name using: kubectl get pv"
-    exit 1
-}
-
-function setDeploymentRecreateStrategy() {
-    ns=$1
-    dn=$2
-    currStrategy=`kubectl get deploy -n $ns $dn -o jsonpath="{.spec.strategy.type}"`
-
-    if [ $currStrategy = "RollingUpdate" ]; then
-        kubectl patch deployment --namespace $ns --type json $dn -p "$(cat patch-strategy-recreate.json)"
-        rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: $rc"; exit; fi
-        echo "Deployment upgrade strategy set as recreate"
-    else
-        echo "Deployment upgrade strategy was already set as recreate"
-    fi
-}
-
-
-if [ "$#" -ne 1 ]; then
-    usage
-fi
-
-pv=$1
-replica_node_label="openebs-jiva"
-
-pvc=`kubectl get pv $pv -o jsonpath="{.spec.claimRef.name}"`
-ns=`kubectl get pv $pv -o jsonpath="{.spec.claimRef.namespace}"`
-
-################################################################
-# STEP: Generate deploy, replicaset and container names from PV#
-#                                                              #
-# NOTES: Ex: If PV="pvc-cec8e86d-0bcc-11e8-be1c-000c298ff5fc", #
-#                                                              #
-# ctrl-dep: pvc-cec8e86d-0bcc-11e8-be1c-000c298ff5fc-ctrl      #
-# ctrl-cont: pvc-cec8e86d-0bcc-11e8-be1c-000c298ff5fc-ctrl-con #
-################################################################
-
-c_dep=$(echo $pv-ctrl); c_name=$(echo $c_dep-con)
-r_dep=$(echo $pv-rep); r_name=$(echo $r_dep-con)
-c_svc=$(echo $c_dep-svc)
-
-# Get the number of replicas configured.
-# This field is currently not used, but can add additional validations
-# based on the nodes and expected number of replicas
-rep_count=`kubectl get deploy $r_dep --namespace $ns -o jsonpath="{.spec.replicas}"`
-
-# Get the list of nodes where replica pods are running, delimited by ':'
-rep_nodenames=`kubectl get pods -n $ns \
-    -l "openebs.io/persistent-volume=$pv" -l "openebs.io/replica=jiva-replica" \
-    -o jsonpath="{range .items[*]}{@.spec.nodeName}:{end}"`
-
-echo "Checking if the node with replica pod has been labeled with $replica_node_label"
-for rep_node in `echo $rep_nodenames | tr ":" " "`; do
-    nl="";nl=`kubectl get nodes $rep_node -o jsonpath="{.metadata.labels.openebs-pv-$pv}"`
-    if [ -z "$nl" ];
-    then
-        echo "Labeling $rep_node";
-        kubectl label node $rep_node "openebs-pv-${pv}=$replica_node_label"
-    fi
-done
-
-
-echo "Patching Replica Deployment upgrade strategy as recreate"
-setDeploymentRecreateStrategy $ns $r_dep
-
-echo "Patching Target Deployment upgrade strategy as recreate"
-setDeploymentRecreateStrategy $ns $c_dep
-
-# Fetch the older target and replica - ReplicaSet objects which need to be
-# deleted before upgrading. If not deleted, the new pods will be stuck in
-# creating state - due to affinity rules.
-c_rs=$(kubectl get rs -o name --namespace $ns | grep $c_dep | cut -d '/' -f 2)
-r_rs=$(kubectl get rs -o name --namespace $ns | grep $r_dep | cut -d '/' -f 2)
-
-################################################################
-# STEP: Update patch files with appropriate container names    #
-#                                                              #
-# NOTES: Placeholders "pvc-<pv-name>-ctrl/rep-con" in the      #
-# patch files are replaced with container names derived from   #
-# the PV in the previous step                                  #
-################################################################
-
-sed "s/@replica_node_label[^ \"]*/$replica_node_label/g" jiva-replica-patch.tpl.json > jiva-replica-patch.tpl.json.0
-sed "s/@pv-name[^ \"]*/$pv/g" jiva-replica-patch.tpl.json.0 > jiva-replica-patch.tpl.json.1
-sed "s/@r_name[^ \"]*/$r_name/g" jiva-replica-patch.tpl.json.1 > jiva-replica-patch.json
-
-sed "s/@c_name[^ \"]*/$c_name/g" jiva-target-patch.tpl.json > jiva-target-patch.json
-
-
-######################################################################
-# STEP: Patch OpenEBS volume deployments (jiva-target, jiva-replica) #
-#                                                                    #
-# NOTES: Strategic merge patch is used to update the volume w/       #
-# rollout status verification                                        #
-######################################################################
-
-# PATCH JIVA REPLICA DEPLOYMENT ####
-echo "Upgrading Replica Deployment to 0.8"
-kubectl patch deployment --namespace $ns $r_dep -p "$(cat jiva-replica-patch.json)"
-rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: $rc"; exit; fi
-
-kubectl delete rs $r_rs --namespace $ns
-
-rollout_status=$(kubectl rollout status --namespace $ns deployment/$r_dep)
-rc=$?; if [[ ($rc -ne 0) || !($rollout_status =~ "successfully rolled out") ]];
-then echo "ERROR: $rc"; exit; fi
-
-#### PATCH TARGET DEPLOYMENT ####
-echo "Upgrading Target Deployment to 0.8"
-kubectl patch deployment --namespace $ns $c_dep -p "$(cat jiva-target-patch.json)"
-rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: $rc"; exit; fi
-
-kubectl delete rs $c_rs --namespace $ns
-
-rollout_status=$(kubectl rollout status --namespace $ns deployment/$c_dep)
-rc=$?; if [[ ($rc -ne 0) || !($rollout_status =~ "successfully rolled out") ]];
-then echo "ERROR: $rc"; exit; fi
-
-
-#### PATCH TARGET SERVICE ####
-echo "Upgrading Target Service to 0.8"
-kubectl patch service --namespace $ns $c_svc -p "$(cat jiva-target-svc-patch.json)"
-rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: $rc"; exit; fi
-
-echo "Clearing temporary files"
-rm jiva-replica-patch.tpl.json.0
-rm jiva-replica-patch.tpl.json.1
-rm jiva-replica-patch.json
-rm jiva-target-patch.json
-
-echo "Successfully upgraded $pv to 0.8. Please run your application checks."
-exit 0
-
diff --git a/k8s/upgrades/0.7.0-0.8.0/patch-strategy-recreate.json b/k8s/upgrades/0.7.0-0.8.0/patch-strategy-recreate.json
deleted file mode 100644
index 8c6c5c60af..0000000000
--- a/k8s/upgrades/0.7.0-0.8.0/patch-strategy-recreate.json
+++ /dev/null
@@ -1,4 +0,0 @@
-[
-  { "op": "remove", "path": "/spec/strategy/rollingUpdate" },
-  { "op": "replace", "path": "/spec/strategy/type", "value": "Recreate" }
-]
diff --git a/k8s/upgrades/0.7.0-0.8.0/pre_upgrade.sh b/k8s/upgrades/0.7.0-0.8.0/pre_upgrade.sh
deleted file mode 100755
index 4f2db6ecb5..0000000000
--- a/k8s/upgrades/0.7.0-0.8.0/pre_upgrade.sh
+++ /dev/null
@@ -1,75 +0,0 @@
-#!/usr/bin/env bash
-
-################################################################
-# STEP: Verify if upgrade needs to be performed                #
-#   Check the version of OpenEBS installed                     #
-#   Check if default jiva storage pool or storage class can    #
-#   conflict with the installed storage pool or class          #
-#   Check if there are any PVs that need to be upgraded        #
-#                                                              #
-################################################################
-
-function print_usage() {
-    echo
-    echo "Usage:"
-    echo
-    echo "$0 <openebs-namespace>"
-    echo
-    echo "  <openebs-namespace> Namespace where openebs control"
-    echo "    plane pods like maya-apiserver are installed. "
-    exit 1
-}
-
-if [ "$#" -ne 1 ]; then
-    print_usage
-fi
-
-
-oens=$1
-
-
-echo
-VERSION_INSTALLED=`kubectl get deploy -n $oens -o yaml \
-    | grep m-apiserver | grep image: \
-    | awk -F ':' '{print $3}'`
-
-
-echo "Installed Version: $VERSION_INSTALLED"
-if [ -z $VERSION_INSTALLED ] || [ $VERSION_INSTALLED = "0*" ]; then
-    echo "Unable to determine installed openebs version"
-    print_usage
-elif test `echo $VERSION_INSTALLED | grep -c 0.7.` -eq 0; then
-    echo "Upgrade is supported only from 0.7.0"
-    exit 1
-fi
-
-
-echo
-kubectl get sp default 2>/dev/null
-rc=$?
-if [ $rc -eq 0 ]; then
-    POOL_PATH=`kubectl get sp default -o jsonpath='{.spec.path}'`
-    if [ $POOL_PATH = "/var/openebs" ]; then
-        echo "Found Jiva StoragePool named 'default' with path as /var/openebs"
-    else
-        echo "Found Jiva StoragePool named 'default' with customized path"
-        echo " After upgrading to 0.8.0, you will need to re-apply your StoragePool"
-        echo " or consider renaming the pool."
-        exit 1
-    fi
-else
-    echo "Jiva StoragePool named 'default' was not found"
-fi
-
-echo
-OLDER_PVS=`kubectl get pods --all-namespaces -l openebs/controller | wc -l`
-if [ -z $OLDER_PVS ] || [ $OLDER_PVS -lt 2 ]; then
-    echo "There are no PVs that need to be upgraded to 0.8.0"
-else
-    echo "Found PVs that need to be upgraded to 0.8.0"
-fi
-
-echo
-exit 0
-
-
diff --git a/k8s/upgrades/0.7.0-0.8.0/tests/setup-percona-with-0.7.sh b/k8s/upgrades/0.7.0-0.8.0/tests/setup-percona-with-0.7.sh
deleted file mode 100755
index 0823bd4795..0000000000
--- a/k8s/upgrades/0.7.0-0.8.0/tests/setup-percona-with-0.7.sh
+++ /dev/null
@@ -1,26 +0,0 @@
-kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.7.2.yaml
-
-echo "Waiting for m-apiserver to be ready"
-JSONPATH='{range .items[0]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}';
-until kubectl get pods -n openebs -l name=maya-apiserver -o jsonpath="$JSONPATH" 2>&1 | grep -q "Ready=True";
-do
-    echo -n "."
-    sleep 2;
-done
-echo ""
-kubectl get pods -n openebs
-
-echo "Launching percona with jiva"
-kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/demo/percona/percona-openebs-deployment.yaml
-
-JSONPATH='{range .items[0]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}';
-until kubectl get pods -o jsonpath="$JSONPATH" 2>&1 | grep -q "Ready=True";
-do
-    echo -n "."
-    sleep 2;
-done
-echo ""
-kubectl get pods
-
-echo "Launching percona with cstor"
-kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/demo/percona/percona-openebs-cstor-sparse-deployment.yaml
diff --git a/k8s/upgrades/0.8.0-0.8.1/README.md b/k8s/upgrades/0.8.0-0.8.1/README.md
deleted file mode 100644
index 6c290b5bca..0000000000
--- a/k8s/upgrades/0.8.0-0.8.1/README.md
+++ /dev/null
@@ -1,135 +0,0 @@
-# UPGRADE FROM OPENEBS 0.8.0 TO 0.8.1
-
-## Overview
-
-This document describes the steps for upgrading OpenEBS from 0.8.0 to 0.8.1.
-
-The upgrade of OpenEBS is a three step process:
-- *Step 1* - Check the openebs version labels
-- *Step 2* - Upgrade the OpenEBS Operator
-- *Step 3* - Upgrade the OpenEBS Volumes from the previous version (0.8.0)
-
-#### Note: It is mandatory to make sure that all volumes are running at version 0.8.0 before the upgrade.
-
-### Terminology
-- *OpenEBS Operator: Refers to maya-apiserver & openebs-provisioner along w/ respective services, service a/c, roles, rolebindings*
-- *OpenEBS Volume: Storage Engine pods like cStor or Jiva controller (aka target) & replica pods*
-
-## Prerequisites
-
-*All steps described in this document need to be performed on the Kubernetes master or from a machine that has access to the Kubernetes master*
-
-### Download the upgrade scripts
-
-The easiest way to get all the upgrade scripts is via git clone.
-
-```
-mkdir upgrade-openebs
-cd upgrade-openebs
-git clone https://github.com/openebs/openebs.git
-cd openebs/k8s/upgrades/0.8.0-0.8.1/
-```
-
-## Step 1: Checking the openebs version labels
-
-- Run `./pre-check.sh` to list all the openebs volume resources that do not have the `openebs.io/version` label.
-- Run `./labeltagger.sh 0.8.0` to add the `openebs.io/version` label to all the openebs volume resources.
-
-#### Please make sure that all pods are back to running state before proceeding to Step 2
-### Note: It is OK if the pre-check finds no resources to label. The pre-check is meant to help users who are upgrading, or have already upgraded, from 0.7.
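-
-For example, a quick check that the labeling worked (the selectors below reuse those from pre-check.sh; both commands should print no resources once labeltagger.sh has run):
-
-```sh
-kubectl get cstorvolume,cstorvolumereplicas --all-namespaces -l 'openebs.io/version notin (0.8.0)'
-kubectl get deployment --all-namespaces -l 'openebs.io/version notin (0.8.0), openebs.io/controller in (jiva-controller)'
-```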
-
-## Step 2: Upgrade the OpenEBS Operator
-
-### Upgrading OpenEBS Operator CRDs and Deployments
-
-The upgrade steps vary depending on the way OpenEBS was installed, select one of the following:
-
-#### Install/Upgrade using kubectl (using openebs-operator.yaml)
-
-**The sample steps below will work if you have installed openebs without modifying the default values in openebs-operator.yaml. If you have customized it for your cluster, you will have to download the 0.8.1 openebs-operator.yaml and customize it again**
-
-```
-#Change updateStrategy of openebs ndm pods from onDelete to RollingUpdate
-kubectl patch ds openebs-ndm -n openebs --patch='{"spec":{"updateStrategy":{"type": "RollingUpdate"}}}'
-
-#Upgrade to 0.8.1 OpenEBS Operator
-kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.8.1.yaml
-```
-
-#### Install/Upgrade using helm chart (using stable/openebs, openebs-charts repo, etc.)
-
-**The sample steps below will work if you have installed openebs with the default values provided by the stable/openebs helm chart.**
-
-Before upgrading using helm, please review the default values available with the latest stable/openebs chart (https://raw.githubusercontent.com/helm/charts/master/stable/openebs/values.yaml).
-
-- If the default values seem appropriate, you can use the below commands to update OpenEBS. [More](https://hub.helm.sh/charts/stable/openebs) details about the specific chart version.
-  ```sh
-  $ helm upgrade --reset-values <release-name> stable/openebs --version 0.8.3
-  ```
-- If not, customize the values into your copy (say custom-values.yaml) by copying the content from the default YAML above and editing the values to suit your environment. You can then upgrade using your custom values:
-  ```sh
-  $ helm upgrade <release-name> stable/openebs --version 0.8.3 -f custom-values.yaml
-  ```
-
-#### Using customized operator YAML or helm chart.
-As a first step, you must update your custom helm chart or YAML with 0.8.1 release tags and the changes made in the values/templates.
-
-You can use the following as references to know about the changes in 0.8.1:
-- openebs-charts [PR#2352](https://github.com/openebs/openebs/pull/2352) as reference.
-
-After updating the YAML or helm chart or helm chart values, you can use the above procedures to upgrade the OpenEBS Operator.
-
-## Step 3: Upgrade the OpenEBS Pools and Volumes
-
-Even after the OpenEBS Operator has been upgraded to 0.8.1, the cStor Storage Pools and volumes (both jiva and cStor) will continue to work with older versions. Use the following steps in the same order to upgrade cStor Pools and volumes.
-
-*Note: Upgrade functionality is still under active development. It is highly recommended to schedule a downtime for the application using the OpenEBS PV while performing this upgrade. Also, make sure you have taken a backup of the data before starting the below upgrade procedure.*
-
-Limitations:
-- this is a preliminary script intended only for use on volumes where data has been backed up.
-- please have the following link handy in case the volume gets into read-only during upgrade:
-  https://docs.openebs.io/docs/next/readonlyvolumes.html
-- automatic rollback option is not provided.
-  To rollback, you need to update the controller, exporter and replica pod images to the previous version.
-- in the process of running the below steps, if you run into issues, you can always reach us on slack
-
-
-### Upgrade the Jiva based OpenEBS PV
-
-Extract the PV name using `kubectl get pv`
-
-```
-NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS      REASON   AGE
-pvc-48fb36a2-947f-11e8-b1f3-42010a800004   5G         RWO            Delete           Bound    percona-test/demo-vol1-claim   openebs-percona            8m
-```
-
-```
-./jiva_volume_upgrade.sh pvc-48fb36a2-947f-11e8-b1f3-42010a800004
-```
-
-### Upgrade cStor Pools
-
-Extract the SPC name using `kubectl get spc`
-
-```
-NAME                AGE
-cstor-sparse-pool   24m
-```
-
-```
-./cstor_pool_upgrade.sh cstor-sparse-pool openebs
-```
-Make sure that this step completes successfully before proceeding to the next step.
-
-
-### Upgrade cStor Volumes
-
-Extract the PV name using `kubectl get pv`
-
-```
-NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                  STORAGECLASS           REASON   AGE
-pvc-1085415d-f84c-11e8-aadf-42010a8000bb   5G         RWO            Delete           Bound    default/demo-cstor-sparse-vol1-claim   openebs-cstor-sparse            22m
-```
-
-```
-./cstor_volume_upgrade.sh pvc-1085415d-f84c-11e8-aadf-42010a8000bb openebs
-```
diff --git a/k8s/upgrades/0.8.0-0.8.1/cr-patch.tpl.json b/k8s/upgrades/0.8.0-0.8.1/cr-patch.tpl.json
deleted file mode 100644
index 2a01f8a4f8..0000000000
--- a/k8s/upgrades/0.8.0-0.8.1/cr-patch.tpl.json
+++ /dev/null
@@ -1,7 +0,0 @@
-{
-  "metadata": {
-    "labels": {
-      "openebs.io/version": "@pool_version@"
-    }
-  }
-}
diff --git a/k8s/upgrades/0.8.0-0.8.1/cstor-pool-patch.tpl.json b/k8s/upgrades/0.8.0-0.8.1/cstor-pool-patch.tpl.json
deleted file mode 100644
index 411d17fb93..0000000000
--- a/k8s/upgrades/0.8.0-0.8.1/cstor-pool-patch.tpl.json
+++ /dev/null
@@ -1,43 +0,0 @@
-{
-  "metadata": {
-    "labels": {
-      "openebs.io/version": "@pool_version@"
-    }
-  },
-  "spec": {
-    "template": {
-      "spec": {
-        "containers": [
-          {
-            "name": "cstor-pool",
-            "image": "quay.io/openebs/cstor-pool:@pool_version@",
-            "env": [
-              {
-                "name": "OPENEBS_IO_CSTOR_ID",
-                "value": "@csp_uuid@"
-              }
-            ],
-            "livenessProbe": {
-              "exec": {
-                "command": [
-                  "/bin/sh",
-                  "-c",
-                  "zfs set io.openebs:livenesstimestap='$(date)' cstor-$OPENEBS_IO_CSTOR_ID"
-                ]
-              },
-              "failureThreshold": 3,
-              "initialDelaySeconds": 300,
-              "periodSeconds": 10,
-              "successThreshold": 1,
-              "timeoutSeconds": 30
-            }
-          },
-          {
-            "name": "cstor-pool-mgmt",
-            "image": "quay.io/openebs/cstor-pool-mgmt:@pool_version@"
-          }
-        ]
-      }
-    }
-  }
-}
diff --git a/k8s/upgrades/0.8.0-0.8.1/cstor-target-patch.tpl.json b/k8s/upgrades/0.8.0-0.8.1/cstor-target-patch.tpl.json
deleted file mode 100644
index b860e7742c..0000000000
--- a/k8s/upgrades/0.8.0-0.8.1/cstor-target-patch.tpl.json
+++ /dev/null
@@ -1,38 +0,0 @@
-{
-  "metadata": {
-    "annotations": {
-      "openebs.io/storage-class-ref": "name: @sc_name@\nresourceVersion: @sc_resource_version@\n"
-    },
-    "labels": {
-      "openebs.io/version": "@target_version@"
-    }
-  },
-  "spec": {
-    "template": {
-      "metadata": {
-        "annotations": {
-          "openebs.io/storage-class-ref": "name: @sc_name@\nresourceVersion: @sc_resource_version@\n"
-        },
-        "labels": {
-          "openebs.io/storage-class": "@sc_name@"
-        }
-      },
-      "spec": {
-        "containers": [
-          {
-            "name": "cstor-istgt",
-            "image": "quay.io/openebs/cstor-istgt:@target_version@"
-          },
-          {
-            "name": "maya-volume-exporter",
-            "image": "quay.io/openebs/m-exporter:@target_version@"
-          },
-          {
-            "name": "cstor-volume-mgmt",
-            "image": "quay.io/openebs/cstor-volume-mgmt:@target_version@"
-          }
-        ]
-      }
-    }
-  }
-}
\ No newline at end of file
diff --git a/k8s/upgrades/0.8.0-0.8.1/cstor-target-svc-patch.tpl.json b/k8s/upgrades/0.8.0-0.8.1/cstor-target-svc-patch.tpl.json
deleted file mode 100644
index 78df8a6297..0000000000
--- a/k8s/upgrades/0.8.0-0.8.1/cstor-target-svc-patch.tpl.json
+++ /dev/null
@@ -1,12 +0,0 @@
-{
-  "metadata": {
-    "annotations": {
-      "openebs.io/pvc-namespace": "@pvc-namespace@",
-      "openebs.io/storage-class-ref": "name: @sc_name@\nresourceVersion: @sc_resource_version@\n"
-    },
-    "labels": {
-      "openebs.io/version": "@target_version@",
-      "openebs.io/persistent-volume-claim": "@pvc-name@"
-    }
-  }
-}
\ No newline at end of file
diff --git a/k8s/upgrades/0.8.0-0.8.1/cstor-volume-patch.tpl.json b/k8s/upgrades/0.8.0-0.8.1/cstor-volume-patch.tpl.json
deleted file mode 100644
index 26c8b45cdb..0000000000
--- a/k8s/upgrades/0.8.0-0.8.1/cstor-volume-patch.tpl.json
+++ /dev/null
@@ -1,10 +0,0 @@
-{
-  "metadata": {
-    "annotations": {
-      "openebs.io/storage-class-ref": "name: @sc_name@\nresourceVersion: @sc_resource_version@\n"
-    },
-    "labels": {
-      "openebs.io/version": "@target_version@"
-    }
-  }
-}
\ No newline at end of file
diff --git a/k8s/upgrades/0.8.0-0.8.1/cstor-volume-replica-patch.tpl.json b/k8s/upgrades/0.8.0-0.8.1/cstor-volume-replica-patch.tpl.json
deleted file mode 100644
index 26c8b45cdb..0000000000
--- a/k8s/upgrades/0.8.0-0.8.1/cstor-volume-replica-patch.tpl.json
+++ /dev/null
@@ -1,10 +0,0 @@
-{
-  "metadata": {
-    "annotations": {
-      "openebs.io/storage-class-ref": "name: @sc_name@\nresourceVersion: @sc_resource_version@\n"
-    },
-    "labels": {
-      "openebs.io/version": "@target_version@"
-    }
-  }
-}
\ No newline at end of file
diff --git a/k8s/upgrades/0.8.0-0.8.1/cstor_pool_post_upgrade.sh b/k8s/upgrades/0.8.0-0.8.1/cstor_pool_post_upgrade.sh
deleted file mode 100755
index fde005c93e..0000000000
--- a/k8s/upgrades/0.8.0-0.8.1/cstor_pool_post_upgrade.sh
+++ /dev/null
@@ -1,65 +0,0 @@
-#!/usr/bin/env bash
-
-function usage() {
-    echo
-    echo "Usage:"
-    echo
-    echo "$0 <spc-name> <openebs-namespace>"
-    echo
-    echo "  <spc-name> Get the SPC name using: kubectl get spc"
-    echo "  <openebs-namespace> Get the namespace where pool pods"
-    echo "    corresponding to SPC are deployed"
-    exit 1
-}
-
-if [ "$#" -ne 2 ]; then
-    usage
-fi
-
-spc=$1
-ns=$2
-retry=false
-
-## Fetching the pod names corresponding to spc
-pool_pods=$(kubectl get po -n $ns \
-    -l app=cstor-pool,openebs.io/storage-pool-claim=$spc \
-    -o jsonpath='{range .items[*]}{@.metadata.name}:{end}')
-rc=$?
-if [ $rc -ne 0 ]; then
-    echo "Failed to get the pool pods related to spc $spc"
-    retry=true
-fi
-
-## Enabling quorum on the pools in the pool pods ###
-for pool_pod in `echo $pool_pods | tr ":" " "`; do
-    pool_name=""
-    cstor_uid=""
-    cstor_uid=$(kubectl get pod $pool_pod -n $ns \
-        -o jsonpath="{.spec.containers[*].env[?(@.name=='OPENEBS_IO_CSTOR_ID')].value}" | awk '{print $1}')
-    pool_name="cstor-$cstor_uid"
-    quorum_set=$(kubectl exec $pool_pod -n $ns -c cstor-pool-mgmt -- zfs set quorum=on $pool_name)
-    rc=$?
-    if [[ ($rc -ne 0) ]]; then
-        echo "Error: failed to set quorum for pool $pool_name"
-        retry=true
-    fi
-    output=$(kubectl exec $pool_pod -n $ns -c cstor-pool-mgmt -- zfs get quorum)
-    rc=$?
-    if [ $rc -ne 0 ]; then
-        echo "ERROR: while executing zfs get quorum for pool $pool_name, error: $rc"
-        retry=true
-    fi
-    no_of_non_quorum_vol=$(echo $output | grep -wo off | wc -l)
-    if [ $no_of_non_quorum_vol -ne 0 ]; then
-        echo "Quorum is still set to off for $no_of_non_quorum_vol dataset(s) in pool $pool_name"
-        retry=true
-    fi
-done
-
-if [ $retry == true ]; then
-    echo "Post-upgrade for $spc failed."
-    echo "Please retry by running: $0 $spc $ns"
-    exit 1
-fi
-
-echo "Post-upgrade for pools in $spc completed successfully"
diff --git a/k8s/upgrades/0.8.0-0.8.1/cstor_pool_upgrade.sh b/k8s/upgrades/0.8.0-0.8.1/cstor_pool_upgrade.sh
deleted file mode 100755
index 4341895ae4..0000000000
--- a/k8s/upgrades/0.8.0-0.8.1/cstor_pool_upgrade.sh
+++ /dev/null
@@ -1,165 +0,0 @@
-#!/usr/bin/env bash
-
-###########################################################################
-# STEP: Get SPC name and namespace where OpenEBS is deployed as arguments #
-#                                                                         #
-# NOTES: Obtain the pool deployments to perform upgrade operation         #
-###########################################################################
-
-pool_upgrade_version="0.8.1"
-current_version="0.8.0"
-
-function usage() {
-    echo
-    echo "Usage:"
-    echo
-    echo "$0 <spc-name> <openebs-namespace>"
-    echo
-    echo "  <spc-name> Get the SPC name using: kubectl get spc"
-    echo "  <openebs-namespace> Get the namespace where pool pods"
-    echo "    corresponding to SPC are deployed"
-    exit 1
-}
-
-##Checking the version of OpenEBS ####
-function verify_openebs_version() {
-    local resource=$1
-    local name_res=$2
-    local openebs_version=$(kubectl get $resource $name_res -n $ns \
-        -o jsonpath="{.metadata.labels.openebs\.io/version}")
-
-    if [[ $openebs_version != $current_version ]] && [[ $openebs_version != $pool_upgrade_version ]]; then
-        echo "Expected version of $name_res in $resource is $current_version but got $openebs_version";exit 1;
-    fi
-    echo $openebs_version
-}
-
-## Starting point
-if [ "$#" -ne 2 ]; then
-    usage
-fi
-
-spc=$1
-ns=$2
-
-### Get the deployment pods related to the provided spc that are not in Running state ###
-pending_pods=$(kubectl get po -n $ns \
-    -l app=cstor-pool,openebs.io/storage-pool-claim=$spc \
-    -o jsonpath='{.items[?(@.status.phase!="Running")].metadata.name}')
-
-## If any deployment pods are not in Running state, exit the upgrade process ###
-if [ $(echo $pending_pods | wc -w) -ne 0 ]; then
-    echo "To continue with the upgrade, make sure all the deployment pods corresponding to $spc are in Running state"
-    exit 1
-fi
-
-### Get the csp list which are related to the given spc ###
-csp_list=$(kubectl get csp -l openebs.io/storage-pool-claim=$spc \
-    -o jsonpath="{range .items[*]}{@.metadata.name}:{end}")
-rc=$?
-if [ $rc -ne 0 ]; then
-    echo "Failed to get csp related to spc $spc"
-    exit 1
-fi
-
-################################################################
-# STEP: Update patch files with pool upgrade version           #
-#                                                              #
-################################################################
-
-sed "s/@pool_version@/$pool_upgrade_version/g" cr-patch.tpl.json > cr_patch.json
-
-echo "Patching the csp resource"
-for csp in `echo $csp_list | tr ":" " "`; do
-    version=$(verify_openebs_version "csp" $csp)
-    rc=$?
-    if [ $rc -ne 0 ]; then
-        exit 1
-    elif [ $version == $pool_upgrade_version ]; then
-        continue
-    fi
-    ## Patching the csp resource
-    kubectl patch csp $csp -p "$(cat cr_patch.json)" --type=merge
-    rc=$?; if [ $rc -ne 0 ]; then echo "Error occurred while upgrading the csp: $csp Exit Code: $rc"; exit; fi
-done
-
-echo "Patching Pool Deployment with new image"
-for csp in `echo $csp_list | tr ":" " "`; do
-    ## Get the pool deployment corresponding to csp
-    pool_dep=$(kubectl get deploy -n $ns \
-        -l app=cstor-pool,openebs.io/storage-pool-claim=$spc \
-        -o jsonpath="{.items[?(@.metadata.labels.openebs\.io/cstor-pool=='$csp')].metadata.name}")
-
-    version=$(verify_openebs_version "deploy" $pool_dep)
-    rc=$?
-    if [ $rc -ne 0 ]; then
-        exit 1
-    elif [ $version == $pool_upgrade_version ]; then
-        continue
-    fi
-
-    ## Get the replica set corresponding to the deployment ##
-    pool_rs=$(kubectl get rs -n $ns \
-        -o jsonpath="{range .items[?(@.metadata.ownerReferences[0].name=='$pool_dep')]}{@.metadata.name}{end}")
-    echo "$pool_dep -> rs is $pool_rs"
-
-    ## Get the csp_uuid ##
-    csp_uuid="";csp_uuid=`kubectl get csp -n $ns $pool_dep -o jsonpath="{.metadata.uid}"`
-    echo "$pool_dep -> csp uuid is $csp_uuid"
-    if [ -z "$csp_uuid" ];
-    then
-        echo "Error: Unable to fetch csp uuid"; exit 1
-    fi
-
-    ## Modifies the cstor-pool-patch template with the original values ##
-    sed "s/@csp_uuid@/$csp_uuid/g" cstor-pool-patch.tpl.json | sed "s/@pool_version@/$pool_upgrade_version/g" > cstor-pool-patch.json
-
-    ## Patch the deployment file ###
-    kubectl patch deployment --namespace $ns $pool_dep -p "$(cat cstor-pool-patch.json)"
-    rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: Failed to patch $pool_dep $rc"; exit; fi
-    rollout_status=$(kubectl rollout status --namespace $ns deployment/$pool_dep)
-    rc=$?; if [[ ($rc -ne 0) || !($rollout_status =~ "successfully rolled out") ]];
-    then echo "ERROR: Failed to rollout status for $pool_dep error: $rc"; exit; fi
-
-    ## Deleting the old replica set corresponding to deployment
-    kubectl delete rs $pool_rs --namespace $ns
-
-    ## Cleaning the temporary patch file
-    rm cstor-pool-patch.json
-done
-
-### Get the sp list which are related to the given spc ###
-sp_list=$(kubectl get sp -l openebs.io/cas-type=cstor,openebs.io/storage-pool-claim=$spc \
-    -o jsonpath="{range .items[*]}{@.metadata.name}:{end}")
-rc=$?
-if [ $rc -ne 0 ]; then
-    echo "Failed to get sp related to spc $spc"
-    exit 1
-fi
-
-### Patch sp resource ###
-echo "Patching the SP resource"
-for sp in `echo $sp_list | tr ":" " "`; do
-    version=$(verify_openebs_version "sp" $sp)
-    rc=$?
-    if [ $rc -ne 0 ]; then
-        exit 1
-    elif [ $version == $pool_upgrade_version ]; then
-        continue
-    fi
-    kubectl patch sp $sp -p "$(cat cr_patch.json)" --type=merge
-    rc=$?
-    if [ $rc -ne 0 ]; then echo "Error: failed to patch for SP resource $sp Exit Code: $rc"; exit; fi
-done
-
-### Cleaning temporary patch file
-rm cr_patch.json
-
-echo "Successfully upgraded $spc to $pool_upgrade_version"
-echo "Running post pool upgrade scripts for $spc..."
-
-./cstor_pool_post_upgrade.sh $spc $ns
-rc=$?
-if [ $rc -eq 0 ]; then echo "Post-upgrade of $spc to $pool_upgrade_version completed successfully. Please run the volume upgrade scripts."; exit; fi
-
-exit 0
diff --git a/k8s/upgrades/0.8.0-0.8.1/cstor_volume_upgrade.sh b/k8s/upgrades/0.8.0-0.8.1/cstor_volume_upgrade.sh
deleted file mode 100755
index b5fbc052aa..0000000000
--- a/k8s/upgrades/0.8.0-0.8.1/cstor_volume_upgrade.sh
+++ /dev/null
@@ -1,207 +0,0 @@
-#!/usr/bin/env bash
-
-################################################################
-# STEP: Get Persistent Volume (PV) name as argument            #
-#                                                              #
-# NOTES: Obtain the pv to upgrade via "kubectl get pv"         #
-################################################################
-target_upgrade_version="0.8.1"
-current_version="0.8.0"
-
-function usage() {
-    echo
-    echo "Usage:"
-    echo
-    echo "$0 <pv-name> <openebs-namespace>"
-    echo
-    echo "  <pv-name> Get the PV name using: kubectl get pv"
-    echo "  <openebs-namespace> Get the namespace where openebs"
-    echo "    pods are installed"
-    exit 1
-}
-
-function setDeploymentRecreateStrategy() {
-    dns=$1 # deployment namespace
-    dn=$2  # deployment name
-    currStrategy=`kubectl get deploy -n $dns $dn -o jsonpath="{.spec.strategy.type}"`
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to get the deployment strategy for $dn | Exit code: $rc"; exit; fi
-
-    if [ $currStrategy != "Recreate" ]; then
-        kubectl patch deployment --namespace $dns --type json $dn -p "$(cat patch-strategy-recreate.json)"
-        rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch the deployment $dn | Exit code: $rc"; exit; fi
-        echo "Deployment upgrade strategy set as recreate"
-    else
-        echo "Deployment upgrade strategy was already set as recreate"
-    fi
-}
-
-if [ "$#" -ne 2 ]; then
-    usage
-fi
-
-pv=$1
-ns=$2
-
-# Check if pv exists
-kubectl get pv $pv &>/dev/null;check_pv=$?
-if [ $check_pv -ne 0 ]; then
-    echo "$pv not found";exit 1;
-fi
-
-# Check if CASType is cstor
-cas_type=`kubectl get pv $pv -o jsonpath="{.metadata.annotations.openebs\.io/cas-type}"`
-if [ $cas_type != "cstor" ]; then
-    echo "Cstor volume not found";exit 1;
-fi
-
-sc_ns=`kubectl get pv $pv -o jsonpath="{.spec.claimRef.namespace}"`
-sc_name=`kubectl get pv $pv -o jsonpath="{.spec.storageClassName}"`
-sc_res_ver=`kubectl get sc $sc_name -n $sc_ns -o jsonpath="{.metadata.resourceVersion}"`
-pvc_name=`kubectl get pv $pv -o jsonpath="{.spec.claimRef.name}"`
-pvc_namespace=`kubectl get pv $pv -o jsonpath="{.spec.claimRef.namespace}"`
-#################################################################
-# STEP: Generate deploy, replicaset and container names from PV #
-#                                                               #
-# NOTES: Ex: If PV="pvc-cec8e86d-0bcc-11e8-be1c-000c298ff5fc",  #
-#                                                               #
-# c-dep: pvc-cec8e86d-0bcc-11e8-be1c-000c298ff5fc-target        #
-#################################################################
-
-c_dep=$(kubectl get deploy -n $ns -l openebs.io/persistent-volume=$pv,openebs.io/target=cstor-target -o jsonpath="{.items[*].metadata.name}")
-c_svc=$(kubectl get svc -n $ns -l openebs.io/persistent-volume=$pv,openebs.io/target-service=cstor-target-svc -o jsonpath="{.items[*].metadata.name}")
-c_vol=$(kubectl get cstorvolumes -l openebs.io/persistent-volume=$pv -n $ns -o jsonpath="{.items[*].metadata.name}")
-c_replicas=$(kubectl get cvr -n $ns -l openebs.io/persistent-volume=$pv -o jsonpath="{range .items[*]}{@.metadata.name};{end}" | tr ";" "\n")
-
-# Fetch the older target and replica - ReplicaSet objects which need to be
-# deleted before upgrading. If not deleted, the new pods will be stuck in
-# creating state - due to affinity rules.
-
-c_rs=$(kubectl get rs -n $ns -o name -l openebs.io/persistent-volume=$pv | cut -d '/' -f 2)
-
-
-# Check if openebs resources exist and provisioned version is 0.8
-
-if [[ -z $c_rs ]]; then
-    echo "Target Replica set not found"; exit 1;
-fi
-
-if [[ -z $c_dep ]]; then
-    echo "Target deployment not found"; exit 1;
-fi
-
-if [[ -z $c_svc ]]; then
-    echo "Target svc not found";exit 1;
-fi
-
-if [[ -z $c_vol ]]; then
-    echo "CstorVolumes CR not found"; exit 1;
-fi
-
-if [[ -z $c_replicas ]]; then
-    echo "Cstor Volume Replica CR not found"; exit 1;
-fi
-
-controller_version=`kubectl get deployment $c_dep -n $ns -o jsonpath='{.metadata.labels.openebs\.io/version}'`
-if [[ "$controller_version" != "$current_version" ]] && [[ "$controller_version" != "$target_upgrade_version" ]] ; then
-    echo "Current cstor target deployment $c_dep version is not $current_version or $target_upgrade_version";exit 1;
-fi
-
-controller_service_version=`kubectl get svc $c_svc -n $ns -o jsonpath='{.metadata.labels.openebs\.io/version}'`
-if [[ "$controller_service_version" != "$current_version" ]] && [[ "$controller_service_version" != "$target_upgrade_version" ]]; then
-    echo "Current cstor target service $c_svc version is not $current_version or $target_upgrade_version";exit 1;
-fi
-
-cstor_volume_version=`kubectl get cstorvolumes $c_vol -n $ns -o jsonpath='{.metadata.labels.openebs\.io/version}'`
-if [[ "$cstor_volume_version" != "$current_version" ]] && [[ "$cstor_volume_version" != "$target_upgrade_version" ]]; then
-    echo "Current cstor volume $c_vol version is not $current_version or $target_upgrade_version";exit 1;
-fi
-
-for replica in $c_replicas
-do
-    replica_version=`kubectl get cvr $replica -n $ns -o jsonpath='{.metadata.labels.openebs\.io/version}'`
-    if [[ "$replica_version" != "$current_version" ]] && [[ "$replica_version" != "$target_upgrade_version" ]]; then
-        echo "CStor volume replica $replica version is not $current_version or $target_upgrade_version"; exit 1;
-    fi
-done
-
-
-################################################################
-# STEP: Update patch files with appropriate resource names     #
-#                                                              #
-# NOTES: Placeholders for resource names in the patch files    #
-# are replaced with respective values derived from the PV in   #
-# the previous step                                            #
-################################################################
-
-sed "s/@sc_name@/$sc_name/g" cstor-target-patch.tpl.json | sed "s/@sc_resource_version@/$sc_res_ver/g" | sed "s/@target_version@/$target_upgrade_version/g" > cstor-target-patch.json
-sed "s/@sc_name@/$sc_name/g" cstor-target-svc-patch.tpl.json | sed "s/@sc_resource_version@/$sc_res_ver/g" | sed "s/@target_version@/$target_upgrade_version/g" | sed "s/@pvc-name@/$pvc_name/g" | sed "s/@pvc-namespace@/$pvc_namespace/g" > cstor-target-svc-patch.json
-sed "s/@sc_name@/$sc_name/g" cstor-volume-patch.tpl.json | sed "s/@sc_resource_version@/$sc_res_ver/g" | sed "s/@target_version@/$target_upgrade_version/g" > cstor-volume-patch.json
-sed "s/@sc_name@/$sc_name/g" cstor-volume-replica-patch.tpl.json | sed "s/@sc_resource_version@/$sc_res_ver/g" | sed "s/@target_version@/$target_upgrade_version/g" > cstor-volume-replica-patch.json
-
-#################################################################################
-# STEP: Patch OpenEBS volume deployments (cstor-target, cstor-svc)              #
-#################################################################################
-
-
-# #### PATCH TARGET DEPLOYMENT ####
-
-if [[ "$controller_version" != "$target_upgrade_version" ]]; then
-    echo "Upgrading Target Deployment to $target_upgrade_version"
-
-    # Setting deployment strategy to recreate
-    setDeploymentRecreateStrategy $ns $c_dep
-
-    kubectl patch deployment --namespace $ns $c_dep -p "$(cat cstor-target-patch.json)"
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch cstor target deployment $c_dep | Exit code: $rc"; exit; fi
-
-    kubectl delete rs $c_rs --namespace $ns
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to delete cstor replica set $c_rs | Exit code: $rc"; exit; fi
-
-    rollout_status=$(kubectl rollout status --namespace $ns deployment/$c_dep)
-    rc=$?; if [[ ($rc -ne 0) || !($rollout_status =~ "successfully rolled out") ]];
-    then echo "Failed to rollout for deployment $c_dep | Exit code: $rc"; exit; fi
-else
-    echo "Target deployment $c_dep is already at $target_upgrade_version"
-fi
-
-# #### PATCH TARGET SERVICE ####
-if [[ "$controller_service_version" != "$target_upgrade_version" ]]; then
-    echo "Upgrading Target Service to $target_upgrade_version"
-    kubectl patch service --namespace $ns $c_svc -p "$(cat cstor-target-svc-patch.json)"
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch service $c_svc | Exit code: $rc"; exit; fi
-else
-    echo "Target service $c_svc is already at $target_upgrade_version"
-fi
-
-# #### PATCH CSTOR Volume CR ####
-if [[ "$cstor_volume_version" != "$target_upgrade_version" ]]; then
-    echo "Upgrading cstor volume CR to $target_upgrade_version"
-    kubectl patch cstorvolume --namespace $ns $c_vol -p "$(cat cstor-volume-patch.json)" --type=merge
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch cstor volumes CR $c_vol | Exit code: $rc"; exit; fi
-else
-    echo "CStor volume CR $c_vol is already at $target_upgrade_version"
-fi
-
-# #### PATCH CSTOR Volume Replica CR ####
-
-for replica in $c_replicas
-do
-    if [[ "`kubectl get cvr $replica -n $ns -o jsonpath='{.metadata.labels.openebs\.io/version}'`" != "$target_upgrade_version" ]]; then
-        echo "Upgrading cstor volume replica $replica to $target_upgrade_version"
-        kubectl patch cvr $replica --namespace $ns -p "$(cat cstor-volume-replica-patch.json)" --type=merge
-        rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch CstorVolumeReplica $replica | Exit code: $rc"; exit; fi
-        echo "Successfully updated replica: $replica"
-    else
-        echo "cstor replica $replica is already at $target_upgrade_version"
-    fi
-done
-
-echo "Clearing temporary files"
-rm cstor-target-patch.json
-rm cstor-target-svc-patch.json
-rm cstor-volume-patch.json
-rm cstor-volume-replica-patch.json
-
-echo "Successfully upgraded $pv to $target_upgrade_version. Please run your application checks."
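-# Example invocation (hypothetical PV name and namespace, shown only for illustration):
-#   ./cstor_volume_upgrade.sh pvc-1085415d-f84c-11e8-aadf-42010a8000bb openebs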
-exit 0
-
diff --git a/k8s/upgrades/0.8.0-0.8.1/jiva-replica-patch.tpl.json b/k8s/upgrades/0.8.0-0.8.1/jiva-replica-patch.tpl.json
deleted file mode 100644
index 9905041db6..0000000000
--- a/k8s/upgrades/0.8.0-0.8.1/jiva-replica-patch.tpl.json
+++ /dev/null
@@ -1,30 +0,0 @@
-{
-  "metadata": {
-    "annotations": {
-      "openebs.io/storage-class-ref": "name: @sc_name@\nresourceVersion: @sc_resource_version@\n"
-    },
-    "labels": {
-      "openebs.io/version": "@target_version@"
-    }
-  },
-  "spec": {
-    "template": {
-      "metadata": {
-        "annotations": {
-          "openebs.io/storage-class-ref": "name: @sc_name@\nresourceVersion: @sc_resource_version@\n"
-        }
-      },
-      "spec": {
-        "containers": [
-          {
-            "name": "@r_name@",
-            "image": "quay.io/openebs/jiva:@target_version@"
-          }
-        ],
-        "nodeSelector": {
-          "openebs-pv-@pv-name@": "@replica_node_label@"
-        }
-      }
-    }
-  }
-}
\ No newline at end of file
diff --git a/k8s/upgrades/0.8.0-0.8.1/jiva-target-patch.tpl.json b/k8s/upgrades/0.8.0-0.8.1/jiva-target-patch.tpl.json
deleted file mode 100644
index 75a6ac5710..0000000000
--- a/k8s/upgrades/0.8.0-0.8.1/jiva-target-patch.tpl.json
+++ /dev/null
@@ -1,31 +0,0 @@
-{
-  "metadata": {
-    "annotations": {
-      "openebs.io/storage-class-ref": "name: @sc_name@\nresourceVersion: @sc_resource_version@\n"
-    },
-    "labels": {
-      "openebs.io/version": "@target_version@"
-    }
-  },
-  "spec": {
-    "template": {
-      "metadata": {
-        "annotations": {
-          "openebs.io/storage-class-ref": "name: @sc_name@\nresourceVersion: @sc_resource_version@\n"
-        }
-      },
-      "spec": {
-        "containers": [
-          {
-            "name": "@c_name@",
-            "image": "quay.io/openebs/jiva:@target_version@"
-          },
-          {
-            "name": "maya-volume-exporter",
-            "image": "quay.io/openebs/m-exporter:@target_version@"
-          }
-        ]
-      }
-    }
-  }
-}
\ No newline at end of file
diff --git a/k8s/upgrades/0.8.0-0.8.1/jiva-target-svc-patch.tpl.json b/k8s/upgrades/0.8.0-0.8.1/jiva-target-svc-patch.tpl.json
deleted file mode 100644
index 26c8b45cdb..0000000000
--- a/k8s/upgrades/0.8.0-0.8.1/jiva-target-svc-patch.tpl.json
+++ /dev/null
@@ -1,10 +0,0 @@
-{
-  "metadata": {
-    "annotations": {
-      "openebs.io/storage-class-ref": "name: @sc_name@\nresourceVersion: @sc_resource_version@\n"
-    },
-    "labels": {
-      "openebs.io/version": "@target_version@"
-    }
-  }
-}
\ No newline at end of file
diff --git a/k8s/upgrades/0.8.0-0.8.1/jiva_volume_upgrade.sh b/k8s/upgrades/0.8.0-0.8.1/jiva_volume_upgrade.sh
deleted file mode 100755
index 708b4249d1..0000000000
--- a/k8s/upgrades/0.8.0-0.8.1/jiva_volume_upgrade.sh
+++ /dev/null
@@ -1,218 +0,0 @@
-#!/usr/bin/env bash
-################################################################
-# STEP: Get Persistent Volume (PV) name as argument            #
-#                                                              #
-# NOTES: Obtain the pv to upgrade via "kubectl get pv"         #
-################################################################
-
-target_upgrade_version="0.8.1"
-current_version="0.8.0"
-
-function usage() {
-    echo
-    echo "Usage:"
-    echo
-    echo "$0 <pv-name>"
-    echo
-    echo "  <pv-name> Get the PV name using: kubectl get pv"
-    exit 1
-}
-
-function setDeploymentRecreateStrategy() {
-    dns=$1 # deployment namespace
-    dn=$2  # deployment name
-    currStrategy=`kubectl get deploy -n $dns $dn -o jsonpath="{.spec.strategy.type}"`
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to get the deployment strategy for $dn | Exit code: $rc"; exit; fi
-
-    if [ $currStrategy != "Recreate" ]; then
-        kubectl patch deployment --namespace $dns --type json $dn -p "$(cat patch-strategy-recreate.json)"
-        rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch the deployment $dn | Exit code: $rc"; exit; fi
-        echo "Deployment upgrade strategy set as recreate"
-    else
-        echo "Deployment upgrade strategy was already set as recreate"
-    fi
-}
-
-if [ "$#" -ne 1 ]; then
-    usage
-fi
-
-pv=$1
-replica_node_label="openebs-jiva"
-
-# Check if pv exists
-kubectl get pv $pv &>/dev/null;check_pv=$?
-if [ $check_pv -ne 0 ]; then
-    echo "$pv not found";exit 1;
-fi
-
-# Check if CASType is jiva
-cas_type=`kubectl get pv $pv -o jsonpath="{.metadata.annotations.openebs\.io/cas-type}"`
-if [ $cas_type != "jiva" ]; then
-    echo "Jiva volume not found";exit 1;
-fi
-
-ns=`kubectl get pv $pv -o jsonpath="{.spec.claimRef.namespace}"`
-sc_name=`kubectl get pv $pv -o jsonpath="{.spec.storageClassName}"`
-sc_res_ver=`kubectl get sc $sc_name -n $ns -o jsonpath="{.metadata.resourceVersion}"`
-
-#################################################################
-# STEP: Generate deploy, replicaset and container names from PV #
-#                                                               #
-# NOTES: Ex: If PV="pvc-cec8e86d-0bcc-11e8-be1c-000c298ff5fc"   #
-#                                                               #
-# ctrl-dep: pvc-cec8e86d-0bcc-11e8-be1c-000c298ff5fc-ctrl       #
-#################################################################
-
-c_dep=$(kubectl get deploy -n $ns -l openebs.io/persistent-volume=$pv,openebs.io/controller=jiva-controller -o jsonpath="{.items[*].metadata.name}")
-r_dep=$(kubectl get deploy -n $ns -l openebs.io/persistent-volume=$pv,openebs.io/replica=jiva-replica -o jsonpath="{.items[*].metadata.name}")
-c_svc=$(kubectl get svc -n $ns -l openebs.io/persistent-volume=$pv -o jsonpath="{.items[*].metadata.name}")
-c_name=$(kubectl get deploy -n $ns $c_dep -o jsonpath="{range .spec.template.spec.containers[*]}{@.name}{'\n'}{end}" | grep "con")
-r_name=$(kubectl get deploy -n $ns $r_dep -o jsonpath="{range .spec.template.spec.containers[*]}{@.name}{'\n'}{end}" | grep "con")
-
-# Fetch the older target and replica - ReplicaSet objects which need to be
-# deleted before upgrading. If not deleted, the new pods will be stuck in
-# creating state - due to affinity rules.
-
-c_rs=$(kubectl get rs -o name --namespace $ns -l openebs.io/persistent-volume=$pv,openebs.io/controller=jiva-controller | cut -d '/' -f 2)
-r_rs=$(kubectl get rs -o name --namespace $ns -l openebs.io/persistent-volume=$pv,openebs.io/replica=jiva-replica | cut -d '/' -f 2)
-
-################################################################
-# STEP: Update patch files with appropriate resource names     #
-#                                                              #
-# NOTES: Placeholders for resource names in the patch files    #
-# are replaced with respective values derived from the PV in   #
-# the previous step                                            #
-################################################################
-
-# Check if openebs resources exist and provisioned version is 0.8
-
-if [[ -z $c_rs ]]; then
-    echo "Target Replica set not found"; exit 1;
-fi
-
-if [[ -z $r_rs ]]; then
-    echo "Replica Replica set not found"; exit 1;
-fi
-
-if [[ -z $c_dep ]]; then
-    echo "Target deployment not found"; exit 1;
-fi
-
-if [[ -z $r_dep ]]; then
-    echo "Replica deployment not found"; exit 1;
-fi
-
-if [[ -z $c_svc ]]; then
-    echo "Target service not found"; exit 1;
-fi
-
-if [[ -z $r_name ]]; then
-    echo "Replica container not found"; exit 1;
-fi
-
-if [[ -z $c_name ]]; then
-    echo "Target container not found"; exit 1;
-fi
-
-controller_version=`kubectl get deployment $c_dep -n $ns -o jsonpath='{.metadata.labels.openebs\.io/version}'`
-if [[ "$controller_version" != "$current_version" ]] && [[ "$controller_version" != "$target_upgrade_version" ]]; then
-    echo "Current Target deployment $c_dep version is not $current_version or $target_upgrade_version";exit 1;
-fi
-replica_version=`kubectl get deployment $r_dep -n $ns -o jsonpath='{.metadata.labels.openebs\.io/version}'`
-if [[ "$replica_version" != "$current_version" ]] && [[ "$replica_version" != "$target_upgrade_version" ]]; then
-    echo "Current Replica deployment $r_dep version is not $current_version or $target_upgrade_version";exit 1;
-fi
-
-controller_svc_version=`kubectl get svc $c_svc -n $ns -o jsonpath='{.metadata.labels.openebs\.io/version}'`
-if [[ "$controller_svc_version" != $current_version ]] && [[ "$controller_svc_version" != "$target_upgrade_version" ]] ; then
-    echo "Current Target service $c_svc version is not $current_version or $target_upgrade_version";exit 1;
-fi
-
-# Get the number of replicas configured.
-# This field is currently not used, but can add additional validations
-# based on the nodes and expected number of replicas
-rep_count=`kubectl get deploy $r_dep --namespace $ns -o jsonpath="{.spec.replicas}"`
-
-# Get the list of nodes where replica pods are running, delimited by ':'
-rep_nodenames=`kubectl get pods -n $ns \
-    -l "openebs.io/persistent-volume=$pv" -l "openebs.io/replica=jiva-replica" \
-    -o jsonpath="{range .items[*]}{@.spec.nodeName}:{end}"`
-
-echo "Checking if the node with replica pod has been labeled with $replica_node_label"
-for rep_node in `echo $rep_nodenames | tr ":" " "`; do
-    nl="";nl=`kubectl get nodes $rep_node -o jsonpath="{.metadata.labels.openebs-pv-$pv}"`
-    if [ -z "$nl" ];
-    then
-        echo "Labeling $rep_node";
-        kubectl label node $rep_node "openebs-pv-${pv}=$replica_node_label"
-    fi
-done
-
-
-sed "s/@sc_name@/$sc_name/g" jiva-replica-patch.tpl.json | sed -u "s/@sc_resource_version@/$sc_res_ver/g" | sed -u "s/@replica_node_label@/$replica_node_label/g" | sed -u "s/@r_name@/$r_name/g" | sed -u "s/@pv-name@/$pv/g" | sed -u "s/@target_version@/$target_upgrade_version/g" > jiva-replica-patch.json
-sed "s/@sc_name@/$sc_name/g" jiva-target-patch.tpl.json | sed -u "s/@sc_resource_version@/$sc_res_ver/g" | sed -u "s/@c_name@/$c_name/g" | sed -u "s/@target_version@/$target_upgrade_version/g" > jiva-target-patch.json
-sed "s/@sc_name@/$sc_name/g" jiva-target-svc-patch.tpl.json | sed -u "s/@sc_resource_version@/$sc_res_ver/g" | sed -u "s/@target_version@/$target_upgrade_version/g" > jiva-target-svc-patch.json
-
-#################################################################################
-# STEP: Patch OpenEBS volume deployments (jiva-target, jiva-replica & jiva-svc) #
-#################################################################################
-
-# PATCH JIVA REPLICA DEPLOYMENT ####
-if [[ "$replica_version" != "$target_upgrade_version" ]]; then
-    echo "Upgrading Replica Deployment to $target_upgrade_version"
-
-    # Setting the update strategy to recreate
-    setDeploymentRecreateStrategy $ns $r_dep
-
-    kubectl patch deployment --namespace $ns $r_dep -p "$(cat jiva-replica-patch.json)"
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch the deployment $r_dep | Exit code: $rc"; exit; fi
-
-    kubectl delete rs $r_rs --namespace $ns
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to delete ReplicaSet $r_rs | Exit code: $rc"; exit; fi
-
-    rollout_status=$(kubectl rollout status --namespace $ns deployment/$r_dep)
-    rc=$?; if [[ ($rc -ne 0) || !($rollout_status =~ "successfully rolled out") ]];
-    then echo "Rollout for $r_dep failed | Exit code: $rc"; exit; fi
-else
-    echo "Replica Deployment $r_dep is already at $target_upgrade_version"
-fi
-
-# #### PATCH TARGET DEPLOYMENT ####
-if [[ "$controller_version" != "$target_upgrade_version" ]]; then
-    echo "Upgrading Target Deployment to $target_upgrade_version"
-
-    # Setting the update strategy to recreate
-    setDeploymentRecreateStrategy $ns $c_dep
-
-    kubectl patch deployment --namespace $ns $c_dep -p "$(cat jiva-target-patch.json)"
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch deployment $c_dep | Exit code: $rc"; exit; fi
-
-    kubectl delete rs $c_rs --namespace $ns
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to delete ReplicaSet $c_rs | Exit code: $rc"; exit; fi
-
-    rollout_status=$(kubectl rollout status --namespace $ns deployment/$c_dep)
-    rc=$?; if [[ ($rc -ne 0) || !($rollout_status =~ "successfully rolled out") ]];
-    then echo "Rollout for $c_dep failed | Exit code: $rc"; exit; fi
-else
echo "Controller Deployment $c_dep is already at $target_upgrade_version" - -fi - -# #### PATCH TARGET SERVICE #### -if [[ "$controller_svc_version" != "$target_upgrade_version" ]]; then - echo "Upgrading Target Service to $target_upgrade_version" - kubectl patch service --namespace $ns $c_svc -p "$(cat jiva-target-svc-patch.json)" - rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch the service $svc | Exit code: $rc"; exit; fi -else - echo "Controller service $c_svc is already at $target_upgrade_version" -fi - -echo "Clearing temporary files" -rm jiva-replica-patch.json -rm jiva-target-patch.json -rm jiva-target-svc-patch.json - -echo "Successfully upgraded $pv to $target_upgrade_version Please run your application checks." -exit 0 - diff --git a/k8s/upgrades/0.8.0-0.8.1/labeltagger.sh b/k8s/upgrades/0.8.0-0.8.1/labeltagger.sh deleted file mode 100755 index 02409b90a9..0000000000 --- a/k8s/upgrades/0.8.0-0.8.1/labeltagger.sh +++ /dev/null @@ -1,48 +0,0 @@ -#!/usr/bin/env bash -##################################################################### -# NOTES: This script finds unlabeled volume resources of openebs # -##################################################################### - -function usage() { - echo - echo "Usage: This script adds openebs.io/version label to unlabeled volume resources of openebs" - echo - echo "$0 " - echo - echo "Example: $0 0.8.0" - exit 1 -} - -if [ "$#" -ne 1 ]; then - usage -fi - -currentVersion=$1 -echo $currentVersion - -echo "#!/usr/bin/env bash" > label.sh -echo "set -e" >> label.sh - -echo "##### Creating the tag script #####" -# Adding cstor resources -kubectl get cstorvolume --all-namespaces -l 'openebs.io/version notin (0.8.1), openebs.io/version notin (0.8.0)' -o jsonpath="{range .items[*]}kubectl label {@.kind} {@.metadata.name} openebs.io/version=$currentVersion -n {@.metadata.namespace} --overwrite=true;{end}" | tr ";" "\n" >> label.sh -kubectl get cstorvolumereplicas --all-namespaces -l 'openebs.io/version notin (0.8.1), openebs.io/version notin (0.8.0)' -o jsonpath="{range .items[*]}kubectl label {@.kind} {@.metadata.name} openebs.io/version=$currentVersion -n {@.metadata.namespace} --overwrite=true;{end}" | tr ";" "\n" >> label.sh -kubectl get service --all-namespaces -l 'openebs.io/version notin (0.8.1), openebs.io/version notin (0.8.0), openebs.io/target-service in (cstor-target-svc)' -o jsonpath="{range .items[*]}kubectl label {@.kind} {@.metadata.name} openebs.io/version=$currentVersion -n {@.metadata.namespace} --overwrite=true;{end}" | tr ";" "\n" >> label.sh -kubectl get deployment --all-namespaces -l 'openebs.io/version notin (0.8.1), openebs.io/version notin (0.8.0), openebs.io/target in (cstor-target)' -o jsonpath="{range .items[*]}kubectl label {@.kind} {@.metadata.name} openebs.io/version=$currentVersion -n {@.metadata.namespace} --overwrite=true;{end}" | tr ";" "\n" >> label.sh - -# Adding jiva resources -kubectl get service --all-namespaces -l 'openebs.io/version notin (0.8.1), openebs.io/version notin (0.8.0), openebs.io/controller-service in (jiva-controller-svc)' -o jsonpath="{range .items[*]}kubectl label {@.kind} {@.metadata.name} openebs.io/version=$currentVersion -n {@.metadata.namespace} --overwrite=true;{end}" | tr ";" "\n" >> label.sh -kubectl get deployment --all-namespaces -l 'openebs.io/version notin (0.8.1), openebs.io/version notin (0.8.0), openebs.io/replica in (replica)' -o jsonpath="{range .items[*]}kubectl label {@.kind} {@.metadata.name} openebs.io/version=$currentVersion -n {@.metadata.namespace} 
-kubectl get deployment --all-namespaces -l 'openebs.io/version notin (0.8.1), openebs.io/version notin (0.8.0), openebs.io/controller in (jiva-controller)' -o jsonpath="{range .items[*]}kubectl label {@.kind} {@.metadata.name} openebs.io/version=$currentVersion -n {@.metadata.namespace} --overwrite=true;{end}" | tr ";" "\n" >> label.sh
-
-# Adding pool resources
-kubectl get csp -l 'openebs.io/version notin (0.8.1), openebs.io/version notin (0.8.0)' -o jsonpath="{range .items[*]}kubectl label {@.kind} {@.metadata.name} openebs.io/version=$currentVersion --overwrite=true;{end}" | tr ";" "\n" >> label.sh
-kubectl get deployment --all-namespaces -l 'openebs.io/version notin (0.8.1), openebs.io/version notin (0.8.0), app in (cstor-pool)' -o jsonpath="{range .items[*]}kubectl label {@.kind} {@.metadata.name} openebs.io/version=$currentVersion -n {@.metadata.namespace} --overwrite=true;{end}" | tr ";" "\n" >> label.sh
-kubectl get sp -l 'openebs.io/version notin (0.8.1), openebs.io/version notin (0.8.0), openebs.io/cas-type in (cstor)' -o jsonpath="{range .items[*]}kubectl label {@.kind} {@.metadata.name} openebs.io/version=$currentVersion --overwrite=true;{end}" | tr ";" "\n" >> label.sh
-
-# Running the label.sh
-chmod +x ./label.sh
-./label.sh
-
-# Removing the generated script
-rm label.sh
\ No newline at end of file
diff --git a/k8s/upgrades/0.8.0-0.8.1/patch-strategy-recreate.json b/k8s/upgrades/0.8.0-0.8.1/patch-strategy-recreate.json
deleted file mode 100644
index 8c6c5c60af..0000000000
--- a/k8s/upgrades/0.8.0-0.8.1/patch-strategy-recreate.json
+++ /dev/null
@@ -1,4 +0,0 @@
-[
-  { "op": "remove", "path": "/spec/strategy/rollingUpdate" },
-  { "op": "replace", "path": "/spec/strategy/type", "value": "Recreate" }
-]
diff --git a/k8s/upgrades/0.8.0-0.8.1/pre-check.sh b/k8s/upgrades/0.8.0-0.8.1/pre-check.sh
deleted file mode 100755
index 5f2126c671..0000000000
--- a/k8s/upgrades/0.8.0-0.8.1/pre-check.sh
+++ /dev/null
@@ -1,71 +0,0 @@
-#!/usr/bin/env bash
-#####################################################################
-# NOTES: This script finds unlabeled volume resources of openebs    #
-#####################################################################
-
-# Search for CStor Resources
-printf "############## Unlabeled CStor Volumes Resources ##############\n\n"
-
-printf "CStor Volumes:\n"
-echo "--------------"
-printf "\n"
-# Search for CStor Volumes
-kubectl get cstorvolume --all-namespaces -l 'openebs.io/version notin (0.8.1), openebs.io/version notin (0.8.0)'
-
-printf "\nCStor Volumes Replicas:\n"
-echo "-----------------------"
-printf "\n"
-# Search for CStor Volume Replicas
-kubectl get cstorvolumereplicas --all-namespaces -l 'openebs.io/version notin (0.8.1), openebs.io/version notin (0.8.0)'
-
-printf "\nCStor Target service:\n"
-echo "---------------------"
-printf "\n"
-# Search for CStor Target Services
-kubectl get service --all-namespaces -l 'openebs.io/version notin (0.8.1), openebs.io/version notin (0.8.0), openebs.io/target-service in (cstor-target-svc)'
-
-printf "\nCStor Target Deployment:\n"
-echo "------------------------"
-printf "\n"
-# Search for CStor Target Deployments
-kubectl get deployment --all-namespaces -l 'openebs.io/version notin (0.8.1), openebs.io/version notin (0.8.0), openebs.io/target in (cstor-target)'
-
-
-printf "\n\n############## Unlabeled Jiva Volumes Resources ##############\n\n"
-
-printf "\nJiva Controller service:\n"
-echo "------------------------"
-printf "\n"
-# Search for Jiva Controller Services
Services
-kubectl get service --all-namespaces -l 'openebs.io/version notin (0.8.1), openebs.io/version notin (0.8.0), openebs.io/controller-service in (jiva-controller-svc)'
-
-printf "\nJiva Replica Deployment:\n"
-echo "------------------------"
-printf "\n"
-# Search for Jiva Replica Deployment
-kubectl get deployment --all-namespaces -l 'openebs.io/version notin (0.8.1), openebs.io/version notin (0.8.0), openebs.io/replica in (replica)'
-
-printf "\nJiva Controller Deployment:\n"
-echo "------------------------"
-printf "\n"
-# Search for Jiva Controller Deployment
-kubectl get deployment --all-namespaces -l 'openebs.io/version notin (0.8.1), openebs.io/version notin (0.8.0), openebs.io/controller in (jiva-controller)'
-
-printf "\n\n############## Storage Pool Resources ##############\n\n"
-
-printf "\nCStor Pool:\n"
-echo "-----------"
-printf "\n"
-kubectl get csp -l 'openebs.io/version notin (0.8.1), openebs.io/version notin (0.8.0)'
-
-printf "\nCStor Pool Deployments:\n"
-echo "-----------------------"
-printf "\n"
-kubectl get deployment --all-namespaces -l 'openebs.io/version notin (0.8.1), openebs.io/version notin (0.8.0), app in (cstor-pool)'
-
-printf "\nStorage Pool:\n"
-echo "------------"
-printf "\n"
-kubectl get sp -l 'openebs.io/version notin (0.8.1), openebs.io/version notin (0.8.0), openebs.io/cas-type in (cstor)'
-
-printf "Note: The unlabeled resources can be tagged with the correct version of openebs using labeltagger.sh.\n Example: ./labeltagger.sh 0.8.0"
\ No newline at end of file
diff --git a/k8s/upgrades/0.8.1-0.8.2/README.md b/k8s/upgrades/0.8.1-0.8.2/README.md
deleted file mode 100644
index afb1b4685e..0000000000
--- a/k8s/upgrades/0.8.1-0.8.2/README.md
+++ /dev/null
@@ -1,129 +0,0 @@
-# UPGRADE FROM OPENEBS 0.8.1 TO 0.8.2
-
-## Overview
-
-This document describes the steps for upgrading OpenEBS from 0.8.1 to 0.8.2
-
-The upgrade of OpenEBS is a three step process:
-- *Step 1* - Checking the openebs version labels
-- *Step 2* - Upgrade the OpenEBS Operator
-- *Step 3* - Upgrade the OpenEBS Volumes from previous versions (0.8.1)
-
-#### Note: It is mandatory to make sure that all volumes are running at version 0.8.1 before the upgrade.
-
-### Terminology
-- *OpenEBS Operator : Refers to maya-apiserver & openebs-provisioner along w/ respective services, service a/c, roles, rolebindings*
-- *OpenEBS Volume: Storage Engine pods like cStor or Jiva controller(aka target) & replica pods*
-
-## Prerequisites
-
-*All steps described in this document need to be performed on the Kubernetes master or from a machine that has access to Kubernetes master*
-
-### Download the upgrade scripts
-
-The easiest way to get all the upgrade scripts is via git clone.
-
-```
-mkdir upgrade-openebs
-cd upgrade-openebs
-git clone https://github.com/openebs/openebs.git
-cd openebs/k8s/upgrades/0.8.1-0.8.2/
-```
-
-## Step 1: Checking the OpenEBS current version.
-
-#### Please make sure that the current OpenEBS version is 0.8.1 before proceeding to step 2.
-
-## Step 2: Upgrade the OpenEBS Operator
-
-### Upgrading OpenEBS Operator CRDs and Deployments
-
-The upgrade steps vary depending on the way OpenEBS was installed. Select one of the following:
-
-#### Install/Upgrade using kubectl (using openebs-operator.yaml )
-
-**The sample steps below will work if you have installed openebs without modifying the default values in openebs-operator.yaml. If you have customized it for your cluster, you will have to download the 0.8.2 openebs-operator.yaml and customize it again**
-
-```
-#Upgrade to 0.8.2 OpenEBS Operator
-kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.8.2.yaml
-```
-
-#### Install/Upgrade using helm chart (using stable/openebs, openebs-charts repo, etc.,)
-
-**The sample steps below will work if you have installed openebs with the default values provided by the stable/openebs helm chart.**
-
-Before upgrading using helm, please review the default values available with the latest stable/openebs chart (https://raw.githubusercontent.com/helm/charts/master/stable/openebs/values.yaml).
-
-- If the default values seem appropriate, you can use the below commands to update OpenEBS. [More](https://hub.helm.sh/charts/stable/openebs) details about the specific chart version.
-  ```sh
-  $ helm upgrade --reset-values stable/openebs --version 0.8.6
-  ```
-- If not, customize the values into your copy (say custom-values.yaml) by copying the content from the above default yamls and editing the values to suit your environment. You can upgrade using your custom values using:
-  ```sh
-  $ helm upgrade stable/openebs --version 0.8.6 -f custom-values.yaml
-  ```
-
-#### Using customized operator YAML or helm chart.
-As a first step, you must update your custom helm chart or YAML with 0.8.2 release tags and the changes made in the values/templates.
-
-You can use the following as references to know about the changes in 0.8.2:
-- openebs-charts [PR####](https://github.com/openebs/openebs/pull/2352) as reference.
-
-After updating the YAML or helm chart or helm chart values, you can use the above procedures to upgrade the OpenEBS Operator.
-
-## Step 3: Upgrade the OpenEBS Pools and Volumes
-
-Even after the OpenEBS Operator has been upgraded to 0.8.2, the cStor Storage Pools and volumes (both jiva and cStor) will continue to work with older versions. Use the following steps in the same order to upgrade cStor Pools and volumes.
-
-*Note: Upgrade functionality is still under active development. It is highly recommended to schedule a downtime for the application using the OpenEBS PV while performing this upgrade. Also, make sure you have taken a backup of the data before starting the below upgrade procedure.*
-
-Limitations:
-- this is a preliminary script only intended for use on volumes where data has been backed up.
-- please have the following link handy in case the volume gets into read-only during upgrade - https://docs.openebs.io/docs/next/readonlyvolumes.html
-- an automatic rollback option is not provided. To roll back, you need to update the controller, exporter and replica pod images to the previous version
-- in the process of running the below steps, if you run into issues, you can always reach us on slack
-
-
-### Upgrade the Jiva based OpenEBS PV
-
-Extract the PV name using `kubectl get pv`
-
-```
-NAME                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS      REASON   AGE
-pvc-48fb36a2-947f-11e8-b1f3-42010a800004   5G         RWO            Delete           Bound    percona-test/demo-vol1-claim   openebs-percona            8m
-```
-
-```
-./jiva_volume_upgrade.sh pvc-48fb36a2-947f-11e8-b1f3-42010a800004
-```
-
-### Upgrade cStor Pools
-
-Extract the SPC name using `kubectl get spc`
-
-```
-NAME                AGE
-cstor-sparse-pool   24m
-```
-
-```
-./cstor_pool_upgrade.sh cstor-sparse-pool openebs
-```
-Make sure that this step completes successfully before proceeding to the next step; one quick way to verify this is shown below.
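-
-For example, the following check (a minimal illustration for this guide; it is not part of the upgrade scripts) lists the version label on each CSP belonging to the SPC, all of which should now report 0.8.2:
-
-```
-kubectl get csp -l openebs.io/storage-pool-claim=cstor-sparse-pool \
-  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.openebs\.io/version}{"\n"}{end}'
-```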
- - -### Upgrade cStor Volumes - -Extract the PV name using `kubectl get pv` - -``` -NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE -pvc-1085415d-f84c-11e8-aadf-42010a8000bb 5G RWO Delete Bound default/demo-cstor-sparse-vol1-claim openebs-cstor-sparse 22m -``` - -``` -./cstor_volume_upgrade.sh pvc-1085415d-f84c-11e8-aadf-42010a8000bb openebs -``` diff --git a/k8s/upgrades/0.8.1-0.8.2/cr-patch.tpl.json b/k8s/upgrades/0.8.1-0.8.2/cr-patch.tpl.json deleted file mode 100644 index 2a01f8a4f8..0000000000 --- a/k8s/upgrades/0.8.1-0.8.2/cr-patch.tpl.json +++ /dev/null @@ -1,7 +0,0 @@ -{ - "metadata": { - "labels": { - "openebs.io/version": "@pool_version@" - } - } -} diff --git a/k8s/upgrades/0.8.1-0.8.2/cstor-pool-patch.tpl.json b/k8s/upgrades/0.8.1-0.8.2/cstor-pool-patch.tpl.json deleted file mode 100644 index 752c381704..0000000000 --- a/k8s/upgrades/0.8.1-0.8.2/cstor-pool-patch.tpl.json +++ /dev/null @@ -1,23 +0,0 @@ -{ - "metadata": { - "labels": { - "openebs.io/version": "@pool_version@" - } - }, - "spec": { - "template": { - "spec": { - "containers": [ - { - "name": "cstor-pool", - "image": "quay.io/openebs/cstor-pool:@pool_version@" - }, - { - "name": "cstor-pool-mgmt", - "image": "quay.io/openebs/cstor-pool-mgmt:@pool_version@" - } - ] - } - } - } -} diff --git a/k8s/upgrades/0.8.1-0.8.2/cstor-target-patch.tpl.json b/k8s/upgrades/0.8.1-0.8.2/cstor-target-patch.tpl.json deleted file mode 100644 index 080acb8deb..0000000000 --- a/k8s/upgrades/0.8.1-0.8.2/cstor-target-patch.tpl.json +++ /dev/null @@ -1,27 +0,0 @@ -{ - "metadata": { - "labels": { - "openebs.io/version": "@target_version@" - } - }, - "spec": { - "template": { - "spec": { - "containers": [ - { - "name": "cstor-istgt", - "image": "quay.io/openebs/cstor-istgt:@target_version@" - }, - { - "name": "maya-volume-exporter", - "image": "quay.io/openebs/m-exporter:@target_version@" - }, - { - "name": "cstor-volume-mgmt", - "image": "quay.io/openebs/cstor-volume-mgmt:@target_version@" - } - ] - } - } - } -} diff --git a/k8s/upgrades/0.8.1-0.8.2/cstor-target-svc-patch.tpl.json b/k8s/upgrades/0.8.1-0.8.2/cstor-target-svc-patch.tpl.json deleted file mode 100644 index fab25e0dec..0000000000 --- a/k8s/upgrades/0.8.1-0.8.2/cstor-target-svc-patch.tpl.json +++ /dev/null @@ -1,12 +0,0 @@ -{ - "metadata": { - "annotations": { - "openebs.io/pvc-namespace":"@pvc-namespace@", - "openebs.io/storage-class-ref": "name: @sc_name@\nresourceVersion: @sc_resource_version@\n" - }, - "labels": { - "openebs.io/version": "@target_version@", - "openebs.io/persistent-volume-claim":"@pvc-name@" - } - } -} diff --git a/k8s/upgrades/0.8.1-0.8.2/cstor-volume-patch.tpl.json b/k8s/upgrades/0.8.1-0.8.2/cstor-volume-patch.tpl.json deleted file mode 100644 index c39df1ba91..0000000000 --- a/k8s/upgrades/0.8.1-0.8.2/cstor-volume-patch.tpl.json +++ /dev/null @@ -1,7 +0,0 @@ -{ - "metadata": { - "labels": { - "openebs.io/version": "@target_version@" - } - } -} diff --git a/k8s/upgrades/0.8.1-0.8.2/cstor-volume-replica-patch.tpl.json b/k8s/upgrades/0.8.1-0.8.2/cstor-volume-replica-patch.tpl.json deleted file mode 100644 index 8d84c73f9a..0000000000 --- a/k8s/upgrades/0.8.1-0.8.2/cstor-volume-replica-patch.tpl.json +++ /dev/null @@ -1,8 +0,0 @@ -{ - "metadata": { - "finalizers": [], - "labels": { - "openebs.io/version": "@target_version@" - } - } -} diff --git a/k8s/upgrades/0.8.1-0.8.2/cstor_pool_upgrade.sh b/k8s/upgrades/0.8.1-0.8.2/cstor_pool_upgrade.sh deleted file mode 100755 index a2bc4678be..0000000000 --- 
a/k8s/upgrades/0.8.1-0.8.2/cstor_pool_upgrade.sh +++ /dev/null @@ -1,143 +0,0 @@ -#!/usr/bin/env bash - -########################################################################### -# STEP: Get SPC name and namespace where OpenEBS is deployed as arguments # -# # -# NOTES: Obtain the pool deployments to perform upgrade operation # -########################################################################### - -pool_upgrade_version="0.8.2" -current_version="0.8.1" - -function usage() { - echo - echo "Usage:" - echo - echo "$0 " - echo - echo " Get the SPC name using: kubectl get spc" - echo " Get the namespace where pool pods" - echo " corresponding to SPC are deployed" - exit 1 -} - -##Checking the version of OpenEBS #### -function verify_openebs_version() { - local resource=$1 - local name_res=$2 - local openebs_version=$(kubectl get $resource $name_res -n $ns \ - -o jsonpath="{.metadata.labels.openebs\.io/version}") - - if [[ $openebs_version != $current_version ]] && [[ $openebs_version != $pool_upgrade_version ]]; then - echo "Expected version of $name_res in $resource is $current_version but got $openebs_version";exit 1; - fi - echo $openebs_version -} - -## Starting point -if [ "$#" -ne 2 ]; then - usage -fi - -spc=$1 -ns=$2 - -### Get the csp list which are related to the given spc ### -csp_list=$(kubectl get csp -l openebs.io/storage-pool-claim=$spc \ - -o jsonpath="{range .items[*]}{@.metadata.name}:{end}") -rc=$? -if [ $rc -ne 0 ]; then - echo "Failed to get csp related to spc $spc" - exit 1 -fi - -################################################################ -# STEP: Update patch files with pool upgrade version # -# # -################################################################ - -sed "s/@pool_version@/$pool_upgrade_version/g" cr-patch.tpl.json > cr_patch.json - -echo "Patching the csp resource" -for csp in `echo $csp_list | tr ":" " "`; do - version=$(verify_openebs_version "csp" $csp) - rc=$? - if [ $rc -ne 0 ]; then - exit 1 - elif [ $version == $pool_upgrade_version ]; then - continue - fi - ## Patching the csp resource - kubectl patch csp $csp -p "$(cat cr_patch.json)" --type=merge - rc=$?; if [ $rc -ne 0 ]; then echo "Error occurred while upgrading the csp: $csp Exit Code: $rc"; exit; fi -done - -echo "Patching Pool Deployment with new image" -for csp in `echo $csp_list | tr ":" " "`; do - ## Get the pool deployment corresponding to csp - pool_dep=$(kubectl get deploy -n $ns \ - -l app=cstor-pool,openebs.io/storage-pool-claim=$spc \ - -o jsonpath="{.items[?(@.metadata.labels.openebs\.io/cstor-pool=='$csp')].metadata.name}") - - version=$(verify_openebs_version "deploy" $pool_dep) - rc=$? 
- if [ $rc -ne 0 ]; then - exit 1 - elif [ $version == $pool_upgrade_version ]; then - continue - fi - - ## Get the replica set corresponding to the deployment ## - pool_rs=$(kubectl get rs -n $ns \ - -o jsonpath="{range .items[?(@.metadata.ownerReferences[0].name=='$pool_dep')]}{@.metadata.name}{end}") - echo "$pool_dep -> rs is $pool_rs" - - - ## Modifies the cstor-pool-patch template with the original values ## - sed "s/@pool_version@/$pool_upgrade_version/g" cstor-pool-patch.tpl.json > cstor-pool-patch.json - - ## Patch the deployment file ### - kubectl patch deployment --namespace $ns $pool_dep -p "$(cat cstor-pool-patch.json)" - rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: Failed to patch $pool_dep $rc"; exit; fi - rollout_status=$(kubectl rollout status --namespace $ns deployment/$pool_dep) - rc=$?; if [[ ($rc -ne 0) || !($rollout_status =~ "successfully rolled out") ]]; - then echo "ERROR: Failed to rollout status for $pool_dep error: $rc"; exit; fi - - ## Deleting the old replica set corresponding to deployment - kubectl delete rs $pool_rs --namespace $ns - - ## Cleaning the temporary patch file - rm cstor-pool-patch.json -done - -### Get the sp list which are related to the given spc ### -sp_list=$(kubectl get sp -l openebs.io/cas-type=cstor,openebs.io/storage-pool-claim=$spc \ - -o jsonpath="{range .items[*]}{@.metadata.name}:{end}") -rc=$? -if [ $rc -ne 0 ]; then - echo "Failed to get sp related to spc $spc" - exit 1 -fi - -### Patch sp resource### -echo "Patching the SP resource" -for sp in `echo $sp_list | tr ":" " "`; do - version=$(verify_openebs_version "sp" $sp) - rc=$? - if [ $rc -ne 0 ]; then - exit 1 - elif [ $version == $pool_upgrade_version ]; then - continue - fi - kubectl patch sp $sp -p "$(cat cr_patch.json)" --type=merge - rc=$? - if [ $rc -ne 0 ]; then echo "Error: failed to patch for SP resource $sp Exit Code: $rc"; exit; fi -done - -###Cleaning temporary patch file -rm cr_patch.json - -echo "Successfully upgraded $spc to $pool_upgrade_version" -echo "Running post pool upgrade scripts for $spc..." - -exit 0 diff --git a/k8s/upgrades/0.8.1-0.8.2/cstor_volume_upgrade.sh b/k8s/upgrades/0.8.1-0.8.2/cstor_volume_upgrade.sh deleted file mode 100755 index a1032ea9a8..0000000000 --- a/k8s/upgrades/0.8.1-0.8.2/cstor_volume_upgrade.sh +++ /dev/null @@ -1,193 +0,0 @@ -#!/usr/bin/env bash - -################################################################ -# STEP: Get Persistent Volume (PV) name as argument # -# # -# NOTES: Obtain the pv to upgrade via "kubectl get pv" # -################################################################ -target_upgrade_version="0.8.2" -current_version="0.8.1" - -function usage() { - echo - echo "Usage:" - echo - echo "$0 " - echo - echo " Get the PV name using: kubectl get pv" - echo " Get the namespace where openebs" - echo " pods are installed" - exit 1 -} - -if [ "$#" -ne 2 ]; then - usage -fi - -pv=$1 -ns=$2 - -# Check if pv exists -kubectl get pv $pv &>/dev/null;check_pv=$? 
-if [ $check_pv -ne 0 ]; then
-    echo "$pv not found";exit 1;
-fi
-
-# Check if CASType is cstor
-cas_type=`kubectl get pv $pv -o jsonpath="{.metadata.annotations.openebs\.io/cas-type}"`
-if [ $cas_type != "cstor" ]; then
-    echo "Cstor volume not found";exit 1;
-elif [ $cas_type == "cstor" ]; then
-    echo "$pv is a cstor volume"
-else
-    echo "Volume is neither cstor nor jiva"; exit 1;
-fi
-
-sc_ns=`kubectl get pv $pv -o jsonpath="{.spec.claimRef.namespace}"`
-sc_name=`kubectl get pv $pv -o jsonpath="{.spec.storageClassName}"`
-sc_res_ver=`kubectl get sc $sc_name -n $sc_ns -o jsonpath="{.metadata.resourceVersion}"`
-pvc_name=`kubectl get pv $pv -o jsonpath="{.spec.claimRef.name}"`
-pvc_namespace=`kubectl get pv $pv -o jsonpath="{.spec.claimRef.namespace}"`
-#################################################################
-# STEP: Generate deploy, replicaset and container names from PV #
-#                                                               #
-# NOTES: Ex: If PV="pvc-cec8e86d-0bcc-11e8-be1c-000c298ff5fc",  #
-#                                                               #
-# c-dep: pvc-cec8e86d-0bcc-11e8-be1c-000c298ff5fc-target        #
-#################################################################
-
-c_dep=$(kubectl get deploy -n $ns -l openebs.io/persistent-volume=$pv,openebs.io/target=cstor-target -o jsonpath="{.items[*].metadata.name}")
-c_svc=$(kubectl get svc -n $ns -l openebs.io/persistent-volume=$pv,openebs.io/target-service=cstor-target-svc -o jsonpath="{.items[*].metadata.name}")
-c_vol=$(kubectl get cstorvolumes -l openebs.io/persistent-volume=$pv -n $ns -o jsonpath="{.items[*].metadata.name}")
-c_replicas=$(kubectl get cvr -n openebs -l openebs.io/persistent-volume=$pv -o jsonpath="{range .items[*]}{@.metadata.name};{end}" | tr ";" "\n")
-
-# Fetch the older target and replica - ReplicaSet objects which need to be
-# deleted before upgrading. If not deleted, the new pods will be stuck in
-# creating state - due to affinity rules.
-
-c_rs=$(kubectl get rs -n $ns -o name -l openebs.io/persistent-volume=$pv | cut -d '/' -f 2)
-
-
-# Check if openebs resources exist and provisioned version is 0.8
-
-if [[ -z $c_rs ]]; then
-    echo "Target Replica set not found"; exit 1;
-fi
-
-if [[ -z $c_dep ]]; then
-    echo "Target deployment not found"; exit 1;
-fi
-
-if [[ -z $c_svc ]]; then
-    echo "Target svc not found";exit 1;
-fi
-
-if [[ -z $c_vol ]]; then
-    echo "CstorVolumes CR not found"; exit 1;
-fi
-
-if [[ -z $c_replicas ]]; then
-    echo "Cstor Volume Replica CR not found"; exit 1;
-fi
-
-controller_version=`kubectl get deployment $c_dep -n $ns -o jsonpath='{.metadata.labels.openebs\.io/version}'`
-if [[ "$controller_version" != "$current_version" ]] && [[ "$controller_version" != "$target_upgrade_version" ]] ; then
-    echo "Current cstor target deployment $c_dep version is not $current_version or $target_upgrade_version";exit 1;
-fi
-
-controller_service_version=`kubectl get svc $c_svc -n $ns -o jsonpath='{.metadata.labels.openebs\.io/version}'`
-if [[ "$controller_service_version" != "$current_version" ]] && [[ "$controller_service_version" != "$target_upgrade_version" ]]; then
-    echo "Current cstor target service $c_svc version is not $current_version or $target_upgrade_version";exit 1;
-fi
-
-cstor_volume_version=`kubectl get cstorvolumes $c_vol -n $ns -o jsonpath='{.metadata.labels.openebs\.io/version}'`
-if [[ "$cstor_volume_version" != "$current_version" ]] && [[ "$cstor_volume_version" != "$target_upgrade_version" ]]; then
-    echo "Current cstor volume $c_vol version is not $current_version or $target_upgrade_version";exit 1;
-fi
-
-for replica in $c_replicas
-do
-    replica_version=`kubectl get cvr $replica -n openebs -o jsonpath='{.metadata.labels.openebs\.io/version}'`
-    if [[ "$replica_version" != "$current_version" ]] && [[ "$replica_version" != "$target_upgrade_version" ]]; then
-        echo "CStor volume replica $replica version is not $current_version or $target_upgrade_version"; exit 1;
-    fi
-done
-
-
-################################################################
-# STEP: Update patch files with appropriate resource names     #
-#                                                              #
-# NOTES: Placeholders for resource names in the patch files    #
-# are replaced with respective values derived from the PV in   #
-# the previous step                                            #
-################################################################
-
-sed "s/@target_version@/$target_upgrade_version/g" cstor-target-patch.tpl.json > cstor-target-patch.json
-sed "s/@sc_name@/$sc_name/g" cstor-target-svc-patch.tpl.json | sed "s/@sc_resource_version@/$sc_res_ver/g" | sed "s/@target_version@/$target_upgrade_version/g" | sed "s/@pvc-name@/$pvc_name/g" | sed "s/@pvc-namespace@/$pvc_namespace/g" > cstor-target-svc-patch.json
-sed "s/@target_version@/$target_upgrade_version/g" cstor-volume-patch.tpl.json > cstor-volume-patch.json
-sed "s/@target_version@/$target_upgrade_version/g" cstor-volume-replica-patch.tpl.json > cstor-volume-replica-patch.json
-
-#################################################################################
-# STEP: Patch OpenEBS volume deployments (cstor-target, cstor-svc)              #
-#################################################################################
-
-
-# #### PATCH TARGET DEPLOYMENT ####
-
-if [[ "$controller_version" != "$target_upgrade_version" ]]; then
-    echo "Upgrading Target Deployment to $target_upgrade_version"
-
-    kubectl patch deployment --namespace $ns $c_dep -p "$(cat cstor-target-patch.json)"
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch cstor target deployment $c_dep | Exit code: $rc"; exit; fi
-
-    kubectl delete rs $c_rs --namespace $ns
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to delete cstor replica set $c_rs | Exit code: $rc"; exit; fi
-
-    rollout_status=$(kubectl rollout status --namespace $ns deployment/$c_dep)
-    rc=$?; if [[ ($rc -ne 0) || !($rollout_status =~ "successfully rolled out") ]];
-    then echo "Failed to rollout for deployment $c_dep | Exit code: $rc"; exit; fi
-else
-    echo "Target deployment $c_dep is already at $target_upgrade_version"
-fi
-
-# #### PATCH TARGET SERVICE ####
-if [[ "$controller_service_version" != "$target_upgrade_version" ]]; then
-    echo "Upgrading Target Service to $target_upgrade_version"
-    kubectl patch service --namespace $ns $c_svc -p "$(cat cstor-target-svc-patch.json)"
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch service $c_svc | Exit code: $rc"; exit; fi
-else
-    echo "Target service $c_svc is already at $target_upgrade_version"
-fi
-
-# #### PATCH CSTOR Volume CR ####
-if [[ "$cstor_volume_version" != "$target_upgrade_version" ]]; then
-    echo "Upgrading cstor volume CR to $target_upgrade_version"
-    kubectl patch cstorvolume --namespace $ns $c_vol -p "$(cat cstor-volume-patch.json)" --type=merge
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch cstor volumes CR $c_vol | Exit code: $rc"; exit; fi
-else
-    echo "CStor volume CR $c_vol is already at $target_upgrade_version"
-fi
-
-# #### PATCH CSTOR Volume Replica CR ####
-
-for replica in $c_replicas
-do
-    if [[ "`kubectl get cvr $replica -n openebs -o jsonpath='{.metadata.labels.openebs\.io/version}'`" != "$target_upgrade_version" ]]; then
-        echo "Upgrading cstor volume replica $replica to $target_upgrade_version"
-        kubectl patch cvr $replica --namespace openebs -p "$(cat cstor-volume-replica-patch.json)" --type=merge
-        rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch CstorVolumeReplica $replica | Exit code: $rc"; exit; fi
-        echo "Successfully updated replica: $replica"
-    else
-        echo "cstor replica $replica is already at $target_upgrade_version"
-    fi
-done
-
-echo "Clearing temporary files"
-rm cstor-target-patch.json
-rm cstor-target-svc-patch.json
-rm cstor-volume-patch.json
-rm cstor-volume-replica-patch.json
-
-echo "Successfully upgraded $pv to $target_upgrade_version. Please run your application checks."
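-
-# Optional post-upgrade sanity check (illustrative addition, not part of the
-# original script): re-read the version label on every CVR and warn if any of
-# them still reports a version other than the target.
-for replica in $c_replicas
-do
-    v=`kubectl get cvr $replica -n openebs -o jsonpath='{.metadata.labels.openebs\.io/version}'`
-    if [ "$v" != "$target_upgrade_version" ]; then
-        echo "WARNING: $replica still reports version $v"
-    fi
-done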
-exit 0 - diff --git a/k8s/upgrades/0.8.1-0.8.2/jiva-replica-patch.tpl.json b/k8s/upgrades/0.8.1-0.8.2/jiva-replica-patch.tpl.json deleted file mode 100644 index 7260b47221..0000000000 --- a/k8s/upgrades/0.8.1-0.8.2/jiva-replica-patch.tpl.json +++ /dev/null @@ -1,19 +0,0 @@ -{ - "metadata": { - "labels": { - "openebs.io/version": "@target_version@" - } - }, - "spec": { - "template": { - "spec": { - "containers": [ - { - "name": "@r_name@", - "image": "quay.io/openebs/jiva:@target_version@" - } - ] - } - } - } -} diff --git a/k8s/upgrades/0.8.1-0.8.2/jiva-target-patch.tpl.json b/k8s/upgrades/0.8.1-0.8.2/jiva-target-patch.tpl.json deleted file mode 100644 index cc92bb8210..0000000000 --- a/k8s/upgrades/0.8.1-0.8.2/jiva-target-patch.tpl.json +++ /dev/null @@ -1,23 +0,0 @@ -{ - "metadata": { - "labels": { - "openebs.io/version": "@target_version@" - } - }, - "spec": { - "template": { - "spec": { - "containers": [ - { - "name": "@c_name@", - "image": "quay.io/openebs/jiva:@target_version@" - }, - { - "name": "maya-volume-exporter", - "image": "quay.io/openebs/m-exporter:@target_version@" - } - ] - } - } - } -} diff --git a/k8s/upgrades/0.8.1-0.8.2/jiva-target-svc-patch.tpl.json b/k8s/upgrades/0.8.1-0.8.2/jiva-target-svc-patch.tpl.json deleted file mode 100644 index c39df1ba91..0000000000 --- a/k8s/upgrades/0.8.1-0.8.2/jiva-target-svc-patch.tpl.json +++ /dev/null @@ -1,7 +0,0 @@ -{ - "metadata": { - "labels": { - "openebs.io/version": "@target_version@" - } - } -} diff --git a/k8s/upgrades/0.8.1-0.8.2/jiva_volume_upgrade.sh b/k8s/upgrades/0.8.1-0.8.2/jiva_volume_upgrade.sh deleted file mode 100755 index c754d62bcf..0000000000 --- a/k8s/upgrades/0.8.1-0.8.2/jiva_volume_upgrade.sh +++ /dev/null @@ -1,201 +0,0 @@ -#!/usr/bin/env bash -################################################################ -# STEP: Get Persistent Volume (PV) name as argument # -# # -# NOTES: Obtain the pv to upgrade via "kubectl get pv" # -################################################################ - -target_upgrade_version="0.8.2" -current_version="0.8.1" - -function usage() { - echo - echo "Usage:" - echo - echo "$0 " - echo - echo " Get the PV name using: kubectl get pv" - exit 1 -} - -if [ "$#" -ne 1 ]; then - usage -fi - -pv=$1 -replica_node_label="openebs-jiva" - -# Check if pv exists -kubectl get pv $pv &>/dev/null;check_pv=$? 
-if [ $check_pv -ne 0 ]; then
-    echo "$pv not found";exit 1;
-fi
-
-# Check if CASType is jiva
-cas_type=`kubectl get pv $pv -o jsonpath="{.metadata.annotations.openebs\.io/cas-type}"`
-if [ $cas_type != "jiva" ]; then
-    echo "Jiva volume not found";exit 1;
-elif [ $cas_type == "jiva" ]; then
-    echo "$pv is a jiva volume"
-else
-    echo "Volume is neither jiva nor cstor";exit 1;
-fi
-
-ns=`kubectl get pv $pv -o jsonpath="{.spec.claimRef.namespace}"`
-sc_name=`kubectl get pv $pv -o jsonpath="{.spec.storageClassName}"`
-sc_res_ver=`kubectl get sc $sc_name -n $ns -o jsonpath="{.metadata.resourceVersion}"`
-
-#################################################################
-# STEP: Generate deploy, replicaset and container names from PV #
-#                                                               #
-# NOTES: Ex: If PV="pvc-cec8e86d-0bcc-11e8-be1c-000c298ff5fc"   #
-#                                                               #
-# ctrl-dep: pvc-cec8e86d-0bcc-11e8-be1c-000c298ff5fc-ctrl       #
-#################################################################
-
-c_dep=$(kubectl get deploy -n $ns -l openebs.io/persistent-volume=$pv,openebs.io/controller=jiva-controller -o jsonpath="{.items[*].metadata.name}")
-r_dep=$(kubectl get deploy -n $ns -l openebs.io/persistent-volume=$pv,openebs.io/replica=jiva-replica -o jsonpath="{.items[*].metadata.name}")
-c_svc=$(kubectl get svc -n $ns -l openebs.io/persistent-volume=$pv -o jsonpath="{.items[*].metadata.name}")
-c_name=$(kubectl get deploy -n $ns $c_dep -o jsonpath="{range .spec.template.spec.containers[*]}{@.name}{'\n'}{end}" | grep "ctrl-con")
-r_name=$(kubectl get deploy -n $ns $r_dep -o jsonpath="{range .spec.template.spec.containers[*]}{@.name}{'\n'}{end}" | grep "rep-con")
-
-# Fetch the older target and replica - ReplicaSet objects which need to be
-# deleted before upgrading. If not deleted, the new pods will be stuck in
-# creating state - due to affinity rules.
- -c_rs=$(kubectl get rs -o name --namespace $ns -l openebs.io/persistent-volume=$pv,openebs.io/controller=jiva-controller | cut -d '/' -f 2) -r_rs=$(kubectl get rs -o name --namespace $ns -l openebs.io/persistent-volume=$pv,openebs.io/replica=jiva-replica | cut -d '/' -f 2) - -################################################################ -# STEP: Update patch files with appropriate resource names # -# # -# NOTES: Placeholder for resourcename in the patch files are # -# replaced with respective values derived from the PV in the # -# previous step # -################################################################ - -# Check if openebs resources exist and provisioned version is 0.8 - -if [[ -z $c_rs ]]; then - echo "Target Replica set not found"; exit 1; -fi - -if [[ -z $r_rs ]]; then - echo "Replica Replica set not found"; exit 1; -fi - -if [[ -z $c_dep ]]; then - echo "Target deployment not found"; exit 1; -fi - -if [[ -z $r_dep ]]; then - echo "Replica deployment not found"; exit 1; -fi - -if [[ -z $c_svc ]]; then - echo "Target service not found"; exit 1; -fi - -if [[ -z $r_name ]]; then - echo "Replica container not found"; exit 1; -fi - -if [[ -z $c_name ]]; then - echo "Target container not found"; exit 1; -fi - -controller_version=`kubectl get deployment $c_dep -n $ns -o jsonpath='{.metadata.labels.openebs\.io/version}'` -if [[ "$controller_version" != "$current_version" ]] && [[ "$controller_version" != "$target_upgrade_version" ]]; then - echo "Current Target deployment $c_dep version is not $current_version or $target_upgrade_version";exit 1; -fi -replica_version=`kubectl get deployment $r_dep -n $ns -o jsonpath='{.metadata.labels.openebs\.io/version}'` -if [[ "$replica_version" != "$current_version" ]] && [[ "$replica_version" != "$target_upgrade_version" ]]; then - echo "Current Replica deployment $r_dep version is not $current_version or $target_upgrade_version";exit 1; -fi - -controller_svc_version=`kubectl get svc $c_svc -n $ns -o jsonpath='{.metadata.labels.openebs\.io/version}'` -if [[ "$controller_svc_version" != $current_version ]] && [[ "$controller_svc_version" != "$target_upgrade_version" ]] ; then - echo "Current Target service $c_svc version is not $current_version or $target_upgrade_version";exit 1; -fi - -# Get the number of replicas configured. 
-# This field is currently not used, but can add additional validations -# based on the nodes and expected number of replicas -rep_count=`kubectl get deploy $r_dep --namespace $ns -o jsonpath="{.spec.replicas}"` - -# Get the list of nodes where replica pods are running, delimited by ':' -rep_nodenames=`kubectl get pods -n $ns \ - -l "openebs.io/persistent-volume=$pv" -l "openebs.io/replica=jiva-replica" \ - -o jsonpath="{range .items[*]}{@.spec.nodeName}:{end}"` - -echo "Checking if the node with replica pod has been labeled with $replica_node_label" -for rep_node in `echo $rep_nodenames | tr ":" " "`; do - nl="";nl=`kubectl get nodes $rep_node -o jsonpath="{.metadata.labels.openebs-pv-$pv}"` - if [ -z "$nl" ]; - then - echo "Labeling $rep_node"; - kubectl label node $rep_node "openebs-pv-${pv}=$replica_node_label" - fi -done - - -sed -u "s/@r_name@/$r_name/g" jiva-replica-patch.tpl.json | sed -u "s/@target_version@/$target_upgrade_version/g" > jiva-replica-patch.json -sed -u "s/@c_name@/$c_name/g" jiva-target-patch.tpl.json | sed -u "s/@target_version@/$target_upgrade_version/g" > jiva-target-patch.json -sed -u "s/@target_version@/$target_upgrade_version/g" jiva-target-svc-patch.tpl.json > jiva-target-svc-patch.json - -################################################################################# -# STEP: Patch OpenEBS volume deployments (jiva-target, jiva-replica & jiva-svc) # -################################################################################# - -# PATCH JIVA REPLICA DEPLOYMENT #### -if [[ "$replica_version" != "$target_upgrade_version" ]]; then - echo "Upgrading Replica Deployment to $target_upgrade_version" - - kubectl patch deployment --namespace $ns $r_dep -p "$(cat jiva-replica-patch.json)" - rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch the deployment $r_dep | Exit code: $rc"; exit; fi - - kubectl delete rs $r_rs --namespace $ns - rc=$?; if [ $rc -ne 0 ]; then echo "Failed to delete ReplicaSet $r_rs | Exit code: $rc"; exit; fi - - rollout_status=$(kubectl rollout status --namespace $ns deployment/$r_dep) - rc=$?; if [[ ($rc -ne 0) || !($rollout_status =~ "successfully rolled out") ]]; - then echo " RollOut for $r_dep failed | Exit code: $rc"; exit; fi -else - echo "Replica Deployment $r_dep is already at $target_upgrade_version" -fi - -# #### PATCH TARGET DEPLOYMENT #### -if [[ "$controller_version" != "$target_upgrade_version" ]]; then - echo "Upgrading Target Deployment to $target_upgrade_version" - - kubectl patch deployment --namespace $ns $c_dep -p "$(cat jiva-target-patch.json)" - rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch deployment $c_dep | Exit code: $rc"; exit; fi - - kubectl delete rs $c_rs --namespace $ns - rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch deployment $c_rs | Exit code: $rc"; exit; fi - - rollout_status=$(kubectl rollout status --namespace $ns deployment/$c_dep) - rc=$?; if [[ ($rc -ne 0) || !($rollout_status =~ "successfully rolled out") ]]; - then echo " Failed to patch the deployment | Exit code: $rc"; exit; fi -else - echo "Controller Deployment $c_dep is already at $target_upgrade_version" - -fi - -# #### PATCH TARGET SERVICE #### -if [[ "$controller_svc_version" != "$target_upgrade_version" ]]; then - echo "Upgrading Target Service to $target_upgrade_version" - kubectl patch service --namespace $ns $c_svc -p "$(cat jiva-target-svc-patch.json)" - rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch the service $svc | Exit code: $rc"; exit; fi -else - echo "Controller service $c_svc is already at 
$target_upgrade_version" -fi - -echo "Clearing temporary files" -rm jiva-replica-patch.json -rm jiva-target-patch.json -rm jiva-target-svc-patch.json - -echo "Successfully upgraded $pv to $target_upgrade_version Please run your application checks." -exit 0 - diff --git a/k8s/upgrades/0.8.1-0.8.2/patch-strategy-recreate.json b/k8s/upgrades/0.8.1-0.8.2/patch-strategy-recreate.json deleted file mode 100644 index 8c6c5c60af..0000000000 --- a/k8s/upgrades/0.8.1-0.8.2/patch-strategy-recreate.json +++ /dev/null @@ -1,4 +0,0 @@ -[ - { "op": "remove", "path": "/spec/strategy/rollingUpdate" }, - { "op": "replace", "path": "/spec/strategy/type", "value": "Recreate" } -] diff --git a/k8s/upgrades/0.8.1-0.8.2/pre-check.sh b/k8s/upgrades/0.8.1-0.8.2/pre-check.sh deleted file mode 100755 index b4dbadc710..0000000000 --- a/k8s/upgrades/0.8.1-0.8.2/pre-check.sh +++ /dev/null @@ -1,71 +0,0 @@ -#!/usr/bin/env bash -##################################################################### -# NOTES: This script finds unlabeled volume resources of openebs # -##################################################################### - -# Search of CStor Resources -printf "############## Unlabeled CStor Volumes Resources ##############\n\n" - -printf "CStor Volumes:\n" -echo "--------------" -printf "\n" -# Search for CStor Volumes -kubectl get cstorvolume --all-namespaces -l 'openebs.io/version notin (0.8.2), openebs.io/version notin (0.8.1)' - -printf "\nCStor Volumes Replicas:\n" -echo "-----------------------" -printf "\n" -# Search for CStor Volume Replicas -kubectl get cstorvolumereplicas --all-namespaces -l 'openebs.io/version notin (0.8.2), openebs.io/version notin (0.8.1)' - -printf "\nCStor Target service:\n" -echo "---------------------" -printf "\n" -# Search for CStor Target Service -kubectl get service --all-namespaces -l 'openebs.io/version notin (0.8.2), openebs.io/version notin (0.8.1), openebs.io/target-service in (cstor-target-svc)' - -printf "\nCStor Target Deployment:\n" -echo "---------------------" -printf "\n" -# Search for CStor Target Deployment -kubectl get deployment --all-namespaces -l 'openebs.io/version notin (0.8.2), openebs.io/version notin (0.8.1), openebs.io/target in (cstor-target)' - - -printf "\n\n############## unlabeled Jiva Volumes Resources ##############\n\n" - -printf "\nJiva Controller service:\n" -echo "------------------------" -printf "\n" -# Search for Jiva Controller Services -kubectl get service --all-namespaces -l 'openebs.io/version notin (0.8.2), openebs.io/version notin (0.8.1), openebs.io/controller-service in (jiva-controller-svc)' - -printf "\nJiva Replica Deployment:\n" -echo "------------------------" -printf "\n" -# Search for Jiva Replica Deployment -kubectl get deployment --all-namespaces -l 'openebs.io/version notin (0.8.2), openebs.io/version notin (0.8.1), openebs.io/replica in (jiva-replica)' - -printf "\nJiva Controller Deployment:\n" -echo "------------------------" -printf "\n" -# Search for Jiva Controller Deployment -kubectl get deployment --all-namespaces -l 'openebs.io/version notin (0.8.2), openebs.io/version notin (0.8.1), openebs.io/controller in (jiva-controller)' - -printf "\n\n############## Storage Pool Resources ##############\n\n" - -printf "\nCStor Pool:\n" -echo "-----------" -printf "\n" -kubectl get csp -l 'openebs.io/version notin (0.8.2), openebs.io/version notin (0.8.1)' - -printf "\nCStor Pool Deployments:\n" -echo "-----------------------" -printf "\n" -kubectl get deployment --all-namespaces -l 'openebs.io/version notin (0.8.2), 
openebs.io/version notin (0.8.1), app in (cstor-pool)'
-
-printf "\nStorage Pool:\n"
-echo "------------"
-printf "\n"
-kubectl get sp -l 'openebs.io/version notin (0.8.2), openebs.io/version notin (0.8.1), openebs.io/cas-type in (cstor)'
-
-printf "Note: The unlabeled resources can be tagged with the correct version of openebs using labeltagger.sh.\n Example: ./labeltagger.sh 0.8.1"
diff --git a/k8s/upgrades/0.8.2-0.9.0/README.md b/k8s/upgrades/0.8.2-0.9.0/README.md
deleted file mode 100644
index 13330ee34c..0000000000
--- a/k8s/upgrades/0.8.2-0.9.0/README.md
+++ /dev/null
@@ -1,139 +0,0 @@
-# UPGRADE FROM OPENEBS 0.8.2 TO 0.9.0
-
-## Overview
-
-This document describes the steps for upgrading OpenEBS from 0.8.2 to 0.9.0
-
-The upgrade of OpenEBS is a three step process:
-- *Step 1* - Check that all volumes are running at version 0.8.2
-- *Step 2* - Upgrade the OpenEBS Operator
-- *Step 3* - Upgrade the OpenEBS Pools and Volumes from previous versions (0.8.2)
-
-#### Note: It is mandatory to make sure that all volumes are running at version 0.8.2 before the upgrade.
-
-### Terminology
-- *OpenEBS Operator : Refers to maya-apiserver, admission-server & openebs-provisioner along w/ respective services, service a/c, roles, rolebindings*
-- *OpenEBS Volume: Storage Engine pods like cStor or Jiva controller(aka target) & replica pods*
-
-## Prerequisites
-
-*All steps described in this document need to be performed on the Kubernetes master or from a machine that has access to Kubernetes master*
-
-### Download the upgrade yamls
-
-The easiest way to get all the upgrade yamls is via git clone.
-
-```
-mkdir upgrade-openebs
-cd upgrade-openebs
-git clone https://github.com/openebs/openebs.git
-cd openebs/k8s/upgrades/0.8.2-0.9.0/
-```
-
-## Step 1: Checking the OpenEBS current version.
-
-#### Please make sure that the current OpenEBS version is 0.8.2 before proceeding to step 2.
-
-## Step 2: Upgrade the OpenEBS Operator
-
-### Upgrading OpenEBS Operator CRDs and Deployments
-
-The upgrade steps vary depending on the way OpenEBS was installed. Select one of the following based on your installation:
-
-#### Install/Upgrade using kubectl (using openebs-operator.yaml )
-
-**The sample steps below will work if you have installed openebs without modifying the default values in openebs-operator.yaml. If you have customized it for your cluster, you have to download the 0.9.0 openebs-operator.yaml and customize it again**
-
-```
-#Upgrade to 0.9.0 OpenEBS Operator
-kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.9.0.yaml
-```
-
-#### Install/Upgrade using helm chart (using stable/openebs, openebs-charts repo, etc.,)
-
-**The sample steps below will work if you have installed openebs with the default values provided by the stable/openebs helm chart.**
-
-Before upgrading using helm, please review the default values available with the latest stable/openebs chart (https://raw.githubusercontent.com/helm/charts/master/stable/openebs/values.yaml).
-
-- If the default values seem appropriate, you can use the below commands to update OpenEBS. [More](https://hub.helm.sh/charts/stable/openebs) details about the specific chart version.
-  ```sh
-  $ helm upgrade --reset-values stable/openebs --version 0.9.2
-  ```
-- If not, customize the values into your copy (say custom-values.yaml) by copying the content from the above default yamls and editing the values to suit your environment. You can upgrade using your custom values using:
-  ```sh
-  $ helm upgrade stable/openebs --version 0.9.2 -f custom-values.yaml
-  ```
-
-#### Using customized operator YAML or helm chart.
-As a first step, you must update your custom helm chart or YAML with 0.9.0 release tags and the changes made in the values/templates.
-
-You can use the following as references to know about the changes in 0.9.0:
-- openebs-charts [PR####](https://github.com/openebs/openebs/pull/2566) as reference.
-
-After updating the YAML or helm chart or helm chart values, you can use the above procedures to upgrade the OpenEBS Operator.
-
-## Step 3: Upgrade the OpenEBS Pools and Volumes
-
-Even after the OpenEBS Operator has been upgraded to 0.9.0, the cStor Storage Pools and volumes (both jiva and cStor) will continue to work with older versions. Use the following steps in the same order to upgrade cStor Pools and volumes.
-
-*Note: Upgrade functionality is still under active development. It is highly recommended to schedule a downtime for the application using the OpenEBS PV while performing this upgrade. Also, make sure you have taken a backup of the data before starting the below upgrade procedure.*
-
-Limitations:
-- these are preliminary jobs (done via CASTemplate) only intended for use on volumes where data has been backed up.
-- please have the following link handy in case the volume gets into read-only during upgrade - https://docs.openebs.io/docs/next/troubleshooting.html#recovery-readonly-when-kubelet-is-container
-- an automatic rollback option is not provided. To roll back, you need to update the controller, exporter and replica pod images to the previous version
-- in the process of running the below steps, if you run into issues, you can always reach us on slack
-
-
-# OpenEBS upgrade via CASTemplates from 0.8.2 to 0.9.0
-**NOTE: Upgrade via these CAS Templates is only supported for OpenEBS in version 0.8.2. Trying to upgrade an OpenEBS version other than 0.8.2 to 0.9.0 using these CAS templates can result in undesired behaviours. If you are running any OpenEBS version lower than 0.8.2, first upgrade it to 0.8.2; these CAS templates can then be used safely for the 0.9.0 upgrade.**
-
-## Upgrade Jiva based volumes
-
-Make sure your current directory is openebs/k8s/upgrades/0.8.2-0.9.0/
-
-### Steps before upgrade:
-  - Make sure that all pods related to the volume are in running state.
-  - Apply rbac.yaml to manage permission rules `kubectl apply -f rbac.yaml`
-  - cd jiva
-  - Apply cr.yaml which installs a custom resource definition for the UpgradeResult custom resource. This custom resource is used to capture upgrade related information for success or failure case.
-
-### Steps For Jiva volume upgrade:
-
-  - Apply jiva_upgrade_runtask.yaml using `kubectl apply`
-  - Edit volume-upgrade-job.yaml and add the PV names which need to be upgraded.
-  - After editing volume-upgrade-job.yaml, save it and apply.
-  - Logs can be seen from the pod which is launched by the upgrade job. Do a `kubectl get pod` to find the upgrade job pod and `kubectl logs` command to see the logs.
-  - `kubectl get upgraderesult -o yaml` can be done to check the status of the upgrade of each item.
-
-## Upgrade cStor based volumes
-
-Make sure your current directory is openebs/k8s/upgrades/0.8.2-0.9.0/
-
-### Steps before upgrade:
-  - Make sure that all pods related to pool and volume are in running state.
-  - If cstor volumes are resized manually then make sure that the PV is patched with the latest size.
-  - Apply rbac.yaml to manage permission rules `kubectl apply -f rbac.yaml`
-  - cd cstor
-  - Apply cr.yaml which installs a custom resource definition for the UpgradeResult custom resource.
This custom resource is used to capture upgrade related information for success or failure case. - -### Steps For cStor pool upgrade: - - - Apply cstor-pool-update-082-090.yaml - - Edit pool-upgrade-job.yaml and add the cstorpool resource names which need to be upgraded. - - After editing pool-upgrade-job.yaml, save it and apply. - - Logs can be seen from the pod which is launched by upgrade job. Do a `kubectl get pod` to find the upgrade job pod and `kubectl logs` command to see the logs. - - `kubectl get upgraderesult -o yaml` can be done to check the status of upgrade of each item. - -### Steps For cStor volume upgrade: - - - Apply cstor-volume-update-082-090.yaml - - Edit volume-upgrade-job.yaml and add the cstorvolume resource names which need to be upgraded. - - After editing volume-upgrade-job.yaml, save it and apply. - - Logs can be seen from the pod which is launched by upgrade job. Do a `kubectl get pod` to find the upgrade job pod and `kubectl logs` command to see the logs. - - `kubectl get upgraderesult -o yaml` can be done to check the status of upgrade of each item. - -## Post upgrade steps: - - - Delete ServiceAccount, ClusterRole and ClusterRoleBindings that are created for upgrade using -`kubectl delete -f rbac.yaml` from openebs/k8s/upgrades/0.8.2-0.9.0/ directory. diff --git a/k8s/upgrades/0.8.2-0.9.0/cstor/cr.yaml b/k8s/upgrades/0.8.2-0.9.0/cstor/cr.yaml deleted file mode 100644 index 5136df48e3..0000000000 --- a/k8s/upgrades/0.8.2-0.9.0/cstor/cr.yaml +++ /dev/null @@ -1,14 +0,0 @@ -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: upgraderesults.openebs.io -spec: - group: openebs.io - names: - kind: UpgradeResult - plural: upgraderesults - shortNames: - - uresult - singular: upgraderesult - scope: Namespaced - version: v1alpha1 diff --git a/k8s/upgrades/0.8.2-0.9.0/cstor/cstor-pool-update-082-090.yaml b/k8s/upgrades/0.8.2-0.9.0/cstor/cstor-pool-update-082-090.yaml deleted file mode 100644 index 8f06be480d..0000000000 --- a/k8s/upgrades/0.8.2-0.9.0/cstor/cstor-pool-update-082-090.yaml +++ /dev/null @@ -1,871 +0,0 @@ -# CASTemplate cstor-pool-update-082-090 is -# used to upgrade single cstor pool -apiVersion: openebs.io/v1alpha1 -kind: CASTemplate -metadata: - name: cstor-pool-update-082-090 -spec: - defaultConfig: - - name: baseVersion - value: "0.8.2" - - name: targetVersion - value: "0.9.0" - - name: cstorPoolImageTag - value: "0.9.0" - - name: cstorPoolMgmtImageTag - value: "0.9.0" - - name: mExporterImageTag - value: "0.9.0" - - name: successStatus - value: "Success" - - name: failStatus - value: "Fail" - run: - tasks: - # upgrade-cstor-pool-082-090-get-cstorpool fetches the details of - # CStorPool CR. These details are used in other runtask(s) later. - # This should always run. - - upgrade-cstor-pool-082-090-get-cstorpool - - # This runtask gets the storagepoolclaim details - - upgrade-cstor-pool-082-090-get-storagepoolclaim - - # This runtask gets the cstorpool deployment and verifies it. - - upgrade-cstor-pool-082-090-get-deployment - - # This runtask checks whether the pool pod is in running state if not fails - # the upgrade - #- upgrade-cstor-pool-082-090-pre-check-pool-pod-phase - - # This runtask gets the storagepool custom resource and verifies it. - - upgrade-cstor-pool-082-090-get-storagepool - - # Runtask #4 - # This runtask puts result of resource into upgraderesult custom resource - # after getting info via above 3 runtasks. 
-    - upgrade-cstor-pool-082-090-patch-upgrade-result
-
-    # Runtask #5
-    # This runtask patches pool deployment with target version.
-    - upgrade-cstor-pool-082-090-patch-deployment-image
-
-    # Runtask #6
-    # This runtask verifies whether the deployment has rolled out successfully or not
-    # after the patch.
-    - upgrade-cstor-pool-082-090-patch-deployment-image-status
-
-    # Runtask #7
-    # This runtask checks whether pool containers are in target version or not.
-    - upgrade-cstor-pool-082-090-post-check-patch-deployment-image
-
-    # Runtask #8
-    # This runtask patches the storagepool custom resource with the
-    # target version label.
-    - upgrade-cstor-pool-082-090-patch-sp-version
-
-    # Runtask #9
-    # This runtask checks whether the target version label patch
-    # for storagepool is successful or not.
-    - upgrade-cstor-pool-082-090-patch-sp-version-post-check
-
-    # Runtask #10
-    # This runtask patches the cstorpool custom resource with the
-    # target version label.
-    - upgrade-cstor-pool-082-090-patch-csp-version
-
-    # Runtask #11
-    # This runtask checks whether the target version label patch
-    # for cstorpool is successful or not.
-    - upgrade-cstor-pool-082-090-patch-csp-version-post-check
-
-    # Runtask #12
-    # This runtask patches the cstorpool deployment with the
-    # target version label.
-    - upgrade-cstor-pool-082-090-patch-deployment-version
-
-    # Runtask #13
-    # This runtask checks whether the target version label patch
-    # for cstorpool deployment is successful or not.
-    - upgrade-cstor-pool-082-090-patch-deployment-version-post-check
-
-    # Runtask #14
-    # This runtask lists all the replicasets of the cstorpool deployment.
-    - upgrade-cstor-pool-082-090-list-replicaset
-
-    # Runtask #15
-    # This runtask lists the current running pod of the cstorpool
-    # deployment and helps figure out the stale replicasets.
-    - upgrade-cstor-pool-082-090-list-pod
-
-    # Runtask #16
-    # This runtask deletes the stale replicaset of the cstorpool
-    # deployment.
- - upgrade-cstor-pool-082-090-delete-replicaset - taskNamespace: default ---- -# Runtask #1 -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-pool-082-090-get-cstorpool - namespace: default -spec: - meta: | - id: CStorPool - apiVersion: openebs.io/v1alpha1 - kind: CStorPool - action: get - objectName: {{ .UpgradeItem.name }} - post: | - {{- jsonpath .JsonResult "{.metadata.uid}" | trim | saveAs "CStorPool.uid" .TaskResult | noop -}} - {{- .TaskResult.CStorPool.uid | notFoundErr "csp uid not found" | saveIf "CStorPool.notFoundErr" .TaskResult | noop -}} - - {{- jsonpath .JsonResult "{.metadata.labels.openebs\\.io/version}" | trim | saveAs "CStorPool.version" .TaskResult | noop -}} - - {{- jsonpath .JsonResult "{.metadata.labels.openebs\\.io/storage-pool-claim}" | trim | saveAs "CStorPool.spcName" .TaskResult | noop -}} - {{- .TaskResult.CStorPool.spcName | notFoundErr "spc name not found" | saveIf "CStorPool.notFoundErr" .TaskResult | noop -}} - - {{- $message :="" }} - {{- $status :="" }} - {{- $successMessage := printf "Successfully got details of CStorPool {%s}" .UpgradeItem.name -}} - {{- $errMessageInvalidVersion := printf "CStorPool {%s}, version is not in {%s}" .UpgradeItem.name .Config.baseVersion.value -}} - - {{- $isBaseVersion := eq .TaskResult.CStorPool.version .Config.baseVersion.value }} - {{- $isTargetVersion := eq .TaskResult.CStorPool.version .Config.targetVersion.value }} - {{- if or $isBaseVersion $isTargetVersion -}} - {{- $status =.Config.successStatus.value }} - {{- $message = $successMessage -}} - - {{- else }} - {{- $status =.Config.failStatus.value }} - {{- $message = $errMessageInvalidVersion -}} - {{- end }} - - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-pool-082-090-get-cstorpool" -}} - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} - - {{- if eq $status .Config.failStatus.value }} - {{- verifyErr $errMessageInvalidVersion true | saveAs "CStorPool.verifyErr" .TaskResult | noop -}} - {{- end }} ---- -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-pool-082-090-get-storagepoolclaim - namespace: default -spec: - meta: | - id: StoragePoolClaim - apiVersion: openebs.io/v1alpha1 - kind: StoragePoolClaim - action: get - objectName: {{ .TaskResult.CStorPool.spcName }} - post: | - {{- jsonpath .JsonResult "{.metadata.uid}" | trim | saveAs "StoragePoolClaim.uid" .TaskResult | noop -}} - {{- .TaskResult.StoragePoolClaim.uid | notFoundErr "spc uid not found" | saveIf "StoragePoolClaim.notFoundErr" .TaskResult | noop -}} - - {{- $status := .Config.successStatus.value }} - {{- $message := printf "Successfully got details of StoragePoolClaim {%s}" .TaskResult.CStorPool.spcName -}} - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-pool-082-090-get-storagepoolclaim" -}} - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := 
upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} ---- -# Runtask #2 -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-pool-082-090-get-deployment - namespace: default -spec: - meta: | - id: poolDeployment - apiVersion: extensions/v1beta1 - kind: Deployment - action: get - objectName: {{ .UpgradeItem.name }} - runNamespace: {{ .UpgradeItem.namespace }} - post: | - {{- jsonpath .JsonResult "{.spec.template.spec.containers[?(@.name=='cstor-pool')].image}" | trim | saveAs "poolDeployment.cstorpoolimage" .TaskResult | noop -}} - {{- jsonpath .JsonResult "{.spec.template.spec.containers[?(@.name=='cstor-pool-mgmt')].image}" | trim | saveAs "poolDeployment.cstorpoolmgmtimage" .TaskResult | noop -}} - {{- jsonpath .JsonResult "{.metadata.labels.openebs\\.io/version}" | trim | saveAs "poolDeployment.version" .TaskResult | noop -}} - - {{- jsonpath .JsonResult "{.spec.replicas}" | trim | saveAs "poolDeployment.replicaCount" .TaskResult | noop -}} - {{- .TaskResult.poolDeployment.replicaCount | notFoundErr "replicas not found for cstor target deployment" | saveIf "poolDeployment.notFoundErr" .TaskResult | noop -}} - - {{- $message :="" }} - {{- $status :="" }} - {{- $successMessage := printf "Successfully got details of CStorPool deployment {%s}." .UpgradeItem.name -}} - {{- $errMessageInvalidVersion := printf "CStorPool deployment {%s}, version is not in {%s}." .UpgradeItem.name .Config.baseVersion.value -}} - - {{- $isBaseVersion := eq .TaskResult.poolDeployment.version .Config.baseVersion.value }} - {{- $isTargetVersion := eq .TaskResult.poolDeployment.version .Config.targetVersion.value }} - {{- if or $isBaseVersion $isTargetVersion -}} - {{- $status =.Config.successStatus.value }} - {{- $message = $successMessage -}} - - {{- else }} - {{- $status =.Config.failStatus.value }} - {{- $message = $errMessageInvalidVersion -}} - {{- end }} - - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-pool-082-090-get-deployment" -}} - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} - - {{- if eq $status .Config.failStatus.value }} - {{- verifyErr $errMessageInvalidVersion true | saveAs "poolDeployment.verifyErr" .TaskResult | noop -}} - {{- end }} ---- -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-pool-082-090-pre-check-pool-pod-phase - namespace: default -spec: - meta: | - id: poolPodList - apiVersion: v1 - kind: Pod - action: list - runNamespace: {{ .UpgradeItem.namespace }} - options: |- - labelSelector: app=cstor-pool,openebs.io/storage-pool-claim={{ .TaskResult.CStorPool.spcName }} - post: | - {{- $CustomJsonPath := printf "{.items[?(@.status.phase=='Running')].metadata.name}" -}} - {{- $ErrMsg := printf "No running pods found for csp {%s}" .UpgradeItem.name -}} - - {{- jsonpath 
.JsonResult $CustomJsonPath | trim | saveAs "poolPodList.podName" .TaskResult | noop -}} - {{- .TaskResult.poolPodList.podName | notFoundErr $ErrMsg | saveIf "poolPodList.notFoundErr" .TaskResult | noop -}} - - {{- .TaskResult.poolPodList.podName | default "" | splitList " " | len | saveAs "poolPodList.actualRunningPodCount" .TaskResult -}} - - {{- $expectedPodCount := .TaskResult.poolDeployment.replicaCount | int -}} - {{- $msg := printf "expected %v no of running replica pod(s), found only %v replica pod(s)" $expectedPodCount .TaskResult.poolPodList.actualRunningPodCount -}} - {{- .TaskResult.poolPodList.podName | default "" | splitList " " | isLen $expectedPodCount | not | verifyErr $msg | saveIf "poolPodList.verifyErr" .TaskResult | noop -}} - - {{- $message := printf "pool pods are in running phase for csp: {%s}" .UpgradeItem.name -}} - {{- $status := .Config.successStatus.value -}} - - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-pool-082-090-pre-check-pool-pod-phase" -}} - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} ---- -# Runtask #3 -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-pool-082-090-get-storagepool - namespace: default -spec: - meta: | - id: StoragePool - apiVersion: openebs.io/v1alpha1 - kind: StoragePool - action: list - options: |- - labelselector: openebs.io/cstor-pool={{ .UpgradeItem.name }},openebs.io/storage-pool-claim={{ .TaskResult.CStorPool.spcName }} - post: | - {{- jsonpath .JsonResult "{.items[*].metadata.name}" | trim | saveAs "StoragePool.name" .TaskResult | noop -}} - {{- .TaskResult.StoragePool.name | notFoundErr "storagepool name not found" | saveIf "StoragePool.notFoundErr" .TaskResult | noop -}} - - {{- jsonpath .JsonResult "{.items[*].metadata.labels.openebs\\.io/version}" | trim | saveAs "StoragePool.labels" .TaskResult | noop -}} - - {{- $message :="" }} - {{- $status :="" }} - {{- $successMessage := printf "Successfully got details of StoragePool {%s}." .UpgradeItem.name -}} - {{- $errMessageInvalidVersion := printf "StoragePool {%s}, version is not in {%s}." 
.UpgradeItem.name .Config.baseVersion.value -}}
-
-      {{- $isBaseVersion := eq .TaskResult.StoragePool.labels .Config.baseVersion.value }}
-      {{- $isTargetVersion := eq .TaskResult.StoragePool.labels .Config.targetVersion.value }}
-      {{- if or $isBaseVersion $isTargetVersion -}}
-      {{- $status =.Config.successStatus.value }}
-      {{- $message = $successMessage -}}
-
-      {{- else }}
-      {{- $status =.Config.failStatus.value }}
-      {{- $message = $errMessageInvalidVersion -}}
-      {{- end }}
-
-      {{- $taskStatus := upgradeResultWithTaskStatus $status -}}
-      {{- $taskMessage := upgradeResultWithTaskMessage $message -}}
-      {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-pool-082-090-get-storagepool" -}}
-      {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}}
-      {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}}
-      {{- $taskStartTime := upgradeResultWithTaskStartTime now -}}
-      {{- $taskEndTime := upgradeResultWithTaskEndTime now -}}
-      {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}}
-
-      {{- if eq $status .Config.failStatus.value }}
-      {{- verifyErr $errMessageInvalidVersion true | saveAs "StoragePool.verifyErr" .TaskResult | noop -}}
-      {{- end }}
----
-# Runtask #4
-apiVersion: openebs.io/v1alpha1
-kind: RunTask
-metadata:
-  name: upgrade-cstor-pool-082-090-patch-upgrade-result
-  namespace: default
-spec:
-  meta: |
-    id: patchResult
-    apiVersion: openebs.io/v1alpha1
-    kind: UpgradeResult
-    action: patch
-    objectName: {{ .UpgradeItem.upgradeResultName }}
-    runNamespace: {{ .UpgradeItem.upgradeResultNamespace }}
-  task: |-
-    type: merge
-    pspec: |-
-      status:
-        resource:
-          name: {{ .UpgradeItem.name }}
-          namespace: {{ .UpgradeItem.namespace }}
-          kind: {{ .UpgradeItem.kind }}
----
-# Runtask #5
-apiVersion: openebs.io/v1alpha1
-kind: RunTask
-metadata:
-  name: upgrade-cstor-pool-082-090-patch-deployment-image
-  namespace: default
-spec:
-  meta: |
-    {{ $isOldCStorPool := contains .Config.baseVersion.value .TaskResult.poolDeployment.cstorpoolimage }}
-    {{ $isOldCStorPoolMGMT := contains .Config.baseVersion.value .TaskResult.poolDeployment.cstorpoolmgmtimage }}
-    {{ $isOldVersion := or $isOldCStorPool $isOldCStorPoolMGMT | toString }}
-    id: patchDeploymentImage
-    apiVersion: extensions/v1beta1
-    kind: Deployment
-    action: patch
-    objectName: {{ .UpgradeItem.name }}
-    runNamespace: {{ .UpgradeItem.namespace }}
-    disable: {{ eq $isOldVersion "false" }}
-  task: |-
-    type: strategic
-    pspec: |-
-      spec:
-        template:
-          metadata:
-            labels:
-              openebs.io/version: {{ .Config.targetVersion.value }}
-              openebs.io/cstor-pool: {{ .UpgradeItem.name }}
-          spec:
-            containers:
-            - name: cstor-pool
-              image: quay.io/openebs/cstor-pool:{{ .Config.cstorPoolImageTag.value }}
-              env:
-              - name: OPENEBS_IO_CSTOR_ID
-                value: {{ .TaskResult.CStorPool.uid }}
-              livenessProbe:
-                exec:
-                  command:
-                  - "/bin/sh"
-                  - "-c"
-                  - zfs set io.openebs:livenesstimestap='$(date)' cstor-$OPENEBS_IO_CSTOR_ID
-                failureThreshold: 3
-                initialDelaySeconds: 300
-                periodSeconds: 10
-                successThreshold: 1
-                timeoutSeconds: 30
-            - name: cstor-pool-mgmt
-              image: quay.io/openebs/cstor-pool-mgmt:{{ .Config.cstorPoolMgmtImageTag.value }}
-              # Setting ports to nil, since it is not required and
-              # 9500 port is used by m-exporter
-              ports:
-            - name: maya-exporter
-              image: quay.io/openebs/m-exporter:{{ .Config.mExporterImageTag.value }}
-              command:
-              - maya-exporter
-              args:
-              - "-e=pool"
-              ports:
-              - containerPort: 9500
-                protocol: TCP
-
securityContext: - privileged: true - volumeMounts: - - mountPath: /dev - name: device - - mountPath: /tmp - name: tmp - - mountPath: /var/openebs/sparse - name: sparse - - mountPath: /run/udev - name: udev - post: | - {{- $message := printf "Successfully patched deployment {%s}." .UpgradeItem.name -}} - {{- $status := .Config.successStatus.value -}} - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-pool-082-090-patch-deployment-image" -}} - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} - ---- -# Runtask #6 -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-pool-082-090-patch-deployment-image-status - namespace: default -spec: - meta: | - {{ $isOldCStorPool := contains .TaskResult.poolDeployment.cstorpoolimage .Config.baseVersion.value }} - {{ $isOldCStorPoolMGMT := eq .TaskResult.poolDeployment.cstorpoolmgmtimage "" }} - {{ $isOldVersion := or $isOldCStorPool $isOldCStorPoolMGMT | toString }} - id: patchDeploymentImageStatus - apiVersion: extensions/v1beta1 - kind: Deployment - action: rolloutstatus - objectName: {{ .UpgradeItem.name }} - runNamespace: {{ .UpgradeItem.namespace }} - retry: "20,20s" - post: | - {{- jsonpath .JsonResult "{.isRolledout}" | trim | saveAs "patchDeploymentImageStatus.isRolledout" .TaskResult | noop -}} - {{- jsonpath .JsonResult "{.message}" | trim | saveAs "patchDeploymentImageStatus.rolloutStatus" .TaskResult | noop -}} - - {{- $status := "" -}} - {{- $message :="" -}} - {{- $verifyErrMessage := "Pool deployment rollout not successful" -}} - {{- $rolloutStatusMessage := printf "rollout status: {%s} name: {%s} namespace: {%s}" .TaskResult.patchDeploymentImageStatus.rolloutStatus .UpgradeItem.name .UpgradeItem.namespace -}} - - {{- if eq .TaskResult.patchDeploymentImageStatus.isRolledout "true" }} - {{- $status = .Config.successStatus.value -}} - - {{- else }} - {{- "waiting for deployment rollout" | saveAs "patchDeploymentImageStatus.verifyErr" .TaskResult | noop -}} - {{- $status = .Config.failStatus.value -}} - {{- end }} - - {{- $message = $rolloutStatusMessage -}} - - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-pool-082-090-patch-deployment-image-status" -}} - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - - {{- if eq $status .Config.failStatus.value }} - {{- verifyErr $verifyErrMessage true | saveAs "patchDeploymentImageStatus.verifyErr" .TaskResult | noop -}} - {{- end }} - ---- -# Runtask #7 -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-pool-082-090-post-check-patch-deployment-image - namespace: default -spec: - meta: | - {{ $isOldCStorPool := contains .TaskResult.poolDeployment.cstorpoolimage 
.Config.baseVersion.value }} - {{ $isOldCStorPoolMGMT := eq .TaskResult.poolDeployment.cstorpoolmgmtimage "" }} - {{ $isOldVersion := or $isOldCStorPool $isOldCStorPoolMGMT | toString }} - id: postCheckDeploymentImagePatch - apiVersion: extensions/v1beta1 - kind: Deployment - action: get - objectName: {{ .UpgradeItem.name }} - runNamespace: {{ .UpgradeItem.namespace }} - disable: {{ ne $isOldVersion "true" }} - post: |- - {{- jsonpath .JsonResult "{.spec.template.spec.containers[?(@.name=='cstor-pool')].image}" | trim | saveAs "postCheckDeploymentImagePatch.cstorpoolimage" .TaskResult | noop -}} - {{- jsonpath .JsonResult "{.spec.template.spec.containers[?(@.name=='cstor-pool-mgmt')].image}" | trim | saveAs "postCheckDeploymentImagePatch.cstorpoolmgmtimage" .TaskResult | noop -}} - {{- jsonpath .JsonResult "{.spec.template.spec.containers[?(@.name=='maya-exporter')].image}" | trim | saveAs "postCheckDeploymentImagePatch.mayaexporterimage" .TaskResult | noop -}} - - {{ $isNewCStorPool := contains .Config.targetVersion.value .TaskResult.postCheckDeploymentImagePatch.cstorpoolimage }} - {{ $isNewCStorPoolMGMT := contains .Config.targetVersion.value .TaskResult.postCheckDeploymentImagePatch.cstorpoolmgmtimage }} - {{ $isNewMayaExporter := contains .Config.targetVersion.value .TaskResult.postCheckDeploymentImagePatch.mayaexporterimage }} - - {{- $status := "" -}} - - {{- if and $isNewCStorPool $isNewCStorPoolMGMT $isNewMayaExporter }} - {{- $status = .Config.successStatus.value -}} - - {{- else }} - {{- $status = .Config.failStatus.value -}} - {{- end }} - - {{- $taskName := "upgrade-cstor-pool-082-090-post-check-patch-deployment-image" -}} - {{- $message := printf "pool image :{%s} pool mgmt image :{%s} maya-exporter image : {%s}" .TaskResult.postCheckDeploymentImagePatch.cstorpoolimage .TaskResult.postCheckDeploymentImagePatch.cstorpoolmgmtimage .TaskResult.postCheckDeploymentImagePatch.mayaexporterimage -}} - ---- -# Runtask #8 -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-pool-082-090-patch-sp-version - namespace: default -spec: - meta: | - id: patchStoragePool - apiVersion: openebs.io/v1alpha1 - kind: StoragePool - action: patch - objectName: {{ .TaskResult.StoragePool.name }} - task: |- - type: merge - pspec: |- - metadata: - labels: - openebs.io/version: {{ .Config.targetVersion.value }} - ownerReferences: - - apiVersion: openebs.io/v1alpha1 - blockOwnerDeletion: true - controller: true - kind: CStorPool - name: {{ .UpgradeItem.name }} - uid: {{ .TaskResult.CStorPool.uid }} - post: |- - {{- $message := printf "version label successfully patched StoragePool {%s}." 
.UpgradeItem.name -}} - - {{- $status := .Config.successStatus.value -}} - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-pool-082-090-patch-sp-version" -}} - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} - ---- -# Runtask #9 -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-pool-082-090-patch-sp-version-post-check - namespace: default -spec: - meta: | - id: postCheckStoragePoolVersionPatch - apiVersion: openebs.io/v1alpha1 - kind: StoragePool - action: get - objectName: {{ .TaskResult.StoragePool.name }} - post: | - {{- jsonpath .JsonResult "{.metadata.labels.openebs\\.io/version}" | trim | saveAs "postCheckStoragePoolVersionPatch.version" .TaskResult | noop -}} - - {{- $status := "" -}} - - {{- if ne .TaskResult.postCheckStoragePoolVersionPatch.version .Config.targetVersion.value }} - {{- $status = .Config.failStatus.value -}} - - {{- else }} - {{- $status = .Config.successStatus.value -}} - {{- end }} - - {{- $message := printf "version label value - {%s}" .TaskResult.postCheckStoragePoolVersionPatch.version -}} - - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-pool-082-090-patch-sp-version-post-check" -}} - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} ---- -# Runtask #10 -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-pool-082-090-patch-csp-version - namespace: default -spec: - meta: | - id: patchCStorPool - apiVersion: openebs.io/v1alpha1 - kind: CStorPool - action: patch - objectName: {{ .UpgradeItem.name }} - task: |- - type: merge - pspec: |- - metadata: - labels: - openebs.io/version: {{ .Config.targetVersion.value }} - ownerReferences: - - apiVersion: openebs.io/v1alpha1 - blockOwnerDeletion: true - controller: true - kind: StoragePoolClaim - name: {{ .TaskResult.CStorPool.spcName }} - uid: {{ .TaskResult.StoragePoolClaim.uid }} - post: |- - {{- $message := printf "version label successfully patched CStorPool {%s}." 
.UpgradeItem.name -}} - - {{- $status := .Config.successStatus.value -}} - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-pool-082-090-patch-csp-version" -}} - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} - ---- -# Runtask #11 -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-pool-082-090-patch-csp-version-post-check - namespace: default -spec: - meta: | - id: postCheckCStorPoolVersionPatch - apiVersion: openebs.io/v1alpha1 - kind: CStorPool - action: get - objectName: {{ .UpgradeItem.name }} - post: | - {{- jsonpath .JsonResult "{.metadata.labels.openebs\\.io/version}" | trim | saveAs "postCheckCStorPoolVersionPatch.version" .TaskResult | noop -}} - - {{- $taskName := "upgrade-cstor-pool-082-090-patch-csp-version-post-check" -}} - {{- $status := "" -}} - - {{- if ne .TaskResult.postCheckCStorPoolVersionPatch.version .Config.targetVersion.value }} - {{- $status = .Config.failStatus.value -}} - - {{- else }} - {{- $status = .Config.successStatus.value -}} - {{- end }} - - {{- $message := printf "version label value - {%s}" .TaskResult.postCheckCStorPoolVersionPatch.version -}} - - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-pool-082-090-patch-sp-version-post-check" -}} - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} ---- -# Runtask #12 -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-pool-082-090-patch-deployment-version - namespace: default -spec: - meta: | - id: patchDeployment - apiVersion: extensions/v1beta1 - kind: Deployment - action: patch - objectName: {{ .UpgradeItem.name }} - runNamespace: {{ .UpgradeItem.namespace }} - task: |- - type: strategic - pspec: |- - metadata: - labels: - openebs.io/version: {{ .Config.targetVersion.value }} - ownerReferences: - - apiVersion: openebs.io/v1alpha1 - blockOwnerDeletion: true - controller: true - kind: CStorPool - name: {{ .UpgradeItem.name }} - uid: {{ .TaskResult.CStorPool.uid }} - post: |- - {{- $message := printf "version label successfully patched Pool Deployment {%s} in {%s} namespace." 
.UpgradeItem.name .UpgradeItem.namespace -}} - - {{- $status := .Config.successStatus.value -}} - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-pool-082-090-patch-deployment-version" -}} - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} - ---- -# Runtask #13 -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-pool-082-090-patch-deployment-version-post-check - namespace: default -spec: - meta: | - id: postCheckDeploymentVersionPatch - apiVersion: extensions/v1beta1 - kind: Deployment - action: get - objectName: {{ .UpgradeItem.name }} - runNamespace: {{ .UpgradeItem.namespace }} - task: | - {{- jsonpath .JsonResult "{.metadata.labels.openebs\\.io/version}" | trim | saveAs "postCheckDeploymentVersionPatch.version" .TaskResult | noop -}} - - {{- $status := "" -}} - - {{- if ne .TaskResult.postCheckDeploymentVersionPatch.version .Config.targetVersion.value }} - {{- $status = .Config.failStatus.value -}} - - {{- else }} - {{- $status = .Config.successStatus.value -}} - {{- end }} - - {{- $message := printf "version label value - {%s}" .TaskResult.postCheckDeploymentVersionPatch.version -}} - - {{- $status := .Config.successStatus.value -}} - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-pool-082-090-patch-deployment-version-post-check" -}} - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} ---- -# Runtask #14 -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-pool-082-090-list-replicaset - namespace: default -spec: - meta: | - id: replicaSetList - apiVersion: extensions/v1beta1 - kind: ReplicaSet - action: list - runNamespace: {{ .UpgradeItem.namespace }} - options: |- - labelSelector: app=cstor-pool,openebs.io/storage-pool-claim={{ .TaskResult.CStorPool.spcName }} - post: | - {{- $CustomJsonpath := printf "{range .items[?(@.metadata.ownerReferences[0].name== '%s')]}{@.metadata.name} {end}" .UpgradeItem.name -}} - {{- jsonpath .JsonResult $CustomJsonpath | trim | replace " " "," | saveAs "replicaSetList.list" .TaskResult | noop -}} - - {{- $status := .Config.successStatus.value -}} - {{- $message := printf "ReplicaSet list {%s}" .TaskResult.replicaSetList.list -}} - - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-pool-082-090-list-replicaset" -}} - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace 
.UpgradeItem.upgradeResultNamespace -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} ---- -# Runtask #15 -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-pool-082-090-list-pod - namespace: default -spec: - meta: | - id: podList - apiVersion: v1 - kind: Pod - action: list - runNamespace: {{ .UpgradeItem.namespace }} - options: |- - labelSelector: app=cstor-pool,openebs.io/storage-pool-claim={{ .TaskResult.CStorPool.spcName }} - post: | - {{- $CustomJsonpath := printf "{range .items[?(@.metadata.ownerReferences[0].name== '%s')]}{@.metadata.name}{end}" .TaskResult.newReplicaset.name -}} - {{- $podrsPairs := jsonpath .JsonResult "{range .items[*]}{@.metadata.name},{@.metadata.ownerReferences[0].name} {end}" | trim | default "" | splitList " " -}} - {{- $podrsPairs| saveAs "podList.map" .TaskResult -}} - - {{ $podName := "" }} - {{ $replicasetName := "" }} - {{ $replicaset := "" }} - {{ $match := "" }} - {{ $status := .Config.successStatus.value }} - {{ $replicasetList := .TaskResult.replicaSetList.list | splitList "," }} - - {{- range $k, $v := .TaskResult.podList.map }} - {{ $k := $k }} - {{ $v := $v }} - {{- $replicaset = $v | splitList "," | last -}} - {{- $match := pickContains $replicaset $replicasetList -}} - - {{- if ne $match "" }} - {{ $podName = $v | splitList "," | first }} - {{ $replicasetName = $v | splitList "," | last }} - {{- end }} - - {{ $match = "" }} - - {{- end }} - - {{ $staleReplicaset := .TaskResult.replicaSetList.list | replace $replicasetName ""}} - {{ $staleReplicaset = $staleReplicaset | replace ",," "," }} - {{ $staleReplicaset = $staleReplicaset | replace "," " " | trim }} - {{ $staleReplicaset = $staleReplicaset | replace " " "," }} - - {{- $podName | saveAs "podList.podName" .TaskResult -}} - {{- $replicasetName | saveAs "podList.replicasetName" .TaskResult -}} - {{- $staleReplicaset | saveAs "podList.staleReplicaset" .TaskResult -}} - - {{- $message := printf "pool Pod-ReplicaSet map: {%s}\nstale ReplicaSet list: {%s}" .TaskResult.podList.map .TaskResult.podList.staleReplicaset -}} - - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-pool-082-090-list-pod" -}} - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} ---- -# Runtask #16 -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-pool-082-090-delete-replicaset - namespace: default -spec: - meta: | - id: deleteOldReplicaset - apiVersion: extensions/v1beta1 - kind: ReplicaSet - action: delete - runNamespace: {{ .UpgradeItem.namespace }} - objectName: {{ .TaskResult.podList.staleReplicaset }} - disable: {{ eq .TaskResult.podList.staleReplicaset "" }} - post: | - {{- $message := printf "stale replicaset {%s} successfully deleted in {%s} namespace." 
.TaskResult.podList.staleReplicaset .UpgradeItem.namespace -}}
-    {{- $status := .Config.successStatus.value -}}
-
-    {{- $taskMessage := upgradeResultWithTaskMessage $message -}}
-    {{- $taskStatus := upgradeResultWithTaskStatus $status -}}
-    {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-pool-082-090-delete-replicaset" -}}
-    {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}}
-    {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}}
-    {{- $taskStartTime := upgradeResultWithTaskStartTime now -}}
-    {{- $taskEndTime := upgradeResultWithTaskEndTime now -}}
-    {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}}
----
diff --git a/k8s/upgrades/0.8.2-0.9.0/cstor/cstor-volume-update-082-090.yaml b/k8s/upgrades/0.8.2-0.9.0/cstor/cstor-volume-update-082-090.yaml
deleted file mode 100644
index abafe71e1f..0000000000
--- a/k8s/upgrades/0.8.2-0.9.0/cstor/cstor-volume-update-082-090.yaml
+++ /dev/null
@@ -1,1064 +0,0 @@
-# Sample runtasks for upgrading a cstor volume
-
-# CASTemplate cstor-volume-update-082-090 is
-# used to upgrade a cstor volume
-apiVersion: openebs.io/v1alpha1
-kind: CASTemplate
-metadata:
-  name: cstor-volume-update-082-090
-spec:
-  defaultConfig:
-  # Base version is the version from which upgrade can happen.
-  # This CAS template does not support upgrading an OpenEBS version
-  # that is anything other than the specified base version.
-  # Using this CAS template, one can upgrade from OpenEBS version
-  # 0.8.2 to 0.9.0 only.
-  - name: baseVersion
-    value: "0.8.2"
-  - name: targetVersion
-    value: "0.9.0"
-  - name: successStatus
-    value: "Success"
-  - name: failStatus
-    value: "Fail"
-  run:
-    tasks:
-    # Runtask #1
-    # This runtask will patch the upgraderesult cr with the details of the pv
-    # which is undergoing upgrade.
-    - upgrade-cstor-volume-082-090-patch-upgrade-result
-
-    # Runtask #2
-    # This runtask will get the details of the pv and verify it.
-    - upgrade-cstor-volume-082-090-get-pv
-
-    # Runtask #3
-    # This runtask will get the details of the cStor target deployment and
-    # verify that its current version is the expected base version.
-    # For every cStor volume there exists only one target deployment, and this
-    # runtask is developed on that assumption.
-    - upgrade-cstor-volume-082-090-list-target-deployment
-
-    - upgrade-cstor-volume-082-090-pre-check-target-pod-phase
-
-    # Runtask #4
-    # This runtask will get the details of the cStor target service and verify
-    # that its current version is the expected base version.
-    # For every cStor volume there exists only one target service, and this
-    # runtask is developed on that assumption.
-    - upgrade-cstor-volume-082-090-list-target-svc
-
-    # Runtask #5
-    # This runtask will get the details of the cstorvolume custom resource and
-    # verify that its current version is the expected base version.
-    # For every cStor volume there exists only one cstorvolume custom resource,
-    # and this runtask is developed on that assumption.
-    - upgrade-cstor-volume-082-090-list-cstorvolume
-
-    # Runtask #6
-    # This runtask will get the details of the cstorvolumereplica custom
-    # resources and verify that their current version is the expected base
-    # version. This runtask also decides whether the volume is a cloned volume
-    # or not. This information is saved in a task result variable to be used
-    # later by other runtasks, as sketched below.
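-    # For illustration, the cloned-volume details saved here are consumed
-    # later by runtask #12 roughly as follows (condensed from its patch spec):
-    #
-    #   metadata:
-    #     labels:
-    #       {{- if eq .TaskResult.listCStorVolumeReplica.isCloned "true" }}
-    #       openebs.io/source-volume: {{ .TaskResult.listCStorVolumeReplica.sourceVolume }}
-    #       {{- end }}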
-    - upgrade-cstor-volume-082-090-list-cstorvolumereplicas
-
-    # Runtask #7
-    # This runtask will patch the cStor target deployment containers with the
-    # target version.
-    - upgrade-cstor-volume-082-090-patch-target-deployment-latest-image
-
-    # Runtask #8
-    # This runtask will verify that, after the patch, the cStor target
-    # deployment containers have been rolled out successfully.
-    - upgrade-cstor-volume-082-090-post-check-deployment-rollout-status-latest-image
-
-    # Runtask #9
-    # This runtask will verify that the cStor target deployment containers
-    # that are running successfully have the appropriate target version.
-    - upgrade-cstor-volume-082-090-post-check-patch-deployment-image
-
-    # Runtask #10
-    # This runtask will patch the cStor target service with the version label
-    # of the target version. It also patches it with the PVC label.
-    - upgrade-cstor-volume-082-090-patch-target-svc
-
-    # Runtask #11
-    # This runtask will check that the version label has been successfully
-    # updated for the cstor target service.
-    - upgrade-cstor-volume-082-090-post-check-target-svc
-
-    # Runtask #12
-    # This runtask will patch the cstorvolume resource with the version label
-    # of the target version. This runtask will also add a source volume label
-    # to the cstorvolume custom resource if the volume is a cloned volume.
-    - upgrade-cstor-volume-082-090-patch-cstor-volume
-
-    # Runtask #13
-    # This runtask will check that the version label has been successfully
-    # updated for the cstorvolume.
-    - upgrade-cstor-volume-082-090-post-check-cstor-volume-cr
-
-    # Runtask #14
-    # This runtask will patch the CVRs (i.e. the cstorvolumereplicas of the
-    # volume) with the version label of the target version.
-    - upgrade-cstor-volume-082-090-patch-cstor-volume-replica
-
-    # Runtask #15
-    # This runtask will check that the version labels have been successfully
-    # updated for the CVRs.
-    - upgrade-cstor-volume-082-090-post-check-cstor-volume-replicas
-
-    # Runtask #16
-    # This runtask will list all the replicasets of the cstor target
-    # deployment.
-    - upgrade-cstor-volume-082-090-list-target-replicaset
-
-    # Runtask #17
-    # This runtask will list the currently running pod of the cStor target
-    # deployment and help figure out the stale replicaset entries of the cstor
-    # target deployment.
-    - upgrade-cstor-volume-082-090-list-target-pod
-
-    # Runtask #18
-    # This runtask will delete the stale replicaset(s) of the cStor target
-    # deployment.
-    - upgrade-cstor-volume-082-090-delete-stale-replicaset-target
-
-    # Runtask #19
-    # This runtask will list the snapshots related to this volume.
-    - upgrade-cstor-volume-082-090-list-volumesnapshot
-
-    # Runtask #20
-    # This runtask will patch the snapshotdata with the capacity.
-    - upgrade-cstor-volume-082-090-patch-volumesnapshotdatadata
-
-    # Runtask #21
-    # This runtask will post-check the patched snapshotdata.
-    - upgrade-cstor-volume-082-090-post-check-volumesnapshotdata
-  taskNamespace: default
---
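-# For illustration only: the runtasks below read their input from the
-# .UpgradeItem.* values supplied by the upgrade framework; they are not
-# applied standalone. A hypothetical upgrade item driving this CAS template
-# would carry fields like the following (names taken from the .UpgradeItem
-# references in the templates; see the volume-pools-upgrade design doc for
-# the actual UpgradeTask schema):
-#
-#   upgradeItem:
-#     name: pvc-abc123                  # PV name (illustrative)
-#     namespace: openebs                # namespace of the volume resources
-#     kind: cstor-volume
-#     upgradeResultName: ur-pvc-abc123  # UpgradeResult CR updated by each runtask
-#     upgradeResultNamespace: default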
-# Runtask #1
-apiVersion: openebs.io/v1alpha1
-kind: RunTask
-metadata:
-  name: upgrade-cstor-volume-082-090-patch-upgrade-result
-  namespace: default
-spec:
-  meta: |
-    id: patchResult
-    apiVersion: openebs.io/v1alpha1
-    kind: UpgradeResult
-    action: patch
-    objectName: {{ .UpgradeItem.upgradeResultName }}
-    runNamespace: default
-  post: |
-    {{- $message := printf "upgradeResult {%s} has been patched with basic details such as name and namespace of the resource to be upgraded." .UpgradeItem.upgradeResultName -}}
-    {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}}
-    {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}}
-    {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-volume-082-090-patch-upgrade-result" -}}
-    {{- $taskStatus := upgradeResultWithTaskStatus "successful" -}}
-    {{- $taskMessage := upgradeResultWithTaskMessage $message -}}
-    {{- $taskStartTime := upgradeResultWithTaskStartTime now -}}
-    {{- $taskEndTime := upgradeResultWithTaskEndTime now -}}
-    {{- $taskRetries := upgradeResultWithTaskRetries 7 -}}
-    {{- upgradeResultUpdateTasks $taskStartTime $taskRetries $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}}
-  task: |-
-    type: merge
-    pspec: |-
-      status:
-        resource:
-          name: {{ .UpgradeItem.name }}
-          namespace: {{ .UpgradeItem.namespace }}
-          kind: {{ .UpgradeItem.kind }}
----
-# Runtask #2
-apiVersion: openebs.io/v1alpha1
-kind: RunTask
-metadata:
-  name: upgrade-cstor-volume-082-090-get-pv
-  namespace: default
-spec:
-  meta: |
-    id: getPVDetails
-    apiVersion: v1
-    kind: PersistentVolume
-    action: get
-    objectName: {{ .UpgradeItem.name }}
-    runNamespace: {{ .UpgradeItem.namespace }}
-  post: |
-    {{- jsonpath .JsonResult "{.metadata.labels.openebs\\.io/cas-type}" | trim | saveAs "getPVDetails.volCASType" .TaskResult | noop -}}
-    {{- .TaskResult.getPVDetails.volCASType | notFoundErr "volume CAS type not found" | saveIf "getPVDetails.notFoundErr" .TaskResult | noop -}}
-
-    {{- jsonpath .JsonResult "{.spec.storageClassName}" | trim | saveAs "getPVDetails.storageClassName" .TaskResult | noop -}}
-    {{- .TaskResult.getPVDetails.storageClassName | notFoundErr "storage class name not found for given volume" | saveIf "getPVDetails.notFoundErr" .TaskResult | noop -}}
-
-    {{- jsonpath .JsonResult "{.spec.claimRef.namespace}" | trim | saveAs "getPVDetails.pvcNamespace" .TaskResult | noop -}}
-    {{- .TaskResult.getPVDetails.pvcNamespace | notFoundErr "pvc namespace not found for given volume" | saveIf "getPVDetails.notFoundErr" .TaskResult | noop -}}
-
-    {{- jsonpath .JsonResult "{.spec.claimRef.name}" | trim | saveAs "getPVDetails.pvcName" .TaskResult | noop -}}
-    {{- .TaskResult.getPVDetails.pvcName | notFoundErr "pvc name not found for given volume" | saveIf "getPVDetails.notFoundErr" .TaskResult | noop -}}
-
-    {{- jsonpath .JsonResult "{.spec.capacity.storage}" | trim | saveAs "getVolDetails.pvCapacity" .TaskResult | noop -}}
-    {{- .TaskResult.getVolDetails.pvCapacity | notFoundErr "pv capacity not found for given volume" | saveIf "getVolDetails.notFoundErr" .TaskResult | noop -}}
-
-    {{- $message := printf "Successfully got details of PV {%s}." .UpgradeItem.name -}}
-    {{- $status :=.Config.successStatus.value }}
-
-    {{- $taskStatus := upgradeResultWithTaskStatus $status -}}
-    {{- $taskMessage := upgradeResultWithTaskMessage $message -}}
-    {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-volume-082-090-get-pv" -}}
-    {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}}
-    {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}}
-    {{- $taskStartTime := upgradeResultWithTaskStartTime now -}}
-    {{- $taskEndTime := upgradeResultWithTaskEndTime now -}}
-    {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}}
----
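-# The extraction idiom used throughout these runtasks, condensed from the
-# task above: run a JSONPath query over the action's JSON result, persist the
-# value under a dotted key in .TaskResult, and turn an empty result into a
-# task-failing error:
-#
-#   {{- jsonpath .JsonResult "{.spec.claimRef.name}" | trim | saveAs "getPVDetails.pvcName" .TaskResult | noop -}}
-#   {{- .TaskResult.getPVDetails.pvcName | notFoundErr "pvc name not found for given volume" | saveIf "getPVDetails.notFoundErr" .TaskResult | noop -}}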
-# Runtask #3
-apiVersion: openebs.io/v1alpha1
-kind: RunTask
-metadata:
-  name: upgrade-cstor-volume-082-090-list-target-deployment
-  namespace: default
-spec:
-  meta: |
-    id: listTargetDeployment
-    apiVersion: extensions/v1beta1
-    kind: Deployment
-    action: list
-    runNamespace: {{ .UpgradeItem.namespace }}
-    options: |-
-      labelSelector: openebs.io/persistent-volume={{ .UpgradeItem.name }},openebs.io/target=cstor-target
-  post: |
-    {{- jsonpath .JsonResult "{.items[*].metadata.name}" | trim | saveAs "listTargetDeployment.targetDeploymentName" .TaskResult | noop -}}
-    {{- .TaskResult.listTargetDeployment.targetDeploymentName | notFoundErr "volume target deployment not found" | saveIf "listTargetDeployment.notFoundErr" .TaskResult | noop -}}
-    {{- jsonpath .JsonResult "{.items[*].metadata.labels.openebs\\.io/version}" | trim | saveAs "listTargetDeployment.labels" .TaskResult | noop -}}
-    {{- jsonpath .JsonResult "{.items[*].metadata.labels.openebs\\.io/persistent-volume-claim}" | trim | saveAs "listTargetDeployment.pvcName" .TaskResult | noop -}}
-
-    {{- jsonpath .JsonResult "{.items[*].spec.replicas}" | trim | saveAs "listTargetDeployment.replicaCount" .TaskResult | noop -}}
-    {{- .TaskResult.listTargetDeployment.replicaCount | notFoundErr "replicas not found for cstor target deployment" | saveIf "listTargetDeployment.notFoundErr" .TaskResult | noop -}}
-
-    {{- $message := printf "the target deployment for this PV {%s} is : {%s}" .UpgradeItem.name .TaskResult.listTargetDeployment.targetDeploymentName -}}
-    {{- $status :="" }}
-    {{- $verifyErrMessage := "cStor volume target version is not 0.8.2" -}}
-
-    {{- $isBaseVersion := eq .TaskResult.listTargetDeployment.labels .Config.baseVersion.value }}
-    {{- $isTargetVersion := eq .TaskResult.listTargetDeployment.labels .Config.targetVersion.value }}
-    {{- if or $isBaseVersion $isTargetVersion -}}
-    {{- $status =.Config.successStatus.value }}
-    {{- else }}
-    {{- $status =.Config.failStatus.value }}
-    {{- end }}
-
-    {{- $taskStatus := upgradeResultWithTaskStatus $status -}}
-    {{- $taskMessage := upgradeResultWithTaskMessage $message -}}
-    {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-volume-082-090-list-target-deployment" -}}
-    {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}}
-    {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}}
-    {{- $taskStartTime := upgradeResultWithTaskStartTime now -}}
-    {{- $taskEndTime := upgradeResultWithTaskEndTime now -}}
-    {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}}
-
-    {{- if eq $status .Config.failStatus.value }}
-    {{- verifyErr $verifyErrMessage true | saveAs "listTargetDeployment.verifyErr" .TaskResult | noop -}}
-    {{- end }}
----
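-# Version pre-check pattern used by this and the following runtasks: read the
-# openebs.io/version label and pass only if it equals the base or the target
-# version (the target case lets an already-upgraded resource pass on a re-run):
-#
-#   {{- $isBaseVersion := eq $observedVersion .Config.baseVersion.value }}
-#   {{- $isTargetVersion := eq $observedVersion .Config.targetVersion.value }}
-#   {{- if or $isBaseVersion $isTargetVersion -}}
-#   {{- $status =.Config.successStatus.value }}
-#   {{- else }}
-#   {{- $status =.Config.failStatus.value }}
-#   {{- end }}
-#
-# where $observedVersion stands for the saved task-result value, e.g.
-# .TaskResult.listTargetDeployment.labels.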
-apiVersion: openebs.io/v1alpha1
-kind: RunTask
-metadata:
-  name: upgrade-cstor-volume-082-090-pre-check-target-pod-phase
-  namespace: default
-spec:
-  meta: |
-    id: targetPodList
-    apiVersion: v1
-    kind: Pod
-    action: list
-    runNamespace: {{ .UpgradeItem.namespace }}
-    options: |-
-      labelSelector: openebs.io/persistent-volume={{ .UpgradeItem.name }},openebs.io/target=cstor-target
-  post: |
-    {{- $CustomJsonPath := printf "{.items[?(@.status.phase=='Running')].metadata.name}" -}}
-    {{- $ErrMsg := printf "No running pods found for cstor volume: {%s}" .UpgradeItem.name -}}
-
-    {{- jsonpath .JsonResult $CustomJsonPath | trim | saveAs "targetPodList.podName" .TaskResult | noop -}}
-    {{- .TaskResult.targetPodList.podName | notFoundErr $ErrMsg | saveIf "targetPodList.notFoundErr" .TaskResult | noop -}}
-
-    {{- .TaskResult.targetPodList.podName | default "" | splitList " " | len | saveAs "targetPodList.actualRunningPodCount" .TaskResult -}}
-
-    {{- $expectedPodCount := .TaskResult.listTargetDeployment.replicaCount | int -}}
-    {{- $msg := printf "expected %v running target pod(s), found only %v target pod(s)" $expectedPodCount .TaskResult.targetPodList.actualRunningPodCount -}}
-    {{- .TaskResult.targetPodList.podName | default "" | splitList " " | isLen $expectedPodCount | not | verifyErr $msg | saveIf "targetPodList.verifyErr" .TaskResult | noop -}}
-
-    {{- $message := printf "target pods are in running phase for volume: {%s}" .UpgradeItem.name -}}
-    {{- $status := .Config.successStatus.value -}}
-
-    {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}}
-    {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}}
-    {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-volume-082-090-pre-check-target-pod-phase" -}}
-    {{- $taskStatus := upgradeResultWithTaskStatus $status -}}
-    {{- $taskMessage := upgradeResultWithTaskMessage $message -}}
-    {{- $taskStartTime := upgradeResultWithTaskStartTime now -}}
-    {{- $taskEndTime := upgradeResultWithTaskEndTime now -}}
-    {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}}
----
-# Runtask #4
-apiVersion: openebs.io/v1alpha1
-kind: RunTask
-metadata:
-  name: upgrade-cstor-volume-082-090-list-target-svc
-  namespace: default
-spec:
-  meta: |
-    id: listTargetService
-    apiVersion: v1
-    kind: Service
-    action: list
-    runNamespace: {{ .UpgradeItem.namespace }}
-    options: |-
-      labelSelector: openebs.io/persistent-volume={{ .UpgradeItem.name }},openebs.io/target-service=cstor-target-svc
-  post: |
-    {{- jsonpath .JsonResult "{.items[*].metadata.name}" | trim | saveAs "listTargetService.items" .TaskResult | noop -}}
-    {{- .TaskResult.listTargetService.items | notFoundErr "volume target service not found" | saveIf "listTargetService.notFoundErr" .TaskResult | noop -}}
-    {{- jsonpath .JsonResult "{.items[0].metadata.labels.openebs\\.io/version}" | trim | saveAs "listTargetService.version" .TaskResult | noop -}}
-
-    {{- $message := printf "the target service for this volume {%s} is : {%s}" .UpgradeItem.name .TaskResult.listTargetService.items -}}
-    {{- $status :="" }}
-    {{- $verifyErrMessage := "cStor volume target service version is not 0.8.2" -}}
-
-    {{- $isBaseVersion := eq .TaskResult.listTargetService.version .Config.baseVersion.value -}}
-    {{- $isTargetVersion := eq .TaskResult.listTargetService.version .Config.targetVersion.value -}}
-    {{- if or $isBaseVersion $isTargetVersion -}}
-    {{- $status =.Config.successStatus.value }}
-    {{- else }}
-    {{- $status =.Config.failStatus.value }}
-
{{- end }} - - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-volume-082-090-list-target-svc" -}} - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} - - {{- if eq $status .Config.failStatus.value }} - {{- verifyErr $verifyErrMessage true | saveAs "listTargetService.verifyErr" .TaskResult | noop -}} - {{- end }} ---- -#Runtask #5 -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-volume-082-090-list-cstorvolume - namespace: default -spec: - meta: | - id: listCStorVolume - apiVersion: openebs.io/v1alpha1 - kind: CStorVolume - action: list - runNamespace: {{ .UpgradeItem.namespace }} - options: |- - labelSelector: openebs.io/persistent-volume={{ .UpgradeItem.name }} - post: | - {{- jsonpath .JsonResult "{.items[*].metadata.name}" | trim | saveAs "listCStorVolume.items" .TaskResult | noop -}} - {{- .TaskResult.listCStorVolume.items | notFoundErr "cstor volume cr not found" | saveIf "listCStorVolume.notFoundErr" .TaskResult | noop -}} - - {{- jsonpath .JsonResult "{.items[0].metadata.labels.openebs\\.io/version}" | trim | saveAs "listCStorVolume.labels" .TaskResult | noop -}} - - {{- $message := printf "cStor volume for this PV {%s} is : {%s}" .UpgradeItem.name .TaskResult.listCStorVolume.items -}} - {{- $status :="" }} - {{- $verifyErrMessage:= "cStor volume version is not 0.8.2" -}} - - {{- $isBaseVersion := eq .TaskResult.listCStorVolume.labels .Config.baseVersion.value -}} - {{- $isTargetVersion := eq .TaskResult.listCStorVolume.labels .Config.targetVersion.value -}} - {{- if or $isBaseVersion $isTargetVersion -}} - {{- $status =.Config.successStatus.value }} - {{- else }} - {{- $status =.Config.failStatus.value }} - {{- end }} - - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-volume-082-090-list-cstorvolume" -}} - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} - - {{- if eq $status .Config.failStatus.value }} - {{- verifyErr $verifyErrMessage true | saveAs "listCStorVolume.verifyErr" .TaskResult | noop -}} - {{- end }} - ---- -# Runtask #6 -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-volume-082-090-list-cstorvolumereplicas - namespace: default -spec: - meta: | - id: listCStorVolumeReplica - apiVersion: openebs.io/v1alpha1 - kind: CStorVolumeReplica - action: list - runNamespace: {{ .UpgradeItem.namespace }} - options: |- - labelSelector: openebs.io/persistent-volume={{ .UpgradeItem.name }} - post: | - {{- jsonpath .JsonResult "{range .items[*]}{@.metadata.name} {end}" | trim | saveAs "listCStorVolumeReplica.items" .TaskResult | 
noop -}} - {{- .TaskResult.listCStorVolumeReplica.items | notFoundErr "cstor volume replicas not found" | saveIf "listCStorVolumeReplica.notFoundErr" .TaskResult | noop -}} - {{- jsonpath .JsonResult "{.items[*].metadata.labels.openebs\\.io/version}" | trim | toString | saveAs "listCStorVolumeReplica.version" .TaskResult | noop -}} - - - {{- jsonpath .JsonResult "{.items[0].metadata.labels.openebs\\.io/cloned}" | trim | saveAs "listCStorVolumeReplica.isCloned" .TaskResult | noop -}} - - {{- if eq .TaskResult.listCStorVolumeReplica.isCloned "true" }} - {{- jsonpath .JsonResult "{.items[0].metadata.annotations.openebs\\.io/source-volume}" | trim | saveAs "listCStorVolumeReplica.sourceVolume" .TaskResult | noop -}} - {{- else }} - {{- "" | saveAs "listCStorVolumeReplica.sourceVolume" .TaskResult | noop -}} - {{- end }} - - {{- $status :="" }} - {{- $message :="" }} - {{- $successMessage := printf "Got details of CVR(s), {%s} , successfully for {%s}" .UpgradeItem.name .TaskResult.listCStorVolume.items -}} - {{- $errorMessage := printf "CVR(s) {%s} version not in 0.8.2" .ListItems.cvrVersionList.version -}} - {{- $verifyErrMessage := "cStor volume replica(s) version is not 0.8.2" -}} - {{- $inVersion := "0.8.2" }} - {{- $baseVersion := .Config.baseVersion.value -}} - {{- $targetVersion := .Config.targetVersion.value -}} - - {{- $versionList := jsonpath .JsonResult "{range .items[*]}pkey=version,{@.metadata.name}={@.metadata.labels.openebs\\.io/version};{end}" | trim | default "" | splitList ";" -}} - {{- $versionList | keyMap "cvrVersionList" .ListItems | noop -}} - - {{- range $cvr, $version := .ListItems.cvrVersionList.version }} - {{- $isNotBaseVersion := ne $version $baseVersion -}} - {{- $isNotTargetVersion := ne $version $targetVersion -}} - {{- if and $isNotBaseVersion $isNotTargetVersion }} - - {{- $inVersion = "false" }} - {{- end }} - {{- end }} - - {{- if contains "0.8.2" $inVersion }} - {{- $message = $successMessage -}} - {{- $status =.Config.successStatus.value }} - - {{- else }} - {{- $message = $errorMessage -}} - {{- $status =.Config.failStatus.value }} - {{- end }} - - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-volume-082-090-list-cstorvolumereplicas" -}} - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} - - {{- if eq $status .Config.failStatus.value }} - {{- verifyErr $verifyErrMessage true | saveAs "listCStorVolumeReplica.verifyErr" .TaskResult | noop -}} - {{- end }} ---- -# Runtask #7 -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-volume-082-090-patch-target-deployment-latest-image - namespace: default -spec: - meta: | - id: patchTargetDeployment - apiVersion: extensions/v1beta1 - kind: Deployment - runNamespace: {{ .UpgradeItem.namespace }} - objectName: {{ .TaskResult.listTargetDeployment.targetDeploymentName }} - action: patch - task: |- - type: strategic - pspec: |- - metadata: - labels: - openebs.io/version: {{ .Config.targetVersion.value}} - spec: - template: - metadata: - labels: - openebs.io/version: {{ 
.Config.targetVersion.value }} - spec: - containers: - - name: cstor-istgt - image: quay.io/openebs/cstor-istgt:{{ .Config.targetVersion.value}} - - name: maya-volume-exporter - image: quay.io/openebs/m-exporter:{{ .Config.targetVersion.value}} - - name: cstor-volume-mgmt - image: quay.io/openebs/cstor-volume-mgmt:{{ .Config.targetVersion.value}} - post: |- - {{- $message := printf "Successfully patched target deployment {%s}." .TaskResult.listTargetDeployment.targetDeploymentName -}} - {{- $status := .Config.successStatus.value -}} - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-volume-082-090-patch-target-deployment-latest-image" -}} - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} - ---- -# Runtask #8 -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-volume-082-090-post-check-deployment-rollout-status-latest-image - namespace: default -spec: - meta: | - id: postCheckDeploymentRollout - apiVersion: extensions/v1beta1 - kind: Deployment - action: rolloutstatus - objectName: {{ .TaskResult.listTargetDeployment.targetDeploymentName }} - runNamespace: {{ .UpgradeItem.namespace }} - retry: "20,20s" - post: | - {{- jsonpath .JsonResult "{.isRolledout}" | trim | saveAs "postCheckDeploymentRollout.isRolledout" .TaskResult | noop -}} - {{- jsonpath .JsonResult "{.message}" | trim | saveAs "postCheckDeploymentRollout.rolloutStatus" .TaskResult | noop -}} - - {{- $status := "" -}} - {{- $verifyErrMessage := "Target deployment roll out not successful" -}} - - {{- if eq .TaskResult.postCheckDeploymentRollout.isRolledout "true" }} - {{- $status = .Config.successStatus.value -}} - {{- else }} - {{- "waiting for deployment rollout" | saveAs "postCheckDeploymentRollout.verifyErr" .TaskResult | noop -}} - {{- $status = .Config.failStatus.value -}} - {{- end }} - - {{- $message := printf "%s name: {%s} namespace: {%s}" .TaskResult.patchDeploymentImageStatus.rolloutStatus .UpgradeItem.name .UpgradeItem.namespace -}} - - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-volume-082-090-post-check-deployment-rollout-status-latest-image" -}} - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- if eq $status .Config.failStatus.value }} - {{- verifyErr $verifyErrMessage true | saveAs "postCheckDeploymentRollout.verifyErr" .TaskResult | noop -}} - {{- end }} ---- -# Runtask #9 -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-volume-082-090-post-check-patch-deployment-image - namespace: default -spec: - meta: | - id: postCheckDeploymentImagePatch - apiVersion: extensions/v1beta1 - kind: Deployment - action: get - objectName: {{ 
.TaskResult.listTargetDeployment.targetDeploymentName }}
-    runNamespace: {{ .UpgradeItem.namespace }}
-  post: |
-    {{- jsonpath .JsonResult "{.spec.template.spec.containers[?(@.name=='cstor-istgt')].image}" | trim | saveAs "postCheckDeploymentImagePatch.istgtimage" .TaskResult | noop -}}
-    {{- jsonpath .JsonResult "{.spec.template.spec.containers[?(@.name=='maya-volume-exporter')].image}" | trim | saveAs "postCheckDeploymentImagePatch.exporterimage" .TaskResult | noop -}}
-    {{- jsonpath .JsonResult "{.spec.template.spec.containers[?(@.name=='cstor-volume-mgmt')].image}" | trim | saveAs "postCheckDeploymentImagePatch.volmgmtimage" .TaskResult | noop -}}
-
-    {{ $isNewIstgt := contains .Config.targetVersion.value .TaskResult.postCheckDeploymentImagePatch.istgtimage }}
-    {{ $isNewExporter := contains .Config.targetVersion.value .TaskResult.postCheckDeploymentImagePatch.exporterimage }}
-    {{ $isNewVolMgmt := contains .Config.targetVersion.value .TaskResult.postCheckDeploymentImagePatch.volmgmtimage }}
-
-    {{- $status := "" -}}
-    {{- $verifyErrMessage := "Target pod container images not found in target version" -}}
-
-    {{- if and $isNewIstgt $isNewExporter $isNewVolMgmt }}
-    {{- $status = .Config.successStatus.value -}}
-    {{- else }}
-    {{- $status = .Config.failStatus.value -}}
-    {{- end }}
-
-    {{- $message := printf "istgt image :{%s} volume exporter image :{%s} volume mgmt image : {%s}" .TaskResult.postCheckDeploymentImagePatch.istgtimage .TaskResult.postCheckDeploymentImagePatch.exporterimage .TaskResult.postCheckDeploymentImagePatch.volmgmtimage -}}
-
-    {{- $taskStatus := upgradeResultWithTaskStatus $status -}}
-    {{- $taskMessage := upgradeResultWithTaskMessage $message -}}
-    {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-volume-082-090-post-check-patch-deployment-image" -}}
-    {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}}
-    {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}}
-    {{- $taskStartTime := upgradeResultWithTaskStartTime now -}}
-    {{- $taskEndTime := upgradeResultWithTaskEndTime now -}}
-    {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}}
-
-    {{- if eq $status .Config.failStatus.value }}
-    {{- verifyErr $verifyErrMessage true | saveAs "postCheckDeploymentImagePatch.verifyErr" .TaskResult | noop -}}
-    {{- end }}
----
-# Runtask #10
-apiVersion: openebs.io/v1alpha1
-kind: RunTask
-metadata:
-  name: upgrade-cstor-volume-082-090-patch-target-svc
-  namespace: default
-spec:
-  meta: |
-    id: patchTargetSVC
-    apiVersion: v1
-    kind: Service
-    runNamespace: {{ .UpgradeItem.namespace }}
-    objectName: {{ .TaskResult.listTargetService.items }}
-    action: patch
-  task: |-
-    type: merge
-    pspec: |-
-      metadata:
-        labels:
-          openebs.io/persistent-volume-claim: {{ .TaskResult.listTargetDeployment.pvcName }}
-          openebs.io/version: {{ .Config.targetVersion.value }}
-  post: |-
-    {{- $message := printf "version label successfully patched for target service {%s}."
.TaskResult.listTargetService.items -}} - - {{- $status := .Config.successStatus.value -}} - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-volume-082-090-patch-target-svc" -}} - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} - ---- -# Runtask #11 -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-volume-082-090-post-check-target-svc - namespace: default -spec: - meta: | - id: postCheckTargetSVC - apiVersion: v1 - kind: Service - action: get - objectName: {{ .TaskResult.listTargetService.items }} - runNamespace: {{ .UpgradeItem.namespace }} - post: | - {{- jsonpath .JsonResult "{.metadata.labels.openebs\\.io/version}" | trim | saveAs "postCheckTargetSVC.version" .TaskResult | noop -}} - - - {{- $status := "" -}} - {{- if ne .TaskResult.postCheckTargetSVC.version .Config.targetVersion.value }} - {{- $status = .Config.failStatus.value -}} - {{- else }} - {{- $status = .Config.successStatus.value -}} - {{- end }} - {{- $message := printf "version label value for target service- {%s}" .TaskResult.postCheckTargetSVC.version -}} - - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-volume-082-090-post-check-target-svc" -}} - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} ---- -# Runtask #12 -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-volume-082-090-patch-cstor-volume - namespace: default -spec: - meta: | - id: patchCStorVolume - apiVersion: openebs.io/v1alpha1 - kind: CStorVolume - runNamespace: {{ .UpgradeItem.namespace }} - objectName: {{ .TaskResult.listCStorVolume.items }} - action: patch - task: |- - type: merge - pspec: |- - metadata: - labels: - {{- if eq .TaskResult.listCStorVolumeReplica.isCloned "true" }} - openebs.io/source-volume: {{ .TaskResult.listCStorVolumeReplica.sourceVolume }} - {{- end }} - openebs.io/version: {{ .Config.targetVersion.value }} - post: | - {{- $message := printf "version label successfully patched for cstor volume {%s}." 
.TaskResult.listCStorVolume.items -}} - - {{- $status := .Config.successStatus.value -}} - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-volume-082-090-patch-cstor-volume" -}} - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} - ---- -# Runtask #13 -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-volume-082-090-post-check-cstor-volume-cr - namespace: default -spec: - meta: | - id: postCheckCStorVolume - apiVersion: openebs.io/v1alpha1 - kind: CStorVolume - action: get - objectName: {{ .TaskResult.listCStorVolume.items }} - runNamespace: {{ .UpgradeItem.namespace }} - post: | - {{- jsonpath .JsonResult "{.metadata.labels.openebs\\.io/version}" | trim | saveAs "postCheckCStorVolume.version" .TaskResult | noop -}} - - {{- $status := "" -}} - {{- if ne .TaskResult.postCheckCStorVolume.version .Config.targetVersion.value }} - {{- $status = .Config.failStatus.value -}} - {{- else }} - {{- $status = .Config.successStatus.value -}} - {{- end }} - {{- $message := printf "version label value for target service- {%s}" .TaskResult.postCheckCStorVolume.version -}} - - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-volume-082-090-post-check-cstor-volume-cr" -}} - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} ---- -# Runtask #14 -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-volume-082-090-patch-cstor-volume-replica - namespace: default -spec: - meta: | - {{- $cstorVolReplicaList := .TaskResult.listCStorVolumeReplica.items | default "" | splitList " " -}} - id: patchCStorVolumeReplica - apiVersion: openebs.io/v1alpha1 - kind: CStorVolumeReplica - runNamespace: {{ .UpgradeItem.namespace }} - action: patch - repeatWith: - metas: - {{- range $k, $cvr := $cstorVolReplicaList }} - - objectName: {{ $cvr }} - {{- end }} - task: |- - type: merge - pspec: |- - metadata: - labels: - openebs.io/version: {{ .Config.targetVersion.value }} - post: | - - {{- $message := printf "successfully patched cStor volume replicas {%s}." 
.TaskResult.listCStorVolumeReplica.items -}} - - {{- $status := .Config.successStatus.value -}} - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-volume-082-090-patch-cstor-volume-replica" -}} - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} - ---- -# Runtask #15 -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-volume-082-090-post-check-cstor-volume-replicas - namespace: default -spec: - meta: | - id: postCheckCStorVolumeReplica - apiVersion: openebs.io/v1alpha1 - kind: CStorVolumeReplica - action: list - runNamespace: {{ .UpgradeItem.namespace }} - options: |- - labelSelector: openebs.io/persistent-volume={{ .UpgradeItem.name }} - post: | - {{- jsonpath .JsonResult "{.items[*].metadata.labels.openebs\\.io/version}" | trim | saveAs "postCheckCStorVolumeReplica.version" .TaskResult | noop -}} - - {{- $message := printf "CVRs {%s} version got successfully updated" .TaskResult.listCStorVolumeReplica.items -}} - {{- $status :="" }} - {{- $verifyErrMessage := "cStor volume replica(s) version is not 0.9.0" -}} - {{- $errorMessage := printf "CVRs {%s} version update failed" .TaskResult.listCStorVolumeReplica.items -}} - - {{- if contains .Config.baseVersion.value .TaskResult.postCheckCStorVolumeReplica.version }} - {{- $status =.Config.failStatus.value }} - {{- $message = $errorMessage -}} - {{- else }} - {{- $status =.Config.successStatus.value }} - {{- end }} - - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-volume-082-090-post-check-cstor-volume-replicas" -}} - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} - - {{- if eq $status .Config.failStatus.value }} - {{- verifyErr $verifyErrMessage true | saveAs "postCheckCStorVolumeReplica.verifyErr" .TaskResult | noop -}} - {{- end }} ---- -# Runtask #16 -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-volume-082-090-list-target-replicaset - namespace: default -spec: - meta: | - id: listReplicaSet - apiVersion: extensions/v1beta1 - kind: ReplicaSet - action: list - runNamespace: {{ .UpgradeItem.namespace }} - options: |- - labelSelector: openebs.io/persistent-volume={{ .UpgradeItem.name }},app=cstor-volume-manager - post: | - {{- $CustomJsonpath := printf "{range .items[?(@.metadata.ownerReferences[0].name== '%s-target')]}{@.metadata.name} {end}" .UpgradeItem.name -}} - {{- jsonpath .JsonResult $CustomJsonpath | trim | replace " " "," | saveAs "listReplicaSet.list" .TaskResult | noop -}} - - {{- $status := .Config.successStatus.value -}} - {{- $message := printf "ReplicaSet list {%s}" 
.TaskResult.listReplicaSet.list -}} - - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-volume-082-090-list-target-replicaset" -}} - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} ---- -# Runtask #17 -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-volume-082-090-list-target-pod - namespace: default -spec: - meta: | - id: listPod - apiVersion: v1 - kind: Pod - action: list - runNamespace: {{ .UpgradeItem.namespace }} - options: |- - labelSelector: openebs.io/persistent-volume={{ .UpgradeItem.name }},app=cstor-volume-manager - post: | - {{- $CustomJsonpath := printf "{range .items[?(@.metadata.ownerReferences[0].name== '%s')]}{@.metadata.name}{end}" .TaskResult.newReplicaset.name -}} - {{- $podrsPairs := jsonpath .JsonResult "{range .items[*]}{@.metadata.name},{@.metadata.ownerReferences[0].name} {end}" | trim | default "" | splitList " " -}} - {{- $podrsPairs| saveAs "listPod.map" .TaskResult -}} - - {{ $podName := "" }} - {{ $replicasetName := "" }} - {{ $replicaset := "" }} - {{ $match := "" }} - {{ $status := .Config.successStatus.value }} - {{ $replicasetList := .TaskResult.listReplicaSet.list | splitList "," }} - - {{- range $k, $v := .TaskResult.listPod.map }} - {{ $k := $k }} - {{ $v := $v }} - {{- $replicaset = $v | splitList "," | last -}} - {{- $match := pickContains $replicaset $replicasetList -}} - - {{- if ne $match "" }} - {{ $podName = $v | splitList "," | first }} - {{ $replicasetName = $v | splitList "," | last }} - {{- end }} - - {{ $match = "" }} - - {{- end }} - - {{ $staleReplicaset := .TaskResult.listReplicaSet.list | replace $replicasetName ""}} - {{ $staleReplicaset = $staleReplicaset | replace ",," "," }} - {{ $staleReplicaset = $staleReplicaset | replace "," " " | trim }} - {{ $staleReplicaset = $staleReplicaset | replace " " "," }} - - {{- $podName | saveAs "listPod.podName" .TaskResult -}} - {{- $replicasetName | saveAs "listPod.replicasetName" .TaskResult -}} - {{- $staleReplicaset | saveAs "listPod.staleReplicaset" .TaskResult -}} - - {{- $message := printf "pool Pod-ReplicaSet map: {%s}\nstale ReplicaSet list: {%s}" .TaskResult.listPod.map .TaskResult.listPod.staleReplicaset -}} - - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-volume-082-090-list-target-pod" -}} - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} ---- -# Runtask #18 -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-volume-082-090-delete-stale-replicaset-target - namespace: default -spec: - meta: | - id: deleteStaleReplicaSet - 
apiVersion: extensions/v1beta1 - kind: ReplicaSet - action: delete - runNamespace: {{ .UpgradeItem.namespace }} - objectName: {{ .TaskResult.listPod.staleReplicaset }} - disable: {{ eq .TaskResult.listPod.staleReplicaset "" }} - post: | - {{- $message := printf "stale replicaset {%s} successfully deleted in {%s} namespace." .TaskResult.listPod.staleReplicaset .UpgradeItem.namespace -}} - - {{- $taskStatus := upgradeResultWithTaskStatus .Config.successStatus.value -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-volume-082-090-delete-stale-replicaset-target" -}} - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} ---- -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-volume-082-090-list-volumesnapshot - namespace: default -spec: - meta: | - id: listVolumeSnapshotDetails - apiVersion: v1 - kind: VolumeSnapshot - action: list - options: |- - labelSelector: SnapshotMetadata-PVName={{ .UpgradeItem.name }} - post: | - {{- jsonpath .JsonResult "{.items[*].spec.snapshotDataName}" | trim | saveAs "listVolumeSnapshotDetails.snapshotDataNames" .TaskResult | noop -}} - - {{- .TaskResult.listVolumeSnapshotDetails.snapshotDataNames | toString | saveAs "listVolumeSnapshotDetails.volumeSnapshotData" .TaskResult | noop -}} - {{- if eq .TaskResult.listVolumeSnapshotDetails.volumeSnapshotData "" }} - {{- printf "false" | saveAs "listVolumeSnapshotDetails.isExist" .TaskResult }} - {{- else }} - {{- printf "true" | saveAs "listVolumeSnapshotDetails.isExist" .TaskResult }} - {{- end }} - - {{- $message := printf "details of volumesnapshotdata {%s} for volume: {%s}" .TaskResult.listVolumeSnapshotDetails.snapshotDataNames .UpgradeItem.name -}} - {{- $status :=.Config.successStatus.value -}} - - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-volume-082-090-list-volumesnapshot" -}} - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} ---- -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-cstor-volume-082-090-patch-volumesnapshotdatadata - namespace: default -spec: - meta: | - {{- $snapshotDataList := .TaskResult.listVolumeSnapshotDetails.snapshotDataNames | default "" | splitList " " -}} - id: patchSnapData - apiVersion: v1 - kind: VolumeSnapshotData - action: patch - disable: {{ eq .TaskResult.listVolumeSnapshotDetails.isExist "false" }} - repeatWith: - metas: - {{- range $k, $snapData := $snapshotDataList }} - - objectName: {{ $snapData }} - {{- end }} - task: |- - type: merge - pspec: |- - spec: - openebsVolume: - capacity: {{ .TaskResult.getVolDetails.pvCapacity }} - post: | - {{- $message := printf "volume snapshotdatas 
are patched with capacity: {%s}" .TaskResult.getVolDetails.pvCapacity -}}
-    {{- $status := .Config.successStatus.value -}}
-
-    {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}}
-    {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}}
-    {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-volume-082-090-patch-volumesnapshotdatadata" -}}
-    {{- $taskStatus := upgradeResultWithTaskStatus $status -}}
-    {{- $taskMessage := upgradeResultWithTaskMessage $message -}}
-    {{- $taskStartTime := upgradeResultWithTaskStartTime now -}}
-    {{- $taskEndTime := upgradeResultWithTaskEndTime now -}}
-    {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}}
----
-apiVersion: openebs.io/v1alpha1
-kind: RunTask
-metadata:
-  name: upgrade-cstor-volume-082-090-post-check-volumesnapshotdata
-  namespace: default
-spec:
-  meta: |
-    {{- $snapshotDataList := .TaskResult.listVolumeSnapshotDetails.snapshotDataNames | default "" | splitList " " -}}
-    id: getVolumeSnapshotDataDetails
-    apiVersion: v1
-    kind: VolumeSnapshotData
-    action: get
-    disable: {{ eq .TaskResult.listVolumeSnapshotDetails.isExist "false" }}
-    repeatWith:
-      metas:
-      {{- range $k, $snapData := $snapshotDataList }}
-      - objectName: {{ $snapData }}
-      {{- end }}
-  post: |
-    {{- jsonpath .JsonResult "{.spec.openebsVolume.capacity}" | trim | saveAs "getVolumeSnapshotDataDetails.capacity" .TaskResult | noop -}}
-
-    {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}}
-    {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}}
-    {{- $taskName := upgradeResultWithTaskName "upgrade-cstor-volume-082-090-post-check-volumesnapshotdata" -}}
-    {{- $taskStartTime := upgradeResultWithTaskStartTime now -}}
-    {{- $taskEndTime := upgradeResultWithTaskEndTime now -}}
-
-    {{- $status := "" -}}
-    {{- $message := "" -}}
-
-    {{- if eq .TaskResult.getVolumeSnapshotDataDetails.capacity .TaskResult.getVolDetails.pvCapacity }}
-    {{- $status = .Config.successStatus.value -}}
-    {{- $message = printf "patched volume snapshot data successfully" -}}
-    {{- else }}
-    {{- $status = .Config.failStatus.value -}}
-    {{- $message = printf "failed to patch volume snapshot data" -}}
-    {{- end }}
-
-    {{- $taskStatus := upgradeResultWithTaskStatus $status -}}
-    {{- $taskMessage := upgradeResultWithTaskMessage $message -}}
-    {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}}
diff --git a/k8s/upgrades/0.8.2-0.9.0/cstor/pool-upgrade-job.yaml b/k8s/upgrades/0.8.2-0.9.0/cstor/pool-upgrade-job.yaml
deleted file mode 100644
index b0be646f38..0000000000
--- a/k8s/upgrades/0.8.2-0.9.0/cstor/pool-upgrade-job.yaml
+++ /dev/null
@@ -1,61 +0,0 @@
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: pool-upgrade-config
-  namespace: default
-data:
-  upgrade: |
-    casTemplate: cstor-pool-update-082-090
-    # Enter the names of the cstorpool custom resources
-    # to upgrade the cStor pools.
-
-    # You can choose to upgrade either one cStor pool
-    # or more than one.
-    resources:
-    # Put the name of the cstor pool resource that you
-    # want to upgrade.
-    # Command to view the cstorpool resources:
-    # `kubectl get csp`
-    - name: cstor-sparse-pool-dwc3
-      kind: cStorPool
-      namespace: openebs
-    # Similarly, you can fill in the details below for other cstorpool
-    # upgrades.
-    # If not required, delete them.
-    - name: cstor-sparse-pool-efx5
-      kind: cStorPool
-      namespace: openebs
-
-    - name: cstor-sparse-pool-l0ti
-      kind: cStorPool
-      namespace: openebs
----
-apiVersion: batch/v1
-kind: Job
-metadata:
-  name: spc-cstor-pool-upgrade
-spec:
-  template:
-    spec:
-      serviceAccountName: super-admin
-      containers:
-      - name: upgrade
-        image: openebs/m-upgrade:0.9.0
-        volumeMounts:
-        - name: config
-          mountPath: /etc/config
-          readOnly: true
-        env:
-        - name: OPENEBS_IO_POD_NAME
-          valueFrom:
-            fieldRef:
-              fieldPath: metadata.name
-        - name: OPENEBS_IO_POD_NAMESPACE
-          valueFrom:
-            fieldRef:
-              fieldPath: metadata.namespace
-      volumes:
-      - name: config
-        configMap:
-          name: pool-upgrade-config
-      restartPolicy: Never
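For reference, the removed pool-upgrade manifests above were consumed as a ConfigMap-plus-Job pair. A minimal sketch of that old workflow, assuming the file is still available locally and the `super-admin` ServiceAccount it references exists:

```sh
# List the cStor pools whose names go into pool-upgrade-config
kubectl get csp

# Create the ConfigMap and the upgrade Job defined above
kubectl apply -f pool-upgrade-job.yaml

# Follow the upgrade and confirm the Job completed
kubectl logs -f job/spc-cstor-pool-upgrade
kubectl get job spc-cstor-pool-upgrade -o jsonpath='{.status.succeeded}'
```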
diff --git a/k8s/upgrades/0.8.2-0.9.0/cstor/volume-upgrade-job.yaml b/k8s/upgrades/0.8.2-0.9.0/cstor/volume-upgrade-job.yaml
deleted file mode 100644
index 65bca4487e..0000000000
--- a/k8s/upgrades/0.8.2-0.9.0/cstor/volume-upgrade-job.yaml
+++ /dev/null
@@ -1,49 +0,0 @@
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: cstor-upgrade-config
-  namespace: default
-data:
-  upgrade: |
-    casTemplate: cstor-volume-update-082-090
-    resources:
-    # Enter the cstorvolume cr name to upgrade the cStor volume.
-    # Command to get the cstorvolume is: kubectl get cstorvolume -n openebs
-
-    # Also, the name of the cstorvolume custom resource and the
-    # pv are the same, so one can enter the pv name as well.
-    # Command: kubectl get pv
-    - name: pvc-273352ab-7881-11e9-a8d4-42010a80004f
-      kind: cstor-volume
-      namespace: openebs
----
-apiVersion: batch/v1
-kind: Job
-metadata:
-  name: cstor-volume-upgrade
-spec:
-  template:
-    spec:
-      serviceAccountName: super-admin
-      containers:
-      - name: upgrade
-        image: openebs/m-upgrade:0.9.0
-        imagePullPolicy: Always
-        volumeMounts:
-        - name: config
-          mountPath: /etc/config
-          readOnly: true
-        env:
-        - name: OPENEBS_IO_POD_NAME
-          valueFrom:
-            fieldRef:
-              fieldPath: metadata.name
-        - name: OPENEBS_IO_POD_NAMESPACE
-          valueFrom:
-            fieldRef:
-              fieldPath: metadata.namespace
-      volumes:
-      - name: config
-        configMap:
-          name: cstor-upgrade-config
-      restartPolicy: Never
diff --git a/k8s/upgrades/0.8.2-0.9.0/jiva/cr.yaml b/k8s/upgrades/0.8.2-0.9.0/jiva/cr.yaml
deleted file mode 100644
index fbcc65d298..0000000000
--- a/k8s/upgrades/0.8.2-0.9.0/jiva/cr.yaml
+++ /dev/null
@@ -1,15 +0,0 @@
----
-apiVersion: apiextensions.k8s.io/v1beta1
-kind: CustomResourceDefinition
-metadata:
-  name: upgraderesults.openebs.io
-spec:
-  group: openebs.io
-  names:
-    kind: UpgradeResult
-    plural: upgraderesults
-    shortNames:
-    - uresult
-    singular: upgraderesult
-  scope: Namespaced
-  version: v1alpha1
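Each run of these upgrade Jobs recorded its per-task progress in an UpgradeResult custom resource, defined by the CRD deleted above. A sketch of how those results could be inspected, using the `uresult` short name the CRD registers (the result name below is a placeholder):

```sh
# UpgradeResult objects are namespaced; the runtasks write them to the
# namespace passed as upgradeResultNamespace (default in these samples)
kubectl get uresult -n default

# Inspect the recorded task statuses and messages for one run
kubectl get uresult <upgrade-result-name> -n default -o yaml
```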
diff --git a/k8s/upgrades/0.8.2-0.9.0/jiva/jiva_upgrade_runtask.yaml b/k8s/upgrades/0.8.2-0.9.0/jiva/jiva_upgrade_runtask.yaml
deleted file mode 100644
index 4fd28dade8..0000000000
--- a/k8s/upgrades/0.8.2-0.9.0/jiva/jiva_upgrade_runtask.yaml
+++ /dev/null
@@ -1,1037 +0,0 @@
-# Sample Runtask for upgrading a jiva volume
-
-# CASTemplate jiva-volume-update-0.8.2-0.9.0 is
-# used to upgrade a jiva volume
-apiVersion: openebs.io/v1alpha1
-kind: CASTemplate
-metadata:
-  name: jiva-volume-update-0.8.2-0.9.0
-spec:
-  defaultConfig:
-  # Base version is the version from which an upgrade can happen.
-  # This CAS template does not support upgrading an OpenEBS version
-  # that is anything other than the specified base version.
-  # Using this CAS template, one can upgrade from OpenEBS version
-  # 0.8.2 to 0.9.0 only.
-  - name: baseVersion
-    value: "0.8.2"
-  - name: targetVersion
-    value: "0.9.0"
-  - name: successStatus
-    value: "Success"
-  - name: failStatus
-    value: "Fail"
-  run:
-    tasks:
-    # This runtask will patch the upgraderesults cr with the details of the pv
-    # which is undergoing upgrade
-    - upgrade-jiva-volume-0.8.2-0.9.0-patch-upgrade-results
-
-    # This runtask will get the volume details of the given pv
-    - upgrade-jiva-volume-0.8.2-0.9.0-get-volume-details
-
-    # This runtask will get the related StorageClass details for the given PV
-    - upgrade-jiva-volume-0.8.2-0.9.0-get-sc-res-version
-
-    # This runtask will get the details of the jiva target deployment and
-    # verify that its current version is the expected base version
-    - upgrade-jiva-volume-0.8.2-0.9.0-get-list-ctrl-deployment
-
-    # This runtask will get the details of the jiva replica deployment and
-    # verify that its current version is the expected base version
-    - upgrade-jiva-volume-0.8.2-0.9.0-get-list-rep-deployment
-
-    # This runtask will check the status of the target pod
-    - upgrade-jiva-volume-0.8.2-0.9.0-pre-check-ctrl-pod-phase
-
-    # This runtask will check the status of the replica pods
-    - upgrade-jiva-volume-0.8.2-0.9.0-pre-check-replica-pod-phase
-
-    # This runtask will get the details of the jiva controller (aka target)
-    # service and verify that its current version is the expected base version
-    - upgrade-jiva-volume-0.8.2-0.9.0-get-list-ctrl-svc
-
-    # This runtask will list all the replicasets of the jiva
-    # target deployment
-    - upgrade-jiva-volume-0.8.2-0.9.0-get-list-ctrl-old-rs
-
-    # This runtask will list all the replicasets of the jiva
-    # replica deployment
-    - upgrade-jiva-volume-0.8.2-0.9.0-get-list-rep-old-rs
-
-    # This runtask will patch the jiva replica deployment containers with the
-    # target version and other changes required for the upgrade.
-    - upgrade-jiva-volume-0.8.2-0.9.0-patch-rep-deployment-latest-version
-
-    # This runtask will verify that the jiva replica deployment containers
-    # have been rolled out successfully after the patch.
-    - upgrade-jiva-volume-0.8.2-0.9.0-post-check-rep-deployment-status-latest-version
-
-    # This runtask will verify that the jiva replica deployment containers
-    # that are running successfully have the appropriate target version.
-    - upgrade-jiva-volume-0.8.2-0.9.0-post-check-rep-deployment-image
-
-    # This runtask will delete the stale jiva replica deployment replicasets
-    - upgrade-jiva-volume-0.8.2-0.9.0-delete-old-rep-rs
-
-    # This runtask will patch the jiva target deployment containers with the
-    # target version and other changes required for the upgrade.
-    - upgrade-jiva-volume-0.8.2-0.9.0-patch-ctrl-deployment-latest-version
-
-    # This runtask will verify that the jiva target deployment containers
-    # have been rolled out successfully after the patch.
-    - upgrade-jiva-volume-0.8.2-0.9.0-post-check-ctrl-deployment-status-latest-version
-
-    # This runtask will verify that the jiva target deployment containers
-    # that are running successfully have the appropriate target version.
-    - upgrade-jiva-volume-0.8.2-0.9.0-post-check-ctrl-deployment-image
-
-    # This runtask will patch the jiva target service
-    # with the target version label.
-    - upgrade-jiva-volume-0.8.2-0.9.0-patch-ctrl-svc
-
-    # This runtask will check that the version label has been
-    # successfully updated for the jiva target service.
-    - upgrade-jiva-volume-0.8.2-0.9.0-post-check-ctrl-svc
-
-    # This runtask will delete the stale jiva target deployment replicasets
-    - upgrade-jiva-volume-0.8.2-0.9.0-delete-old-ctrl-rs
-
-    # This runtask will list the snapshots related to this volume
-    - upgrade-jiva-volume-0.8.2-0.9.0-list-volumesnapshot
-
-    # This runtask will patch the snapshotdata with the volume capacity
-    - upgrade-jiva-volume-0.8.2-0.9.0-patch-volumesnapshotdatadata
-
-    # This runtask will post-check the patched snapshotdata
-    - upgrade-jiva-volume-0.8.2-0.9.0-post-check-volumesnapshotdata
-  taskNamespace: default
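The get-list-ctrl-deployment and get-list-rep-deployment pre-checks above gate the whole run on the volume's current version label. A rough hand-run equivalent of that check, assuming `$PV` holds the PersistentVolume name and `$NS` the namespace the jiva deployments run in:

```sh
# Same label selectors the pre-check runtasks use; the upgrade proceeds
# only when the printed version is the base (0.8.2) or target (0.9.0)
kubectl get deploy -n "$NS" \
  -l "openebs.io/persistent-volume=$PV,openebs.io/controller=jiva-controller" \
  -o jsonpath='{.items[*].metadata.labels.openebs\.io/version}'

kubectl get deploy -n "$NS" \
  -l "openebs.io/persistent-volume=$PV,openebs.io/replica=jiva-replica" \
  -o jsonpath='{.items[*].metadata.labels.openebs\.io/version}'
```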
----
-## This will patch the upgrade result CR
-# with basic details such as name, namespace and kind
-apiVersion: openebs.io/v1alpha1
-kind: RunTask
-metadata:
-  name: upgrade-jiva-volume-0.8.2-0.9.0-patch-upgrade-results
-  namespace: default
-spec:
-  meta: |
-    id: patchResult
-    apiVersion: openebs.io/v1alpha1
-    kind: UpgradeResult
-    action: patch
-    objectName: {{ .UpgradeItem.upgradeResultName }}
-    runNamespace: {{ .UpgradeItem.upgradeResultNamespace }}
-  task: |-
-    type: merge
-    pspec: |-
-      status:
-        resource:
-          name: {{ .UpgradeItem.name }}
-          namespace: {{ .UpgradeItem.namespace }}
-          kind: {{ .UpgradeItem.kind }}
-  post: |
-    {{- $message := printf "patched UpgradeResult {%s} with name and namespace of the resource to be upgraded" .UpgradeItem.upgradeResultName -}}
-    {{- $status := .Config.successStatus.value -}}
-
-    {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}}
-    {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}}
-    {{- $taskName := upgradeResultWithTaskName "upgrade-jiva-volume-0.8.2-0.9.0-patch-upgrade-results" -}}
-    {{- $taskStatus := upgradeResultWithTaskStatus $status -}}
-    {{- $taskMessage := upgradeResultWithTaskMessage $message -}}
-    {{- $taskStartTime := upgradeResultWithTaskStartTime now -}}
-    {{- $taskEndTime := upgradeResultWithTaskEndTime now -}}
-    {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}}
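The next runtask pulls the CAS type, StorageClass, claim reference, and capacity off the PersistentVolume and fails fast if any of them is missing. The same lookups expressed directly with kubectl, with `$PV` as a hypothetical PV name:

```sh
kubectl get pv "$PV" -o jsonpath='{.metadata.annotations.openebs\.io/cas-type}'
kubectl get pv "$PV" -o jsonpath='{.spec.storageClassName} {.spec.claimRef.namespace}/{.spec.claimRef.name} {.spec.capacity.storage}'
```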
"{.spec.capacity.storage}" | trim | saveAs "getVolDetails.pvCapacity" .TaskResult | noop -}} - {{- .TaskResult.getVolDetails.pvCapacity | notFoundErr "pv capacity not for given volume" | saveIf "getVolDetails.notFoundErr" .TaskResult | noop -}} - - {{- $message := printf "details of volume {%s}: volCASType: {%s}, storageClassName: {%s}, pvcName: {%s}, pvcNamespace: {%s}" .UpgradeItem.name .TaskResult.getVolDetails.volCASType .TaskResult.getVolDetails.scName .TaskResult.getVolDetails.pvcName .TaskResult.getVolDetails.pvcNamespace -}} - {{- $status :=.Config.successStatus.value -}} - - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-jiva-volume-0.8.2-0.9.0-get-volume-details" -}} - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} ---- -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-jiva-volume-0.8.2-0.9.0-get-sc-res-version - namespace: default -spec: - meta: | - id: getSCDetails - apiVersion: storage.k8s.io/v1 - kind: StorageClass - action: get - objectName: {{ .TaskResult.getVolDetails.scName }} - post: | - {{- jsonpath .JsonResult "{.metadata.resourceVersion}" | trim | saveAs "getSCDetails.scResVersion" .TaskResult | noop -}} - {{- .TaskResult.getSCDetails.scResVersion | notFoundErr "sc resource version not found" | saveIf "getSCDetails.notFoundErr" .TaskResult | noop -}} - - {{- $message := printf "resource version for StorageClass {%s}: {%s}" .TaskResult.getVolDetails.scName .TaskResult.getSCDetails.scResVersion -}} - {{- $status := .Config.successStatus.value -}} - - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-jiva-volume-0.8.2-0.9.0-get-sc-res-version" -}} - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} ---- -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-jiva-volume-0.8.2-0.9.0-get-list-ctrl-deployment - namespace: default -spec: - meta: | - id: listTargetDeployment - apiVersion: extensions/v1beta1 - kind: Deployment - action: list - runNamespace: {{ .UpgradeItem.namespace }} - options: |- - labelSelector: openebs.io/persistent-volume={{ .UpgradeItem.name }},openebs.io/controller=jiva-controller - post: | - {{- jsonpath .JsonResult "{.items[*].metadata.name}" | trim | saveAs "listTargetDeployment.deploymentName" .TaskResult | noop -}} - {{- .TaskResult.listTargetDeployment.deploymentName | notFoundErr "volume target deployment not found" | saveIf "listTargetDeployment.notFoundErr" .TaskResult | noop -}} - - {{- jsonpath .JsonResult "{.items[*].metadata.labels.openebs\\.io/version}" | trim | saveAs "listTargetDeployment.version" .TaskResult | 
noop -}} - {{- .TaskResult.listTargetDeployment.version | notFoundErr "unknown openebs version" | saveIf "listTargetDeployment.notFoundErr" .TaskResult | noop -}} - - {{- jsonpath .JsonResult "{.items[*].spec.replicas}" | trim | saveAs "listTargetDeployment.replicaCount" .TaskResult | noop -}} - {{- .TaskResult.listTargetDeployment.replicaCount | notFoundErr "replicas not found for jiva controller deployment" | saveIf "listTargetDeployment.notFoundErr" .TaskResult | noop -}} - - {{- $message := "" -}} - {{- $status := "" -}} - - {{- $isVersionBase := eq .Config.baseVersion.value .TaskResult.listTargetDeployment.version -}} - {{- $isVersionTarget := eq .Config.targetVersion.value .TaskResult.listTargetDeployment.version -}} - {{- $isUpgradeContinue := or $isVersionBase $isVersionTarget -}} - - {{- if $isUpgradeContinue }} - {{- $message = printf "target deployment: {%s} is in expected version" .TaskResult.listTargetDeployment.deploymentName -}} - {{- $status = .Config.successStatus.value -}} - {{- else }} - {{- $message = printf "target deployment: {%s} is not in expected version expected: {%s} but got {%s}" .TaskResult.listTargetDeployment.deploymentName .Config.baseVersion.value .TaskResult.listTargetDeployment.version -}} - {{- not $isUpgradeContinue | verifyErr $message | saveAs "listTargetDeployment.verifyErr" .TaskResult | noop -}} - {{- $status = .Config.failStatus.value -}} - {{- end }} - - {{- print $isVersionBase | toString | saveAs "listTargetDeployment.shouldPatchCtrlDeployment" .TaskResult -}} - - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-jiva-volume-0.8.2-0.9.0-get-list-ctrl-deployment" -}} - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} ---- -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-jiva-volume-0.8.2-0.9.0-get-list-rep-deployment - namespace: default -spec: - meta: | - id: listReplicaDeployment - apiVersion: extensions/v1beta1 - kind: Deployment - action: list - runNamespace: {{ .UpgradeItem.namespace }} - options: |- - labelSelector: openebs.io/persistent-volume={{ .UpgradeItem.name }},openebs.io/replica=jiva-replica - post: | - {{- jsonpath .JsonResult "{.items[*].metadata.name}" | trim | saveAs "listReplicaDeployment.deploymentName" .TaskResult | noop -}} - {{- .TaskResult.listReplicaDeployment.deploymentName | notFoundErr "replica deployment not found" | saveIf "listReplicaDeployment.notFoundErr" .TaskResult | noop -}} - - {{- jsonpath .JsonResult "{.items[*].metadata.labels.openebs\\.io/version}" | trim | saveAs "listReplicaDeployment.version" .TaskResult | noop -}} - {{- .TaskResult.listTargetDeployment.version | notFoundErr "unknown openebs version" | saveIf "listReplicaDeployment.notFoundErr" .TaskResult | noop -}} - - {{- jsonpath .JsonResult "{.items[*].spec.replicas}" | trim | saveAs "listReplicaDeployment.replicaCount" .TaskResult | noop -}} - {{- .TaskResult.listReplicaDeployment.replicaCount | notFoundErr "replicas not found for jiva replica deployment" | saveIf "listReplicaDeployment.notFoundErr" .TaskResult | noop -}} - - {{- 
$message := "" -}} - {{- $status := "" -}} - - {{- $isVersionBase := eq .Config.baseVersion.value .TaskResult.listReplicaDeployment.version -}} - {{- $isVersionTarget := eq .Config.targetVersion.value .TaskResult.listReplicaDeployment.version -}} - {{- $isUpgradeContinue := or $isVersionBase $isVersionTarget -}} - - {{- if $isUpgradeContinue }} - {{- $message = printf "replica deployment: {%s} is in expected version" .TaskResult.listReplicaDeployment.deploymentName -}} - {{- $status = .Config.successStatus.value -}} - {{- else }} - {{- $message = printf "replica deployment: {%s} is not in expected version expected: {%s} but got {%s}" .TaskResult.listReplicaDeployment.deploymentName .Config.baseVersion.value .TaskResult.listReplicaDeployment.version -}} - {{- not $isUpgradeContinue | verifyErr $message | saveAs "listReplicaDeployment.verifyErr" .TaskResult | noop -}} - {{- $status = .Config.failStatus.value -}} - {{- end }} - - {{- print $isVersionBase | saveAs "listReplicaDeployment.shouldPatchRepDeployment" .TaskResult | noop -}} - - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-jiva-volume-0.8.2-0.9.0-get-list-rep-deployment" -}} - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} ---- -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-jiva-volume-0.8.2-0.9.0-pre-check-ctrl-pod-phase - namespace: default -spec: - meta: | - id: listCtrlPods - runNamespace: {{ .UpgradeItem.namespace }} - apiVersion: v1 - kind: Pod - action: list - options: |- - labelSelector: openebs.io/persistent-volume={{ .UpgradeItem.name }},openebs.io/controller=jiva-controller - post: | - {{- $CustomJsonPath := printf "{.items[?(@.status.phase=='Running')].metadata.name}" -}} - {{- $ErrMsg := printf "No running controller pods found for volume: {%s}" .UpgradeItem.name -}} - - {{- jsonpath .JsonResult $CustomJsonPath | trim | saveAs "listCtrlPods.podName" .TaskResult | noop -}} - {{- .TaskResult.listCtrlPods.podName | notFoundErr $ErrMsg | saveIf "listCtrlPods.notFoundErr" .TaskResult | noop -}} - - {{- .TaskResult.listCtrlPods.podName | default "" | splitList " " | len | saveAs "listCtrlPods.actualRunningPodCount" .TaskResult -}} - - {{- $expectedPodCount := .TaskResult.listTargetDeployment.replicaCount | int -}} - {{- $msg := printf "expected %v no of running replica pod(s), found only %v replica pod(s)" $expectedPodCount .TaskResult.listCtrlPods.actualRunningPodCount -}} - {{- .TaskResult.listCtrlPods.nodeNames | default "" | splitList " " | isLen $expectedPodCount | not | verifyErr $msg | saveIf "listCtrlPods.verifyErr" .TaskResult | noop -}} - - {{- $message := printf "jiva controller pods are in running phase for volume: {%s}" .UpgradeItem.name -}} - {{- $status := .Config.successStatus.value -}} - - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-jiva-volume-0.8.2-0.9.0-pre-check-ctrl-pod-phase" -}} - {{- 
$taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} ---- -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-jiva-volume-0.8.2-0.9.0-pre-check-replica-pod-phase - namespace: default -spec: - meta: | - id: listReplicaPods - runNamespace: {{ .UpgradeItem.namespace }} - apiVersion: v1 - kind: Pod - action: list - options: |- - labelSelector: openebs.io/persistent-volume={{ .UpgradeItem.name }},openebs.io/replica=jiva-replica - post: | - {{- $CustomJsonPath := printf "{.items[?(@.status.phase=='Running')].spec.nodeName}" -}} - {{- $ErrMsg := printf "No running replica pods found for volume: {%s}" .UpgradeItem.name -}} - - {{- jsonpath .JsonResult $CustomJsonPath | trim | saveAs "listReplicaPods.nodeNames" .TaskResult | noop -}} - {{- .TaskResult.listReplicaPods.nodeNames | notFoundErr $ErrMsg | saveIf "listReplicaPods.notFoundErr" .TaskResult | noop -}} - - {{- .TaskResult.listReplicaPods.nodeNames | default "" | splitList " " | len | saveAs "listReplicaPods.actualRunningPodCount" .TaskResult -}} - - {{- $expectedPodCount := .TaskResult.listReplicaDeployment.replicaCount | int -}} - {{- $msg := printf "expected %v no of running replica pod(s), found only %v replica pod(s)" $expectedPodCount .TaskResult.listReplicaPods.actualRunningPodCount -}} - {{- .TaskResult.listReplicaPods.nodeNames | default "" | splitList " " | isLen $expectedPodCount | not | verifyErr $msg | saveIf "listReplicaPods.verifyErr" .TaskResult | noop -}} - - {{- $message := printf "jiva replica pods are in running phase and nodesNames for volume: {%s} are {%v}" .UpgradeItem.name .TaskResult.listReplicaPods.nodeNames -}} - {{- $status := .Config.successStatus.value -}} - - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-jiva-volume-0.8.2-0.9.0-pre-check-replica-pod-phase" -}} - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} ---- -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-jiva-volume-0.8.2-0.9.0-get-list-ctrl-svc - namespace: default -spec: - meta: | - id: listTargetService - apiVersion: v1 - kind: Service - action: list - runNamespace: {{ .UpgradeItem.namespace }} - options: |- - labelSelector: openebs.io/persistent-volume={{ .UpgradeItem.name }},openebs.io/controller-service=jiva-controller-svc - post: | - {{- jsonpath .JsonResult "{.items[*].metadata.name}" | trim | saveAs "listTargetService.items" .TaskResult | noop -}} - {{- .TaskResult.listTargetService.items | notFoundErr "volume target service not found" | saveIf "listTargetService.notFoundErr" .TaskResult | noop -}} - - {{- jsonpath .JsonResult "{.items[*].metadata.labels.openebs\\.io/version}" | trim | saveAs "listTargetService.version" .TaskResult | noop -}} - {{- .TaskResult.listTargetDeployment.version | notFoundErr 
"unknown openebs version" | saveIf "listTargetService.notFoundErr" .TaskResult | noop -}} - - {{- $message := "" -}} - {{- $status := "" -}} - - {{- $isVersionBase := eq .Config.baseVersion.value .TaskResult.listTargetService.version -}} - {{- $isVersionTarget := eq .Config.targetVersion.value .TaskResult.listTargetService.version -}} - {{- $isUpgradeContinue := or $isVersionBase $isVersionTarget -}} - - {{- if $isUpgradeContinue }} - {{- $message = printf "target service: {%s} is in expected version" .TaskResult.listTargetService.items -}} - {{- $status = .Config.successStatus.value -}} - {{- else }} - {{- $message = printf "target service: {%s} is not in expected version expected: {%s} but got {%s}" .TaskResult.listTargetService.items .Config.baseVersion.value .TaskResult.listTargetService.version -}} - {{- not $isUpgradeContinue | verifyErr $message | saveAs "listTargetService.verifyErr" .TaskResult | noop -}} - {{- $status = .Config.failStatus.value -}} - {{- end }} - - {{- print $isVersionBase | saveAs "listTargetService.shouldPatchCtrlSVC" .TaskResult | noop -}} - - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-jiva-volume-0.8.2-0.9.0-get-list-ctrl-svc" -}} - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} ---- -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-jiva-volume-0.8.2-0.9.0-get-list-ctrl-old-rs - namespace: default -spec: - meta: | - id: listTargetOldrs - apiVersion: extensions/v1beta1 - kind: ReplicaSet - action: list - runNamespace: {{ .UpgradeItem.namespace }} - options: |- - labelselector: openebs.io/persistent-volume={{ .UpgradeItem.name }},openebs.io/controller=jiva-controller - post: | - {{- jsonpath .JsonResult "{.items[*].metadata.name}" | trim | saveAs "listTargetOldrs.items" .TaskResult | noop -}} - {{- .TaskResult.listTargetOldrs.items | notFoundErr "target deployment replicasets were not found" | saveIf "listTargetOldrs.notFoundErr" .TaskResult | noop -}} - - {{- $message := printf "replicaset to be deleted after patching target deployment {%s} is : {%s}" .TaskResult.listTargetDeployment.deploymentName .TaskResult.listTargetOldrs.items -}} - {{- $status := .Config.successStatus.value -}} - - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-jiva-volume-0.8.2-0.9.0-get-list-ctrl-old-rs" -}} - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} ---- -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-jiva-volume-0.8.2-0.9.0-get-list-rep-old-rs - namespace: default -spec: - meta: | - id: listOldReplicars - apiVersion: extensions/v1beta1 - 
kind: ReplicaSet - action: list - runNamespace: {{ .UpgradeItem.namespace }} - options: |- - labelselector: openebs.io/persistent-volume={{ .UpgradeItem.name }},openebs.io/replica=jiva-replica - post: | - {{- jsonpath .JsonResult "{.items[*].metadata.name}" | trim | saveAs "listOldReplicars.items" .TaskResult | noop -}} - {{- .TaskResult.listOldReplicars.items | notFoundErr "replica deployment replicasets were not found" | saveIf "listreplicars.notFoundErr" .TaskResult | noop -}} - - {{- $message := printf "replicasets to be deleted after patching replica deployment: {%s} is : {%s}" .TaskResult.listReplicaDeployment.deploymentName .TaskResult.listOldReplicars.items -}} - {{- $status := .Config.successStatus.value -}} - - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-jiva-volume-0.8.2-0.9.0-get-list-rep-old-rs" -}} - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} ---- -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-jiva-volume-0.8.2-0.9.0-patch-ctrl-deployment-latest-version - namespace: default -spec: - meta: | - id: patchCtrlDeploymentLatestVersion - apiVersion: extensions/v1beta1 - kind: Deployment - runNamespace: {{ .UpgradeItem.namespace }} - action: patch - objectName: {{ .TaskResult.listTargetDeployment.deploymentName }} - disable: {{ ne .TaskResult.listTargetDeployment.shouldPatchCtrlDeployment "true" }} - task: |- - type: strategic - pspec: |- - metadata: - annotations: - openebs.io/storage-class-ref: "name: {{ .TaskResult.getVolDetails.scName }}\nresourceVersion: {{ .TaskResult.getSCDetails.scResVersion }}\n" - labels: - openebs.io/version: {{ .Config.targetVersion.value }} - spec: - template: - metadata: - annotations: - openebs.io/storage-class-ref: "name: {{ .TaskResult.getVolDetails.scName }}\nresourceVersion: {{ .TaskResult.getSCDetails.scResVersion }}\n" - labels: - openebs.io/version: {{ .Config.targetVersion.value }} - spec: - containers: - - name: {{ .UpgradeItem.name }}-ctrl-con - image: quay.io/openebs/jiva:{{ .Config.targetVersion.value}} - - name: maya-volume-exporter - image: quay.io/openebs/m-exporter:{{ .Config.targetVersion.value}} - post: | - {{- $message := printf "controller deployment: {%s} patched with latest images version: {%s}" .TaskResult.listTargetDeployment.deploymentName .Config.targetVersion.value -}} - {{- $status := .Config.successStatus.value -}} - - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-jiva-volume-0.8.2-0.9.0-patch-ctrl-deployment-latest-version" -}} - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} - ---- -apiVersion: 
openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-jiva-volume-0.8.2-0.9.0-post-check-ctrl-deployment-status-latest-version - namespace: default -spec: - meta: | - id: postCheckCtrlDeploymentStatusLatestVersion - apiVersion: extensions/v1beta1 - kind: Deployment - action: rolloutstatus - objectName: {{ .TaskResult.listTargetDeployment.deploymentName }} - runNamespace: {{ .UpgradeItem.namespace }} - retry: "20,20s" - post: | - {{- $rolledOut := jsonpath .JsonResult "{.isRolledout}" | trim | saveAs "postCheckCtrlDeploymentStatusLatestVersion.rolledOutStatus" .TaskResult -}} - {{- $rolloutMessage := jsonpath .JsonResult "{.message}" | trim | saveAs "postCheckCtrlDeploymentStatusLatestVersion.msg" .TaskResult -}} - - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-jiva-volume-0.8.2-0.9.0-post-check-ctrl-deployment-status-latest-version" -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - - {{- $status := "" -}} - {{- $message := "" -}} - - {{- if eq $rolledOut "true" }} - {{- $status = .Config.successStatus.value -}} - {{- $message = printf "target deployment: {%s} rollout status: success" .TaskResult.listTargetDeployment.deploymentName -}} - {{- else }} - {{- $status = .Config.failStatus.value -}} - {{- $message = printf "target deployment: {%s} rollout status: failed" .TaskResult.listTargetDeployment.deploymentName -}} - {{- "waiting for target deployment rollout" | saveAs "postCheckCtrlDeploymentStatusLatestVersion.verifyErr" .TaskResult | noop -}} - {{- end }} - - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} ---- -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-jiva-volume-0.8.2-0.9.0-post-check-ctrl-deployment-image - namespace: default -spec: - meta: | - id: postCheckCtrlDeploymentImageLatestVersion - apiVersion: extensions/v1beta1 - kind: Deployment - action: get - objectName: {{ .TaskResult.listTargetDeployment.deploymentName }} - runNamespace: {{ .UpgradeItem.namespace }} - post: | - {{- $passed := "true" -}} - - {{- $CustomJsonPath := printf "{.spec.template.spec.containers[?(@.name=='%s-ctrl-con')].image}" .UpgradeItem.name -}} - {{- $jivaCtrlImage := jsonpath .JsonResult $CustomJsonPath | trim -}} - {{- $mayaVolExporterImage := jsonpath .JsonResult "{.spec.template.spec.containers[?(@.name=='maya-volume-exporter')].image}" | trim -}} - - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-jiva-volume-0.8.2-0.9.0-post-check-ctrl-deployment-image" -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- $status := "" -}} - {{- $message := "" -}} - - {{- if contains .Config.targetVersion.value $jivaCtrlImage }} - {{- else }} - {{- $passed = "false" -}} - {{- "failed to patch jiva-controller image for controller deployment" | saveAs "postCheckctrlDeploymentImagePatch.verifyErr" .TaskResult | noop -}} - {{- end }} - {{- if contains 
.Config.targetVersion.value $mayaVolExporterImage }} - {{- else }} - {{- $passed = "false" -}} - {{- "failed to patch maya-volume exporter image for controller deployment" | saveAs "postCheckctrlDeploymentImagePatch.verifyErr" .TaskResult | noop -}} - {{- end }} - - {{- if eq $passed "true" -}} - {{- $status = .Config.successStatus.value -}} - {{- $message = printf "patched controller deployment: {%s} with latest images version: {%s}" .TaskResult.listTargetDeployment.deploymentName .Config.targetVersion.value -}} - {{- else -}} - {{- $status = .Config.failStatus.value -}} - {{- $message = printf "failed to patch controller deployment: {%s} with latest images version: {%s}" .TaskResult.listTargetDeployment.deploymentName .Config.targetVersion.value -}} - {{- end }} - - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} ---- -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-jiva-volume-0.8.2-0.9.0-patch-rep-deployment-latest-version - namespace: default -spec: - meta: | - id: patchReplicaDeploymentLatestVersion - apiVersion: extensions/v1beta1 - kind: Deployment - runNamespace: {{ .UpgradeItem.namespace }} - action: patch - objectName: {{ .TaskResult.listReplicaDeployment.deploymentName }} - disable: {{ ne .TaskResult.listReplicaDeployment.shouldPatchRepDeployment "true" }} - task: |- - {{- $nodeNames := .TaskResult.listReplicaPods.nodeNames -}} - type: strategic - pspec: |- - metadata: - annotations: - openebs.io/storage-class-ref: "name: {{ .TaskResult.getVolDetails.scName }}\nresourceVersion: {{ .TaskResult.getSCDetails.scResVersion }}\n" - labels: - openebs.io/version: {{ .Config.targetVersion.value }} - spec: - template: - metadata: - annotations: - openebs.io/storage-class-ref: "name: {{ .TaskResult.getVolDetails.scName }}\nresourceVersion: {{ .TaskResult.getSCDetails.scResVersion }}\n" - labels: - openebs.io/version: {{ .Config.targetVersion.value }} - spec: - affinity: - nodeAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - nodeSelectorTerms: - - matchExpressions: - - key: kubernetes.io/hostname - operator: In - values: - {{- if ne $nodeNames "" }} - {{- $nodeNamesMap := $nodeNames | split " " }} - {{- range $k, $v := $nodeNamesMap }} - - {{ $v }} - {{- end }} - {{- end }} - containers: - - name: {{ .UpgradeItem.name }}-rep-con - image: quay.io/openebs/jiva:{{ .Config.targetVersion.value}} - post: | - {{- $message := printf "replica deployment: {%s} patched with latest images version: {%s}" .TaskResult.listReplicaDeployment.deploymentName .Config.targetVersion.value -}} - {{- $status := .Config.successStatus.value -}} - - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-jiva-volume-0.8.2-0.9.0-patch-rep-deployment-latest-version" -}} - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} ---- -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: 
upgrade-jiva-volume-0.8.2-0.9.0-post-check-rep-deployment-status-latest-version - namespace: default -spec: - meta: | - id: postCheckReplicaDeploymentStatusLatestVersion - apiVersion: extensions/v1beta1 - kind: Deployment - action: rolloutstatus - objectName: {{ .TaskResult.listReplicaDeployment.deploymentName }} - runNamespace: {{ .UpgradeItem.namespace }} - retry: "20,20s" - post: | - {{- $rolledOut := jsonpath .JsonResult "{.isRolledout}" | trim | saveAs "postCheckReplicaDeploymentStatusLatestVersion.rolledOutStatus" .TaskResult -}} - {{- $rolloutMessage := jsonpath .JsonResult "{.message}" | trim | saveAs "postCheckReplicaDeploymentStatusLatestVersion.msg" .TaskResult -}} - - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-jiva-volume-0.8.2-0.9.0-post-check-rep-deployment-status-latest-version" -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- $status := "" -}} - {{- $message := "" -}} - - {{- if eq $rolledOut "true" }} - {{- $status = .Config.successStatus.value -}} - {{- $message = printf "replica deployment: {%s} rollout status: success" .TaskResult.listReplicaDeployment.deploymentName -}} - {{- else }} - {{- $status = .Config.failStatus.value -}} - {{- $message = printf "replica deployment: {%s} rollout status: failed" .TaskResult.listReplicaDeployment.deploymentName -}} - {{- "waiting for replica deployment rollout" | saveAs "postCheckReplicaDeploymentStatusLatestVersion.verifyErr" .TaskResult | noop -}} - {{- end }} - - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} ---- -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-jiva-volume-0.8.2-0.9.0-post-check-rep-deployment-image - namespace: default -spec: - meta: | - id: postCheckReplicaDeploymentImageLatestVersion - apiVersion: extensions/v1beta1 - kind: Deployment - action: get - objectName: {{ .TaskResult.listReplicaDeployment.deploymentName }} - runNamespace: {{ .UpgradeItem.namespace }} - post: | - {{- $passed := "true" -}} - - {{- $CustomJsonPath := printf "{.spec.template.spec.containers[?(@.name=='%s-rep-con')].image}" .UpgradeItem.name -}} - {{- $jivaRepImage := jsonpath .JsonResult $CustomJsonPath | trim -}} - - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-jiva-volume-0.8.2-0.9.0-post-check-rep-deployment-image" -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- $status := "" -}} - {{- $message := "" -}} - - {{- if contains .Config.targetVersion.value $jivaRepImage }} - {{- else }} - {{- $passed = "false" -}} - {{- "failed to patch jiva-replica image" | saveAs "postCheckRepDeploymentImagePatch.verifyErr" .TaskResult | noop -}} - {{- end }} - - {{- if eq $passed "true" -}} - {{- $status = .Config.successStatus.value -}} - {{- $message = printf "patched replica deployment: {%s} with latest images version: {%s}" .TaskResult.listReplicaDeployment.deploymentName 
.Config.targetVersion.value -}}
-    {{- else -}}
-    {{- $status = .Config.failStatus.value -}}
-    {{- $message = printf "failed to patch replica deployment: {%s} with latest images version: {%s}" .TaskResult.listReplicaDeployment.deploymentName .Config.targetVersion.value -}}
-    {{- end }}
-
-    {{- $taskMessage := upgradeResultWithTaskMessage $message -}}
-    {{- $taskStatus := upgradeResultWithTaskStatus $status -}}
-    {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}}
----
-apiVersion: openebs.io/v1alpha1
-kind: RunTask
-metadata:
-  name: upgrade-jiva-volume-0.8.2-0.9.0-patch-ctrl-svc
-  namespace: default
-spec:
-  meta: |
-    id: patchCtrlSVC
-    apiVersion: v1
-    kind: Service
-    action: patch
-    objectName: {{ .TaskResult.listTargetService.items }}
-    runNamespace: {{ .UpgradeItem.namespace }}
-    disable: {{ ne .TaskResult.listTargetService.shouldPatchCtrlSVC "true" }}
-  task: |-
-    type: merge
-    pspec: |-
-      metadata:
-        annotations:
-          openebs.io/storage-class-ref: "name: {{ .TaskResult.getVolDetails.scName }}\nresourceVersion: {{ .TaskResult.getSCDetails.scResVersion }}\n"
-        labels:
-          openebs.io/version: {{ .Config.targetVersion.value }}
-  post: |
-    {{- $message := printf "patched controller service: {%s} with required labels and annotations" .TaskResult.listTargetService.items -}}
-    {{- $status := .Config.successStatus.value -}}
-
-    {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}}
-    {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}}
-    {{- $taskName := upgradeResultWithTaskName "upgrade-jiva-volume-0.8.2-0.9.0-patch-ctrl-svc" -}}
-    {{- $taskStatus := upgradeResultWithTaskStatus $status -}}
-    {{- $taskMessage := upgradeResultWithTaskMessage $message -}}
-    {{- $taskStartTime := upgradeResultWithTaskStartTime now -}}
-    {{- $taskEndTime := upgradeResultWithTaskEndTime now -}}
-    {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}}
----
-apiVersion: openebs.io/v1alpha1
-kind: RunTask
-metadata:
-  name: upgrade-jiva-volume-0.8.2-0.9.0-post-check-ctrl-svc
-  namespace: default
-spec:
-  meta: |
-    id: postCheckCtrlSVC
-    apiVersion: v1
-    kind: Service
-    action: get
-    objectName: {{ .TaskResult.listTargetService.items }}
-    runNamespace: {{ .UpgradeItem.namespace }}
-  post: |
-    {{- jsonpath .JsonResult "{.metadata.labels.openebs\\.io/version}" | trim | saveAs "postCheckCtrlSVC.version" .TaskResult | noop -}}
-
-    {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}}
-    {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}}
-    {{- $taskName := upgradeResultWithTaskName "upgrade-jiva-volume-0.8.2-0.9.0-post-check-ctrl-svc" -}}
-    {{- $taskStartTime := upgradeResultWithTaskStartTime now -}}
-    {{- $taskEndTime := upgradeResultWithTaskEndTime now -}}
-    {{- $status := "" -}}
-    {{- $message := "" -}}
-
-    {{- if eq .Config.targetVersion.value .TaskResult.postCheckCtrlSVC.version -}}
-    {{- $status = .Config.successStatus.value -}}
-    {{- $message = printf "patched version label on controller service: {%s}" .TaskResult.listTargetService.items -}}
-    {{- else -}}
-    {{- $status = .Config.failStatus.value -}}
-    {{- $message = printf "failed to patch version label on controller service: {%s}" .TaskResult.listTargetService.items -}}
-    {{- "labels not patched on jiva controller service" | saveAs "postCheckCtrlSVC.verifyErr" .TaskResult | noop -}}
-    {{- end }}
-
-    {{- $taskStatus :=
-    {{- $taskStatus := upgradeResultWithTaskStatus $status -}}
-    {{- $taskMessage := upgradeResultWithTaskMessage $message -}}
-    {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}}
----
-apiVersion: openebs.io/v1alpha1
-kind: RunTask
-metadata:
-  name: upgrade-jiva-volume-0.8.2-0.9.0-delete-old-ctrl-rs
-  namespace: default
-spec:
-  meta: |
-    {{- $rslist := .TaskResult.listTargetOldrs.items | default "" | splitList " " -}}
-    id: deleteCtrlReplicaSet
-    apiVersion: extensions/v1beta1
-    kind: ReplicaSet
-    runNamespace: {{ .UpgradeItem.namespace }}
-    action: delete
-    disable: {{ ne .TaskResult.listTargetDeployment.shouldPatchCtrlDeployment "true" }}
-    repeatWith:
-      metas:
-      {{- range $k, $rs := $rslist }}
-      - objectName: {{ $rs }}
-      {{- end }}
-  post: |
-    {{- $message := printf "deleted older controller replicasets: {%s}" .TaskResult.listTargetOldrs.items -}}
-    {{- $status := .Config.successStatus.value -}}
-
-    {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}}
-    {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}}
-    {{- $taskName := upgradeResultWithTaskName "upgrade-jiva-volume-0.8.2-0.9.0-delete-old-ctrl-rs" -}}
-    {{- $taskStatus := upgradeResultWithTaskStatus $status -}}
-    {{- $taskMessage := upgradeResultWithTaskMessage $message -}}
-    {{- $taskStartTime := upgradeResultWithTaskStartTime now -}}
-    {{- $taskEndTime := upgradeResultWithTaskEndTime now -}}
-    {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}}
----
-apiVersion: openebs.io/v1alpha1
-kind: RunTask
-metadata:
-  name: upgrade-jiva-volume-0.8.2-0.9.0-delete-old-rep-rs
-  namespace: default
-spec:
-  meta: |
-    {{- $rslist := .TaskResult.listOldReplicars.items | default "" | splitList " " -}}
-    id: deleteReplicaReplicaSet
-    apiVersion: extensions/v1beta1
-    kind: ReplicaSet
-    runNamespace: {{ .UpgradeItem.namespace }}
-    action: delete
-    disable: {{ ne .TaskResult.listReplicaDeployment.shouldPatchRepDeployment "true" }}
-    repeatWith:
-      metas:
-      {{- range $k, $rs := $rslist }}
-      - objectName: {{ $rs }}
-      {{- end }}
-  post: |
-    {{ $retries := 0 -}}
-    {{- $message := printf "deleted older replica replicasets: {%s}" .TaskResult.listOldReplicars.items -}}
-    {{- $status := .Config.successStatus.value -}}
-
-    {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}}
-    {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}}
-    {{- $taskName := upgradeResultWithTaskName "upgrade-jiva-volume-0.8.2-0.9.0-delete-old-rep-rs" -}}
-    {{- $taskStatus := upgradeResultWithTaskStatus $status -}}
-    {{- $taskMessage := upgradeResultWithTaskMessage $message -}}
-    {{- $taskStartTime := upgradeResultWithTaskStartTime now -}}
-    {{- $taskEndTime := upgradeResultWithTaskEndTime now -}}
-    {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}}
----
-apiVersion: openebs.io/v1alpha1
-kind: RunTask
-metadata:
-  name: upgrade-jiva-volume-0.8.2-0.9.0-list-volumesnapshot
-  namespace: default
-spec:
-  meta: |
-    id: listVolumeSnapshotDetails
-    apiVersion: v1
-    kind: VolumeSnapshot
-    action: list
-    options: |-
-      labelSelector: SnapshotMetadata-PVName={{ .UpgradeItem.name }}
-  post: |
-    {{- jsonpath .JsonResult "{.items[*].spec.snapshotDataName}" | trim | saveAs "listVolumeSnapshotDetails.snapshotDataNames" .TaskResult | noop -}}
-
-    {{-
.TaskResult.listVolumeSnapshotDetails.snapshotDataNames | toString | saveAs "listVolumeSnapshotDetails.volumeSnapshotData" .TaskResult | noop -}} - {{- if eq .TaskResult.listVolumeSnapshotDetails.volumeSnapshotData "" }} - {{- printf "false" | saveAs "listVolumeSnapshotDetails.isExist" .TaskResult }} - {{- else }} - {{- printf "true" | saveAs "listVolumeSnapshotDetails.isExist" .TaskResult }} - {{- end }} - - {{- $message := printf "details of volumesnapshotdata {%s} for volume: {%s}" .TaskResult.listVolumeSnapshotDetails.snapshotDataNames .UpgradeItem.name -}} - {{- $status :=.Config.successStatus.value -}} - - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-jiva-volume-0.8.2-0.9.0-list-volumesnapshot" -}} - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} ---- -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-jiva-volume-0.8.2-0.9.0-patch-volumesnapshotdatadata - namespace: default -spec: - meta: | - {{- $snapshotDataList := .TaskResult.listVolumeSnapshotDetails.snapshotDataNames | default "" | splitList " " -}} - id: patchSnapData - apiVersion: v1 - kind: VolumeSnapshotData - action: patch - disable: {{ eq .TaskResult.listVolumeSnapshotDetails.isExist "false" }} - repeatWith: - metas: - {{- range $k, $snapData := $snapshotDataList }} - - objectName: {{ $snapData }} - {{- end }} - task: |- - type: merge - pspec: |- - spec: - openebsVolume: - capacity: {{ .TaskResult.getVolDetails.pvCapacity }} - post: | - {{- $message := printf "volume snapshotdatas are patched with capacity: {%s}" .TaskResult.getVolDetails.pvCapacity -}} - {{- $status := .Config.successStatus.value -}} - - {{- $URName := upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-jiva-volume-0.8.2-0.9.0-patch-volumesnapshotdatadata" -}} - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} ---- -apiVersion: openebs.io/v1alpha1 -kind: RunTask -metadata: - name: upgrade-jiva-volume-0.8.2-0.9.0-post-check-volumesnapshotdata - namespace: default -spec: - meta: | - {{- $snapshotDataList := .TaskResult.listVolumeSnapshotDetails.snapshotDataNames | default "" | splitList " " -}} - id: getVolumeSnapshotDataDetails - apiVersion: v1 - kind: VolumeSnapshotData - action: get - disable: {{ eq .TaskResult.listVolumeSnapshotDetails.isExist "false" }} - repeatWith: - metas: - {{- range $k, $snapData := $snapshotDataList }} - - objectName: {{ $snapData }} - {{- end }} - post: | - {{- jsonpath .JsonResult "{.spec.openebsVolume.capacity}" | trim | saveAs "getVolumeSnapshotDataDetails.capacity" .TaskResult | noop -}} - - {{- $URName := 
upgradeResultWithTaskOwnerName .UpgradeItem.upgradeResultName -}} - {{- $URNamespace := upgradeResultWithTaskOwnerNamespace .UpgradeItem.upgradeResultNamespace -}} - {{- $taskName := upgradeResultWithTaskName "upgrade-jiva-volume-0.8.2-0.9.0-post-check-volumesnapshotdata" -}} - {{- $taskStartTime := upgradeResultWithTaskStartTime now -}} - {{- $taskEndTime := upgradeResultWithTaskEndTime now -}} - - {{- $status := "" -}} - {{- $message := "" -}} - - {{- if eq .TaskResult.getVolumeSnapshotDataDetails.capacity .TaskResult.getVolDetails.pvCapacity }} - {{- $status = .Config.successStatus.value -}} - {{- $message = printf "patched volume snapshot data successfully" -}} - {{- else }} - {{- $status = .Config.failStatus.value -}} - {{- $message = printf "failed to patch volume snapshot data" -}} - {{- end }} - - {{- $taskStatus := upgradeResultWithTaskStatus $status -}} - {{- $taskMessage := upgradeResultWithTaskMessage $message -}} - {{- upgradeResultUpdateTasks $taskStartTime $URName $URNamespace $taskName $taskStatus $taskMessage $taskEndTime -}} diff --git a/k8s/upgrades/0.8.2-0.9.0/jiva/volume-upgrade-job.yaml b/k8s/upgrades/0.8.2-0.9.0/jiva/volume-upgrade-job.yaml deleted file mode 100644 index c8ee60043a..0000000000 --- a/k8s/upgrades/0.8.2-0.9.0/jiva/volume-upgrade-job.yaml +++ /dev/null @@ -1,50 +0,0 @@ -apiVersion: v1 -kind: ConfigMap -metadata: - name: jiva-upgrade-config - namespace: default -data: - upgrade: | - casTemplate: jiva-volume-update-0.8.2-0.9.0 - # Enter the jiva volume name(pv name) and namespace to upgrade to 0.9.0 - - # Command to get volume name: kubectl get pv and update the name and - # namespace values - resources: - - name: pvc-3d290e5f-7ada-11e9-b8a5-54e1ad4a9dd4 - kind: jiva-volume - namespace: default ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: jiva-volume-upgrade -spec: - template: - spec: - serviceAccountName: super-admin - containers: - - name: upgrade - image: openebs/m-upgrade:0.9.0 - volumeMounts: - - name: config - mountPath: /etc/config - readOnly: true - env: - - name: OPENEBS_IO_POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: OPENEBS_IO_POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - - name: OPENEBS_IO_POD_UID - valueFrom: - fieldRef: - fieldPath: metadata.uid - volumes: - - name: config - configMap: - name: jiva-upgrade-config - restartPolicy: Never diff --git a/k8s/upgrades/0.8.2-0.9.0/rbac.yaml b/k8s/upgrades/0.8.2-0.9.0/rbac.yaml deleted file mode 100644 index ccc2703b26..0000000000 --- a/k8s/upgrades/0.8.2-0.9.0/rbac.yaml +++ /dev/null @@ -1,35 +0,0 @@ -# ServiceAccount is used to give permissions to m-upgrade -# job for accessing various resources. -apiVersion: v1 -kind: ServiceAccount -metadata: - name: super-admin - namespace: default ---- -# ClusterRole create rules to access various apis, resources -# and various operations on the resources. -kind: ClusterRole -apiVersion: rbac.authorization.k8s.io/v1beta1 -metadata: - name: super-admin -rules: -- apiGroups: ["*"] - resources: ["*"] - verbs: ["*"] ---- -# ClusterRoleBinding binds the role defined to users, apps. 
-kind: ClusterRoleBinding
-apiVersion: rbac.authorization.k8s.io/v1beta1
-metadata:
-  name: super-admin
-subjects:
-- kind: ServiceAccount
-  name: super-admin
-  namespace: default
-- kind: User
-  name: system:serviceaccount:default:default
-  apiGroup: rbac.authorization.k8s.io
-roleRef:
-  kind: ClusterRole
-  name: super-admin
-  apiGroup: rbac.authorization.k8s.io
diff --git a/k8s/upgrades/0.9.0-1.0.0/README.md b/k8s/upgrades/0.9.0-1.0.0/README.md
deleted file mode 100644
index a42a5807ee..0000000000
--- a/k8s/upgrades/0.9.0-1.0.0/README.md
+++ /dev/null
@@ -1,176 +0,0 @@
-# UPGRADE FROM OPENEBS 0.9.0 TO 1.0.0
-
-## Overview
-
-This document describes the steps for upgrading OpenEBS from 0.9.0 to 1.0.0.
-
-The upgrade of OpenEBS is a three step process:
-- *Step 1* - Prerequisites
-- *Step 2* - Upgrade the OpenEBS Operator
-- *Step 3* - Upgrade the OpenEBS Pools and Volumes from the previous version (0.9.0)
-
-#### Note: It is mandatory to make sure that all OpenEBS control plane components and volumes are running with version 0.9.0 before the upgrade.
-
-### Terminology
-- *OpenEBS Operator: Refers to maya-apiserver & openebs-provisioner along with their respective services, service accounts, roles, and rolebindings*
-- *OpenEBS Volume: Storage engine pods such as the cStor or Jiva controller (aka target) & replica pods*
-
-### Download the upgrade scripts
-
-The easiest way to get all the upgrade scripts is via git clone.
-
-```sh
-$ mkdir upgrade-openebs
-$ cd upgrade-openebs
-$ git clone https://github.com/openebs/openebs.git
-$ cd openebs/k8s/upgrades/0.9.0-1.0.0/
-```
-
-## Step 1: Prerequisites
-
-*All steps described in this document need to be performed on the Kubernetes master or from a machine that has access to the Kubernetes master.*
-- If OpenEBS has been deployed using the openebs helm charts, it has to be at chart version `0.9.2`. Run `helm list` to verify the chart version.
-  If not, update the openebs chart version using the commands below.
-
-  - First, delete the `admission-server` secret; it will be deployed again once the charts are upgraded to version `0.9.2`:
-    ```sh
-    $ kubectl delete secret admission-server-certs -n openebs
-    ```
-
-  - Upgrade the OpenEBS chart version to 0.9.2:
-    ```sh
-    $ helm repo update
-
-    $ helm upgrade <release-name> stable/openebs --version 0.9.2
-    ```
-
-  - Run `helm list` to verify the deployed OpenEBS chart version:
-    ```sh
-    $ helm list
-    NAME     REVISION  UPDATED                   STATUS    CHART          APP VERSION  NAMESPACE
-    openebs  3         Mon Jun 24 20:57:05 2019  DEPLOYED  openebs-0.9.2  0.9.0        openebs
-    ```
-
-  - Before proceeding with the steps below, please make sure the daemonset `DESIRED` count is equal to the `CURRENT` count.
-    ```sh
-    $ kubectl get ds openebs-ndm -n <openebs-namespace>
-    NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
-    openebs-ndm   3         3         3       3            3           <none>          7m6s
-    ```
-    Sometimes, the `DESIRED` count may not be equal to the `CURRENT` count. This can happen in the following cases:
-    - A NodeSelector has been used to deploy openebs related pods.
-    - The master or some other node has been tainted in the k8s cluster.
-
-  - Run the command below to update the OpenEBS control plane component labels.
-    ```sh
-    $ ./pre-upgrade.sh <openebs-namespace> <mode>
-    ```
-    `<openebs-namespace>` is the namespace where the OpenEBS control plane components are installed.
-    `<mode>` is `helm` if OpenEBS is installed via helm, or `operator` if OpenEBS is installed via the operator YAML.
-
-Note:
-  - No new spc should be created after this step until the upgrade is complete. If one is created, execute `pre-upgrade.sh` again.
-  - It is mandatory to make sure that all OpenEBS control plane components are running at version 0.9.0 before the upgrade.
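Both notes hinge on the control plane actually reporting 0.9.0, so a quick spot-check before moving on can save a failed run. The snippet below is an illustrative sketch, not part of the upgrade scripts: it assumes the control plane runs in the `openebs` namespace, and it reads the same `openebs.io/version` label that the scripts in this directory validate.

```sh
# List OpenEBS control plane pods together with the openebs.io/version
# label that the upgrade tooling checks against.
kubectl get pods -n openebs \
  -o custom-columns='NAME:.metadata.name,VERSION:.metadata.labels.openebs\.io/version'
```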
-
-## Step 2: Upgrade the OpenEBS Operator
-
-### Upgrading OpenEBS Operator CRDs and Deployments:
-
-Upgrade steps vary depending on the way OpenEBS was installed. Below are the possible ways:
-
-#### Upgrade using kubectl (using openebs-operator.yaml):
-
-**Use this mode of upgrade only if OpenEBS was installed using openebs-operator.yaml.**
-
-**The sample steps below will work if you have installed OpenEBS without modifying the default values in openebs-operator.yaml. If you have customized it for your cluster, you will have to download the 1.0.0 openebs-operator.yaml and customize it again.**
-
-```
-#Upgrade to 1.0.0 OpenEBS Operator
-$ kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.0.0.yaml
-```
-
-#### Upgrade using helm chart (using stable/openebs, openebs-charts repo, etc.):
-
-**The sample steps below will work if you have installed openebs with the default values provided by the stable/openebs helm chart.**
-
-Before upgrading using helm, please review the default values available with the latest stable/openebs chart (https://raw.githubusercontent.com/helm/charts/master/stable/openebs/values.yaml).
-
-- If the default values seem appropriate, you can use the commands below to update OpenEBS. [More](https://hub.helm.sh/charts/stable/openebs) details about the specific chart version.
-  ```sh
-  $ helm upgrade --reset-values <release-name> stable/openebs --version 1.0.0
-  ```
-- If not, copy the default values into your own file (say custom-values.yaml) and edit them to suit your environment. You can then upgrade using your custom values:
-  ```sh
-  $ helm upgrade <release-name> stable/openebs --version 1.0.0 -f custom-values.yaml
-  ```
-
-#### Using a customized operator YAML or helm chart
-As a first step, you must update your custom helm chart or YAML with the 1.0.0 release tags and the changes made in the values/templates.
-
-You can use the following as a reference to know about the changes in 1.0.0:
-- openebs-charts [PR #2352](https://github.com/openebs/openebs/pull/2352)
-
-After updating the YAML, helm chart, or helm chart values, you can use the above procedures to upgrade the OpenEBS Operator.
-
-## Step 3: Upgrade the OpenEBS Pools and Volumes
-
-Even after the OpenEBS Operator has been upgraded to 1.0.0, the Storage Pools and Volumes (both jiva and cStor) will continue to work with the older version. Use the following steps in the same order to upgrade Pools and Volumes.
-
-*Note: Upgrade functionality is still under active development. It is highly recommended to schedule a downtime for the application using the OpenEBS PV while performing this upgrade. Also, make sure you have taken a backup of the data before starting the upgrade procedure below.*
-
-Limitations:
-- this is a preliminary script intended only for use on volumes where the data has been backed up.
-- please have the following link handy in case the volume becomes read-only during the upgrade - https://docs.openebs.io/docs/next/troubleshooting.html#recovery-readonly-when-kubelet-is-container
-- an automatic rollback option is not provided. To roll back, you need to update the controller, exporter and replica pod images to the previous version.
-- while running the steps below, if you run into issues, you can always reach us on slack.
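Since rollback is manual, it helps to know up front what it amounts to. The sketch below is illustrative only, for a jiva volume: the `<...>` placeholders and the `-ctrl`/`-rep` deployment name suffixes are assumptions here (derive the real deployment and container names with `kubectl get deploy -n <namespace> -l openebs.io/persistent-volume=<pv-name>`, the same way the upgrade script does); the image repositories are the ones used by the jiva patch templates in this upgrade directory.

```sh
# Illustrative manual rollback: point the target and replica containers of a
# jiva volume back at the 0.9.0 images.
kubectl set image -n <namespace> deployment/<pv-name>-ctrl \
  <ctrl-container>=quay.io/openebs/jiva:0.9.0 \
  maya-volume-exporter=quay.io/openebs/m-exporter:0.9.0
kubectl set image -n <namespace> deployment/<pv-name>-rep \
  <rep-container>=quay.io/openebs/jiva:0.9.0
```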
-
-
-### Upgrade the Jiva based OpenEBS PV
-
-Extract the PV name using `kubectl get pv`
-
-```
-NAME                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS      REASON   AGE
-pvc-48fb36a2-947f-11e8-b1f3-42010a800004    5G         RWO            Delete           Bound    percona-test/demo-vol1-claim   openebs-percona            8m
-```
-
-```
-$ cd jiva
-$ ./jiva_volume_upgrade.sh pvc-48fb36a2-947f-11e8-b1f3-42010a800004
-```
-
-### Upgrade cStor Pools
-
-Extract the SPC name using `kubectl get spc`
-
-```sh
-NAME                AGE
-cstor-sparse-pool   24m
-```
-
-```sh
-$ cd cstor
-$ ./cstor_pool_upgrade.sh cstor-sparse-pool <openebs-namespace>
-```
-`<openebs-namespace>` is the namespace where the OpenEBS control plane components are installed.
-
-Make sure that this step completes successfully before proceeding to the next step.
-
-
-### Upgrade cStor Volumes
-
-Extract the PV name using `kubectl get pv`
-
-```sh
-NAME                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                   STORAGECLASS           REASON   AGE
-pvc-1085415d-f84c-11e8-aadf-42010a8000bb    5G         RWO            Delete           Bound    default/demo-cstor-sparse-vol1-claim    openebs-cstor-sparse            22m
-```
-
-```sh
-$ cd cstor
-$ ./cstor_volume_upgrade.sh pvc-1085415d-f84c-11e8-aadf-42010a8000bb <openebs-namespace>
-```
-`<openebs-namespace>` is the namespace where the OpenEBS control plane components are installed.
diff --git a/k8s/upgrades/0.9.0-1.0.0/bdc-create.tpl.json b/k8s/upgrades/0.9.0-1.0.0/bdc-create.tpl.json
deleted file mode 100644
index 00402cb9d9..0000000000
--- a/k8s/upgrades/0.9.0-1.0.0/bdc-create.tpl.json
+++ /dev/null
@@ -1,26 +0,0 @@
-{
-    "apiVersion": "openebs.io/v1alpha1",
-    "kind": "BlockDeviceClaim",
-    "metadata": {
-        "labels": {
-            "openebs.io/storage-pool-claim": "@spc_name@"
-        },
-        "name": "@bdc_name@",
-        "namespace": "@bdc_namespace@",
-        "ownerReferences": [
-            {
-                "apiVersion": "openebs.io/v1alpha1",
-                "blockOwnerDeletion": true,
-                "controller": true,
-                "kind": "StoragePoolClaim",
-                "name": "@spc_name@",
-                "uid": "@spc_uid@"
-            }
-        ]
-    },
-    "spec": {
-        "blockDeviceName": "@bd_name@",
-        "deviceType": "",
-        "hostName": "@node_name@"
-    }
-}
diff --git a/k8s/upgrades/0.9.0-1.0.0/blockdeviceclaim_crd.yaml b/k8s/upgrades/0.9.0-1.0.0/blockdeviceclaim_crd.yaml
deleted file mode 100644
index f736a528d1..0000000000
--- a/k8s/upgrades/0.9.0-1.0.0/blockdeviceclaim_crd.yaml
+++ /dev/null
@@ -1,36 +0,0 @@
----
-apiVersion: apiextensions.k8s.io/v1beta1
-kind: CustomResourceDefinition
-metadata:
-  # name must match the spec fields below, and be in the form: <plural>.<group>
-  name: blockdeviceclaims.openebs.io
-spec:
-  # group name to use for REST API: /apis/<group>/<version>
-  group: openebs.io
-  # version name to use for REST API: /apis/<group>/<version>
-  version: v1alpha1
-  # either Namespaced or Cluster
-  scope: Namespaced
-  names:
-    # plural name to be used in the URL: /apis/<group>/<version>/<plural>
-    plural: blockdeviceclaims
-    # singular name to be used as an alias on the CLI and for display
-    singular: blockdeviceclaim
-    # kind is normally the CamelCased singular type. Your resource manifests use this.
- kind: BlockDeviceClaim - # shortNames allow shorter string to match your resource on the CLI - shortNames: - - bdc - additionalPrinterColumns: - - JSONPath: .spec.blockDeviceName - name: BlockDeviceName - description: Identifies the block device associated with the claim - type: string - - JSONPath: .status.phase - name: Phase - description: Identifies the phase of block device claim - type: string - - JSONPath: .metadata.creationTimestamp - name: Age - type: date ---- diff --git a/k8s/upgrades/0.9.0-1.0.0/cstor/csp-patch.tpl.json b/k8s/upgrades/0.9.0-1.0.0/cstor/csp-patch.tpl.json deleted file mode 100644 index fc08b72ea7..0000000000 --- a/k8s/upgrades/0.9.0-1.0.0/cstor/csp-patch.tpl.json +++ /dev/null @@ -1,18 +0,0 @@ -{ - "metadata": { - "labels": { - "openebs.io/version": "@pool_version@", - "openebs.io/cas-type": "cstor" - } - }, - "spec": { - "disks": null, - "group": [ - { - "blockDevice": [ - @blockdevice_list@ - ] - } - ] - } -} diff --git a/k8s/upgrades/0.9.0-1.0.0/cstor/cstor-pool-patch.tpl.json b/k8s/upgrades/0.9.0-1.0.0/cstor/cstor-pool-patch.tpl.json deleted file mode 100644 index 0ddcbf6306..0000000000 --- a/k8s/upgrades/0.9.0-1.0.0/cstor/cstor-pool-patch.tpl.json +++ /dev/null @@ -1,32 +0,0 @@ -{ - "metadata": { - "labels": { - "openebs.io/version": "@pool_version@" - } - }, - "spec": { - "template": { - "metadata": { - "labels": { - "openebs.io/version": "@pool_version@" - } - }, - "spec": { - "containers": [ - { - "name": "cstor-pool", - "image": "quay.io/openebs/cstor-pool:@pool_version@" - }, - { - "name": "cstor-pool-mgmt", - "image": "quay.io/openebs/cstor-pool-mgmt:@pool_version@" - }, - { - "name": "maya-exporter", - "image": "quay.io/openebs/m-exporter:@pool_version@" - } - ] - } - } - } -} diff --git a/k8s/upgrades/0.9.0-1.0.0/cstor/cstor-target-patch.tpl.json b/k8s/upgrades/0.9.0-1.0.0/cstor/cstor-target-patch.tpl.json deleted file mode 100644 index 4dc3d88fa8..0000000000 --- a/k8s/upgrades/0.9.0-1.0.0/cstor/cstor-target-patch.tpl.json +++ /dev/null @@ -1,32 +0,0 @@ -{ - "metadata": { - "labels": { - "openebs.io/version": "@target_version@" - } - }, - "spec": { - "template": { - "metadata": { - "labels": { - "openebs.io/version": "@target_version@" - } - }, - "spec": { - "containers": [ - { - "name": "cstor-istgt", - "image": "quay.io/openebs/cstor-istgt:@target_version@" - }, - { - "name": "maya-volume-exporter", - "image": "quay.io/openebs/m-exporter:@target_version@" - }, - { - "name": "cstor-volume-mgmt", - "image": "quay.io/openebs/cstor-volume-mgmt:@target_version@" - } - ] - } - } - } -} diff --git a/k8s/upgrades/0.9.0-1.0.0/cstor/cstor-target-svc-patch.tpl.json b/k8s/upgrades/0.9.0-1.0.0/cstor/cstor-target-svc-patch.tpl.json deleted file mode 100644 index c39df1ba91..0000000000 --- a/k8s/upgrades/0.9.0-1.0.0/cstor/cstor-target-svc-patch.tpl.json +++ /dev/null @@ -1,7 +0,0 @@ -{ - "metadata": { - "labels": { - "openebs.io/version": "@target_version@" - } - } -} diff --git a/k8s/upgrades/0.9.0-1.0.0/cstor/cstor-volume-patch.tpl.json b/k8s/upgrades/0.9.0-1.0.0/cstor/cstor-volume-patch.tpl.json deleted file mode 100644 index c39df1ba91..0000000000 --- a/k8s/upgrades/0.9.0-1.0.0/cstor/cstor-volume-patch.tpl.json +++ /dev/null @@ -1,7 +0,0 @@ -{ - "metadata": { - "labels": { - "openebs.io/version": "@target_version@" - } - } -} diff --git a/k8s/upgrades/0.9.0-1.0.0/cstor/cstor-volume-replica-patch.tpl.json b/k8s/upgrades/0.9.0-1.0.0/cstor/cstor-volume-replica-patch.tpl.json deleted file mode 100644 index 885b959001..0000000000 --- 
a/k8s/upgrades/0.9.0-1.0.0/cstor/cstor-volume-replica-patch.tpl.json +++ /dev/null @@ -1,10 +0,0 @@ -{ - "metadata": { - "finalizers": [ - "cstorvolumereplica.openebs.io/finalizer" - ], - "labels": { - "openebs.io/version": "@target_version@" - } - } -} diff --git a/k8s/upgrades/0.9.0-1.0.0/cstor/cstor_pool_upgrade.sh b/k8s/upgrades/0.9.0-1.0.0/cstor/cstor_pool_upgrade.sh deleted file mode 100755 index c362e4af81..0000000000 --- a/k8s/upgrades/0.9.0-1.0.0/cstor/cstor_pool_upgrade.sh +++ /dev/null @@ -1,280 +0,0 @@ -#!/usr/bin/env bash - -########################################################################### -# STEP: Get SPC name and namespace where OpenEBS is deployed as arguments # -# # -# NOTES: Obtain the pool deployments to perform upgrade operation # -########################################################################### - -upgrade_version="1.0.0" -current_version="0.9.0" - -source util.sh - -function error_msg() { - echo -n "Failed to upgrade pool $spc. Please make sure that the pool $spc " - echo -n "upgrade should be successful before continuing for next step. " - echo -n "Contact OpenEBS team over slack for any further help." -} - -function usage() { - echo - echo "Usage:" - echo - echo "$0 " - echo - echo " Get the SPC name using: kubectl get spc" - echo " Get the namespace where pool pods" - echo " corresponding to SPC are deployed" - exit 1 -} - -function pre_check() { - local ns=$1 - local pod_version="" - ## name=maya-apiserver label is common for both helm and operator yaml - maya_pod_name=$(kubectl get pod -n $ns \ - -l name=maya-apiserver \ - -o jsonpath='{.items[0].metadata.name}') - rc=$?; if [ $rc -ne 0 ]; then echo "Failed to get maya apiserver pod name | Exit code: $rc"; exit 1; error_msg; fi - pod_version=$(verify_openebs_version "pod" $maya_pod_name $ns) - rc=$? - if [ $rc -ne 0 ]; then - error_msg - exit 1 - fi - if [ $pod_version != $upgrade_version ]; then - echo "Pre-checks failed. Please upgrade OpenEBS control components before upgrading the pool" - error_msg - exit 1 - fi -} - -function make_spc_blockdevice_list() { - local spc_bd_list=$1 - local disk_name=$2 - ## For sparse block device name looks like sparse-1234567 - local bd_name=$(echo $disk_name | sed 's|disk-|blockdevice-|g') - - if [ -z $spc_bd_list ]; then - echo "\"$bd_name\"" - else - echo "$spc_bd_list,\"$bd_name\"" - fi -} - -## patch_blockdevice_list_for_spc to patch block device changes related to spc -function patch_blockdevice_list_for_spc() { - local spc_name=$1 - - local spc_disk_list=$(kubectl get spc $spc_name \ - -o jsonpath='{.spec.disks.diskList}' | \ - tr "[]" " ") - local spc_bd_list="" - local no_of_blockdevices=$(echo $spc_disk_list | wc -w) - - ##Patch SPC only if it is manual provisioning - if [ $no_of_blockdevices != 0 ]; then - for disk_name in $spc_disk_list; do - spc_bd_list=$(make_spc_blockdevice_list "$spc_bd_list" "$disk_name") - done - sed "s|@blockdevice_list@|$spc_bd_list|g" spc-patch.tpl.json > spc-patch.json - - kubectl patch spc $spc_name -p "$(cat spc-patch.json)" --type=merge - rc=$? 
- if [ $rc -ne 0 ]; then - echo "Failed to patch spc: $spc_name with block device information | Exit Code: $rc" - error_msg - rm spc-patch.json - exit 1 - fi - - rm spc-patch.json - fi -} - -## make_csp_disk_list will return the jsonpath required to update the csp -function make_csp_disk_list() { - local csp_disk_list=$1 - local disk_device_id=$2 - local disk_name=$3 - local bd_name=$(echo $disk_name | sed 's|disk-|blockdevice-|g') - if [ -z "$csp_disk_list" ]; then - echo "{\"deviceID\": \"$disk_device_id\",\"inUseByPool\": true,\"name\": \"$bd_name\"}" - else - echo "$csp_disk_list,{\"deviceID\": \"$disk_device_id\",\"inUseByPool\": true,\"name\": \"$bd_name\"}" - fi -} - -######################################################################## -# # -# Starting point # -# # -######################################################################## -if [ "$#" -ne 2 ]; then - usage -fi - -spc=$1 -ns=$2 -declare -A csp_blockdevice_list - -## Assumption OpenEBS control plane components will be in same namespace where -## pool pods are running -pre_check $ns - -### Get the deployment pods which are in not running state that are related to provided spc ### -pending_pods=$(kubectl get pod -n $ns \ - -l app=cstor-pool,openebs.io/storage-pool-claim=$spc \ - -o jsonpath='{.items[?(@.status.phase!="Running")].metadata.name}') -rc=$?; if [ $rc -ne 0 ]; then echo "Failed to get pool pods related to spc: $spc | Exit Code: $rc"; error_msg; exit 1; fi - - -## If any deployments pods are in not running state then exit the upgrade process ### -if [ $(echo $pending_pods | wc -w) -ne 0 ]; then - echo "To continue with upgrade script make sure all the pool deployment pods corresponding to $spc must be in running state" - error_msg - exit 1 -fi - -patch_blockdevice_list_for_spc $spc - -### Get the csp list which are related to the given spc ### -csp_list=$(get_csp_list $spc) - -#### Get required info from current csp and use the info while upgrading #### -for csp_name in `echo $csp_list | tr ":" " "`; do - ## Check CSP version ## - version=$(verify_openebs_version "csp" $csp_name $ns) - rc=$? 
- if [ $rc -ne 0 ]; then - error_msg - exit 1 - fi - if [ $version == $upgrade_version ]; then - continue - fi - - ## Get disk info from corresponding sp ## - sp_name=$(kubectl get sp \ - -l openebs.io/cstor-pool=$csp_name \ - -o jsonpath="{.items[*].metadata.name}") - rc=$?; if [ $rc -ne 0 ]; then echo "Failed to get sp related to csp: $csp_name | Exit Code: $rc"; error_msg; exit 1; fi - - ## Below command will give the output in form of - ## [disk-1 disk-2 disk-3 disk-4] - sp_disk_list=$(kubectl get sp $sp_name \ - -o jsonpath="{.spec.disks.diskList}" | \ - tr "[]" " ") - rc=$?; if [ $rc -ne 0 ]; then echo "Failed to get disks related to sp: $sp_name | Exit Code: $rc"; error_msg; exit 1; fi - - ## Below snippet will get related info regarding block device and format - ## information in below format and save it to corresponding csp name - ## "group": [ - ## { - ## "blockDevice": [ - ## { - ## "deviceID": "/var/openebs/sparse/6-ndm-sparse.img", - ## "inUseByPool": true, - ## "name": "sparse-177b6bc2ae2dd332c7a384a02179368b" - ## }, - ## { - ## "deviceID": "/var/openebs/sparse/9-ndm-sparse.img", - ## "inUseByPool": true, - ## "name": "sparse-2239d60eb46b3c26b0428fae4d15c88a" - ## } - ## ] - ## } - ## ] - csp_disk_list="" - for disk_name in $sp_disk_list; do - ## Assuming device Id present in first index - device_id=$(kubectl get disk $disk_name -o jsonpath="{.spec.devlinks[0].links[0]}") - if [ -z $device_id ]; then - device_id=$(kubectl get disk $disk_name -o jsonpath="{.spec.path}") - fi - csp_disk_list=$(make_csp_disk_list "$csp_disk_list" "$device_id" "$disk_name") - done - csp_blockdevice_list[$csp_name]=$csp_disk_list -done - -echo "Patching Pool Deployment with new image" -for csp_name in `echo $csp_list | tr ":" " "`; do - ## Get the pool deployment corresponding to csp - pool_dep=$(kubectl get deploy -n $ns \ - -l app=cstor-pool,openebs.io/storage-pool-claim=$spc \ - -o jsonpath="{.items[?(@.metadata.labels.openebs\.io/cstor-pool=='$csp_name')].metadata.name}") - rc=$? - if [ $rc -ne 0 ]; then - echo "Failed to get deployment related to csp: $csp_name | Exit Code: $rc" - error_msg - exit 1 - fi - - ## We are patching csp after deployment so checking csp version is good - version=$(verify_openebs_version "csp" $csp_name $ns) - rc=$? - if [ $rc -ne 0 ]; then - error_msg - exit 1 - fi - if [ $version == $upgrade_version ]; then - continue - fi - - version=$(verify_openebs_version "deploy" $pool_dep $ns) - rc=$? - if [ $rc -ne 0 ]; then - error_msg - exit 1 - fi - if [ $version == $current_version ]; then - ## Get the replica set corresponding to the deployment ## - pool_rs=$(kubectl get rs -n $ns \ - -l openebs.io/cstor-pool=$csp_name -o jsonpath='{.items[0].metadata.name}') - echo "$pool_dep -> rs is $pool_rs" - - - ## Modifies the cstor-pool-patch template with the original values ## - sed "s/@pool_version@/$upgrade_version/g" cstor-pool-patch.tpl.json > cstor-pool-patch.json - - ## Patch the deployment file ### - kubectl patch deployment --namespace $ns $pool_dep -p "$(cat cstor-pool-patch.json)" - rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: Failed to patch $pool_dep"; error_msg; rm cstor-pool-patch.json ;exit 1; fi - rollout_status=$(kubectl rollout status --namespace $ns deployment/$pool_dep) - rc=$?; if [[ ($rc -ne 0) || ! 
($rollout_status =~ "successfully rolled out") ]];
-        then echo "ERROR: Failed to rollout status for $pool_dep error: $rc"; error_msg; rm cstor-pool-patch.json; exit 1; fi
-
-        ## Deleting the old replica set corresponding to the deployment
-        kubectl delete rs $pool_rs --namespace $ns
-    fi
-
-    ## Remove the reconcile.openebs.io/disable annotation and patch with block
-    ## device information to csp
-    sed "s|@blockdevice_list@|${csp_blockdevice_list[$csp_name]}|g" csp-patch.tpl.json | sed "s|@pool_version@|$upgrade_version|g" > csp-patch.json
-    kubectl patch csp $csp_name -p "$(cat csp-patch.json)" --type=merge
-    rc=$?
-    if [ $rc -ne 0 ]; then
-        echo "Failed to patch spec and annotation for csp: $csp_name | Exit Code: $rc"
-        error_msg
-        rm csp-patch.json
-        rm cstor-pool-patch.json
-        exit 1
-    fi
-
-    ## Cleaning the temporary patch files
-    rm cstor-pool-patch.json
-    rm csp-patch.json
-done
-
-## Delete sp related to the spc
-kubectl delete sp -l openebs.io/cas-type=cstor,openebs.io/storage-pool-claim=$spc
-rc=$?; if [ $rc -ne 0 ]; then echo "Failed to delete sp related to spc: $spc | Exit Code: $rc"; error_msg; exit 1; fi
-
-echo "Upgrade steps are done on pool $spc"
-
-./verify_pool_upgrade.sh $spc $ns
-rc=$?
-if [ $rc -eq 0 ]; then
-    echo "Verification of pool $spc upgrade is successful. Please run volume upgrade scripts"
-fi
diff --git a/k8s/upgrades/0.9.0-1.0.0/cstor/cstor_volume_upgrade.sh b/k8s/upgrades/0.9.0-1.0.0/cstor/cstor_volume_upgrade.sh
deleted file mode 100755
index f3b847c048..0000000000
--- a/k8s/upgrades/0.9.0-1.0.0/cstor/cstor_volume_upgrade.sh
+++ /dev/null
@@ -1,247 +0,0 @@
-#!/usr/bin/env bash
-
-################################################################
-# STEP: Get Persistent Volume (PV) name as argument            #
-#                                                              #
-# NOTES: Obtain the pv to upgrade via "kubectl get pv"         #
-################################################################
-upgrade_version="1.0.0"
-current_version="0.9.0"
-
-source util.sh
-
-function error_msg() {
-    echo "Failed to upgrade volume $pv in namespace $ns. Please make sure that the volume upgrade is successful before moving to application checks"
-}
-
-function usage() {
-    echo
-    echo "Usage:"
-    echo
-    echo "$0 <pv-name> <openebs-namespace>"
-    echo
-    echo "  <pv-name>             Get the PV name using: kubectl get pv"
-    echo "  <openebs-namespace>   Get the namespace where openebs"
-    echo "                        pods are installed"
-    exit 1
-}
-
-function pre_check() {
-    local pv=$1
-    local ns=$2
-    local pod_version=""
-    local csp_name=""
-    local pod_name=""
-    local csp_list=""
-    local spc_name=""
-    csp_list=$(kubectl get cvr -n $ns \
-        -l openebs.io/persistent-volume=$pv \
-        -o jsonpath="{range .items[*]}{@.metadata.labels.cstorpool\.openebs\.io/name};{end}" | tr ";" " ")
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to get csp list | Exit code: $rc"; error_msg; exit 1; fi
-
-    for csp_name in $csp_list; do
-        pod_name=$(kubectl get pod -n $ns \
-            -l app=cstor-pool,openebs.io/cstor-pool=$csp_name \
-            -o jsonpath="{.items[0].metadata.name}")
-        rc=$?; if [ $rc -ne 0 ]; then echo "Failed to get pool pod name of csp: $csp_name | Exit code: $rc"; error_msg; exit 1; fi
-
-        pod_version=$(verify_openebs_version "pod" "$pod_name" "$ns")
-        rc=$?
-        if [ $rc -ne 0 ]; then
-            error_msg
-            exit 1
-        fi
-        if [ $pod_version != $upgrade_version ]; then
-            spc_name=$(kubectl get csp $csp_name \
-                -o jsonpath="{.metadata.labels.openebs\.io/storage-pool-claim}")
-            echo "Pre-checks failed. Please upgrade pool: $spc_name before upgrading the volume $pv in namespace $ns"
-            error_msg
-            exit 1
-        fi
-    done
-}
-
-if [ "$#" -ne 2 ]; then
-    usage
-fi
-
-pv=$1
-ns=$2
-
-# Check if pv exists
-kubectl get pv $pv &>/dev/null;check_pv=$?
-if [ $check_pv -ne 0 ]; then
-    echo "$pv not found"; error_msg; exit 1;
-fi
-
-# Check if CASType is cstor
-cas_type=`kubectl get pv $pv -o jsonpath="{.metadata.annotations.openebs\.io/cas-type}"`
-if [ $cas_type != "cstor" ]; then
-    echo "Cstor volume not found";exit 1;
-elif [ $cas_type == "cstor" ]; then
-    echo "$pv is a cstor volume"
-else
-    echo "Volume is not a cstor volume"; exit 1;
-fi
-
-pvc_name=`kubectl get pv $pv -o jsonpath="{.spec.claimRef.name}"`
-pvc_namespace=`kubectl get pv $pv -o jsonpath="{.spec.claimRef.namespace}"`
-#################################################################
-# STEP: Generate deploy, replicaset and container names from PV #
-#                                                               #
-# NOTES: Ex: If PV="pvc-cec8e86d-0bcc-11e8-be1c-000c298ff5fc",  #
-#                                                               #
-# c-dep: pvc-cec8e86d-0bcc-11e8-be1c-000c298ff5fc-target        #
-#################################################################
-
-c_dep=$(kubectl get deploy -n $ns \
-    -l openebs.io/persistent-volume=$pv,openebs.io/target=cstor-target \
-    -o jsonpath="{.items[*].metadata.name}")
-c_svc=$(kubectl get svc -n $ns \
-    -l openebs.io/persistent-volume=$pv,openebs.io/target-service=cstor-target-svc \
-    -o jsonpath="{.items[*].metadata.name}")
-c_vol=$(kubectl get cstorvolumes \
-    -l openebs.io/persistent-volume=$pv -n $ns \
-    -o jsonpath="{.items[*].metadata.name}")
-c_replicas=$(kubectl get cvr -n $ns \
-    -l openebs.io/persistent-volume=$pv \
-    -o jsonpath="{range .items[*]}{@.metadata.name};{end}" | tr ";" "\n")
-
-# Fetch the older target and replica - ReplicaSet objects which need to be
-# deleted after upgrading. If not deleted, the new pods will be stuck in
-# creating state - due to affinity rules.
-
-c_rs=$(kubectl get rs -n $ns -o name -l openebs.io/persistent-volume=$pv | cut -d '/' -f 2)
-
-
-# Check if openebs resources exist and provisioned version is 0.9.0
-
-if [[ -z $c_rs ]]; then
-    echo "Target Replica set not found"; error_msg; exit 1;
-fi
-
-if [[ -z $c_dep ]]; then
-    echo "Target deployment not found"; error_msg; exit 1;
-fi
-
-if [[ -z $c_svc ]]; then
-    echo "Target svc not found"; error_msg; exit 1;
-fi
-
-if [[ -z $c_vol ]]; then
-    echo "CstorVolumes CR not found"; error_msg; exit 1;
-fi
-
-if [[ -z $c_replicas ]]; then
-    echo "Cstor Volume Replica CR not found"; error_msg; exit 1;
-fi
-
-controller_version=`kubectl get deployment $c_dep -n $ns -o jsonpath='{.metadata.labels.openebs\.io/version}'`
-if [[ "$controller_version" != "$current_version" ]] && [[ "$controller_version" != "$upgrade_version" ]] ; then
-    echo "Current cstor target deployment $c_dep version is not $current_version or $upgrade_version"
-    error_msg
-    exit 1
-fi
-
-controller_service_version=`kubectl get svc $c_svc -n $ns -o jsonpath='{.metadata.labels.openebs\.io/version}'`
-if [[ "$controller_service_version" != "$current_version" ]] && [[ "$controller_service_version" != "$upgrade_version" ]]; then
-    echo "Current cstor target service $c_svc version is not $current_version or $upgrade_version"
-    error_msg
-    exit 1
-fi
-
-cstor_volume_version=`kubectl get cstorvolumes $c_vol -n $ns -o jsonpath='{.metadata.labels.openebs\.io/version}'`
-if [[ "$cstor_volume_version" != "$current_version" ]] && [[ "$cstor_volume_version" != "$upgrade_version" ]]; then
-    echo "Current cstor volume $c_vol version is not $current_version or $upgrade_version"; error_msg; exit 1;
-fi
-
-for replica in $c_replicas
-do
-    replica_version=`kubectl get cvr $replica -n $ns -o jsonpath='{.metadata.labels.openebs\.io/version}'`
-    if [[ "$replica_version" != "$current_version" ]] && [[ "$replica_version" != "$upgrade_version" ]]; then
-        echo "CStor volume replica $replica version is not $current_version or $upgrade_version"; error_msg; exit 1;
-    fi
-done
-
-
-################################################################
-# STEP: Update patch files with appropriate resource names     #
-#                                                              #
-# NOTES: Placeholders for resource names in the patch files    #
-# are replaced with respective values derived from the PV in   #
-# the previous step                                            #
-################################################################
-
-sed "s/@target_version@/$upgrade_version/g" cstor-target-patch.tpl.json > cstor-target-patch.json
-sed "s/@target_version@/$upgrade_version/g" cstor-target-svc-patch.tpl.json > cstor-target-svc-patch.json
-sed "s/@target_version@/$upgrade_version/g" cstor-volume-patch.tpl.json > cstor-volume-patch.json
-sed "s/@target_version@/$upgrade_version/g" cstor-volume-replica-patch.tpl.json > cstor-volume-replica-patch.json
-
-#################################################################################
-# STEP: Patch OpenEBS volume deployments (cstor-target, cstor-svc)              #
-#################################################################################
-
-
-# #### PATCH TARGET DEPLOYMENT ####
-
-if [[ "$controller_version" != "$upgrade_version" ]]; then
-    echo "Upgrading Target Deployment to $upgrade_version"
-
-    kubectl patch deployment --namespace $ns $c_dep -p "$(cat cstor-target-patch.json)"
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch cstor target deployment $c_dep | Exit code: $rc"; error_msg; exit 1; fi
-
-    kubectl delete rs $c_rs --namespace $ns
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to delete cstor replica set $c_rs | Exit code: $rc"; error_msg; exit 1; fi
-
-    rollout_status=$(kubectl rollout status --namespace $ns deployment/$c_dep)
-    rc=$?; if [[ ($rc -ne 0) || ! ($rollout_status =~ "successfully rolled out") ]];
-        then echo "Failed to rollout for deployment $c_dep | Exit code: $rc"; error_msg; exit 1; fi
-else
-    echo "Target deployment $c_dep is already at $upgrade_version"
-fi
-
-# #### PATCH TARGET SERVICE ####
-if [[ "$controller_service_version" != "$upgrade_version" ]]; then
-    echo "Upgrading Target Service to $upgrade_version"
-    kubectl patch service --namespace $ns $c_svc -p "$(cat cstor-target-svc-patch.json)"
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch service $c_svc | Exit code: $rc"; error_msg; exit 1; fi
-else
-    echo "Target service $c_svc is already at $upgrade_version"
-fi
-
-# #### PATCH CSTOR Volume CR ####
-if [[ "$cstor_volume_version" != "$upgrade_version" ]]; then
-    echo "Upgrading cstor volume CR to $upgrade_version"
-    kubectl patch cstorvolume --namespace $ns $c_vol -p "$(cat cstor-volume-patch.json)" --type=merge
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch cstor volumes CR $c_vol | Exit code: $rc"; error_msg; exit 1; fi
-else
-    echo "CStor volume CR $c_vol is already at $upgrade_version"
-fi
-
-# #### PATCH CSTOR Volume Replica CR ####
-
-for replica in $c_replicas
-do
-    if [[ "`kubectl get cvr $replica -n $ns -o jsonpath='{.metadata.labels.openebs\.io/version}'`" != "$upgrade_version" ]]; then
-        echo "Upgrading cstor volume replica $replica to $upgrade_version"
-        kubectl patch cvr $replica --namespace $ns -p "$(cat cstor-volume-replica-patch.json)" --type=merge
-        rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch CstorVolumeReplica $replica | Exit code: $rc"; error_msg; exit 1; fi
-        echo "Successfully updated replica: $replica"
-    else
-        echo "cstor replica $replica is already at $upgrade_version"
-    fi
-done
-
-echo "Clearing temporary files"
-rm cstor-target-patch.json
-rm cstor-target-svc-patch.json
-rm cstor-volume-patch.json
-rm cstor-volume-replica-patch.json
-
-echo "Upgrade steps are done on volume $pv"
-
-./verify_volume_upgrade.sh $pv $ns
-rc=$?
-if [ $rc -eq 0 ]; then
-    echo "Verification of volume $pv upgrade is successful. Please run your application checks"
-fi
diff --git a/k8s/upgrades/0.9.0-1.0.0/cstor/spc-patch.tpl.json b/k8s/upgrades/0.9.0-1.0.0/cstor/spc-patch.tpl.json
deleted file mode 100644
index 5ef737d904..0000000000
--- a/k8s/upgrades/0.9.0-1.0.0/cstor/spc-patch.tpl.json
+++ /dev/null
@@ -1,15 +0,0 @@
-{
-    "metadata": {
-        "annotations": {
-            "reconcile.openebs.io/disable": "false"
-        }
-    },
-    "spec": {
-        "disks": null,
-        "blockDevices": {
-            "blockDeviceList": [
-                @blockdevice_list@
-            ]
-        }
-    }
-}
diff --git a/k8s/upgrades/0.9.0-1.0.0/cstor/util.sh b/k8s/upgrades/0.9.0-1.0.0/cstor/util.sh
deleted file mode 100755
index 8f78f782e8..0000000000
--- a/k8s/upgrades/0.9.0-1.0.0/cstor/util.sh
+++ /dev/null
@@ -1,51 +0,0 @@
-#!/usr/bin/env bash
-
-## Checking the version of OpenEBS ##
-function verify_openebs_version() {
-    local resource=$1
-    local name_res=$2
-    local ns=$3
-    local openebs_version=$(kubectl get $resource $name_res -n $ns \
-        -o jsonpath="{.metadata.labels.openebs\.io/version}")
-    rc=$?
- if [ $rc -ne 0 ]; then - echo "Failed to get version from $resource: $name_res | Exit Code: $rc" - error_msg - exit 1 - fi - - if [[ $openebs_version != $current_version ]] && \ - [[ $openebs_version != $upgrade_version ]]; then - echo "Expected version of $name_res in $resource is $current_version but got $openebs_version" - error_msg - exit 1; - fi - echo $openebs_version -} - -## get_csp_list will return the csp list related corresponding spc -function get_csp_list() { - local csp_list="" - local spc_name=$1 - csp_list=$(kubectl get csp \ - -l openebs.io/storage-pool-claim=$spc_name \ - -o jsonpath="{range .items[*]}{@.metadata.name}:{end}") - rc=$? - if [ $rc -ne 0 ]; then - echo "Failed to get csp related to spc $spc_name" - error_msg - exit 1 - fi - echo $csp_list -} - -function verify_pod_image_tag() { - local pod_name=$1 - local container_name=$2 - local ns=$3 - - local image=$(kubectl get pod $pod_name -n $ns \ - -o jsonpath="{.spec.containers[?(@.name=='$container_name')].image}") - local image_tag=$(echo "$image" | cut -d ':' -f '2') - echo "$image_tag" -} diff --git a/k8s/upgrades/0.9.0-1.0.0/cstor/verify_pool_upgrade.sh b/k8s/upgrades/0.9.0-1.0.0/cstor/verify_pool_upgrade.sh deleted file mode 100755 index 18a4406c35..0000000000 --- a/k8s/upgrades/0.9.0-1.0.0/cstor/verify_pool_upgrade.sh +++ /dev/null @@ -1,133 +0,0 @@ -#!/usr/bin/env bash - -upgrade_version="1.0.0" -current_version="0.9.0" - - -## No need to catch kubectl command errors because we need to continue with -## other checks -source util.sh - -function error_msg() { - echo -n "Upgrade pool $spc is in pending or failed. Please make sure that the pool $spc " - echo -n "upgrade is successful before continuing for next step. " - echo -n "Contact OpenEBS team over slack for any further help." -} - -function usage() { - echo - echo "Usage:" - echo - echo "$0 " - echo - echo " Get the SPC name using: kubectl get spc" - echo " namespace where pool pods" - echo " corresponding to SPC are deployed" - exit 1 -} - -if [ "$#" -ne 2 ]; then - usage -fi - -spc=$1 -ns=$2 -is_upgrade_failed=0 -echo "Verifying pool $spc upgrade..." - -csp_list=$(get_csp_list $spc) - -for csp_name in `echo $csp_list | tr ":" " "`; do - version=$(verify_openebs_version "csp" $csp_name $ns) - rc=$? - if [ $rc -ne 0 ]; then - exit 1 - fi - if [ $version != $upgrade_version ]; then - echo -n "CSP: $csp_name is not upgraded expected version: $upgrade_version " - echo "Got version: $version" - is_upgrade_failed=1 - fi -done - -for csp_name in `echo $csp_list | tr ":" " "`; do - pod_name=$(kubectl get pod -n $ns \ - -l openebs.io/storage-pool-claim=$spc,openebs.io/cstor-pool=$csp_name \ - -o jsonpath='{.items[0].metadata.name}') - rc=$? - if [ $rc -ne 0 ]; then - echo "Failed to get pool pod related to csp: $csp_name | Exit Code: $rc" - exit 1 - fi - - version=$(verify_openebs_version "pod" $pod_name $ns) - rc=$? 
-    if [ $rc -ne 0 ]; then
-        exit 1
-    fi
-    if [ $version != $upgrade_version ]; then
-        echo -n "pool pod: $pod_name is not upgraded expected version: $upgrade_version "
-        echo "Got version: $version"
-        is_upgrade_failed=1
-    fi
-
-    image_version=$(verify_pod_image_tag "$pod_name" "cstor-pool" "$ns")
-    if [ $image_version != $upgrade_version ]; then
-        echo -n "pool pod: $pod_name \"cstor-pool\" container image is not upgraded expected version: $upgrade_version "
-        echo "Got version: $image_version"
-        is_upgrade_failed=1
-    fi
-
-    image_version=$(verify_pod_image_tag "$pod_name" "cstor-pool-mgmt" "$ns")
-    if [ $image_version != $upgrade_version ]; then
-        echo -n "pool pod: $pod_name \"cstor-pool-mgmt\" image is not upgraded expected version: $upgrade_version "
-        echo "Got version: $image_version"
-        is_upgrade_failed=1
-    fi
-
-    image_version=$(verify_pod_image_tag "$pod_name" "maya-exporter" "$ns")
-    if [ $image_version != $upgrade_version ]; then
-        echo -n "pool pod: $pod_name \"maya-exporter\" image is not upgraded expected version: $upgrade_version "
-        echo "Got version: $image_version"
-        is_upgrade_failed=1
-    fi
-
-    bd_list=$(kubectl get csp $csp_name \
-        -o jsonpath='{range .spec.group[*].blockDevice[*]}{.name}:{end}')
-    is_bd_present="false"
-
-    for bd_name in `echo $bd_list | tr ":" " "`; do
-        claim_state=$(kubectl get bd $bd_name -n $ns \
-            -o jsonpath='{.status.claimState}')
-        if [ "$claim_state" != "Claimed" ]; then
-            echo "blockdevice: $bd_name is not yet claimed"
-            is_upgrade_failed=1
-        fi
-        is_bd_present="true"
-    done
-    if [ $is_bd_present == "false" ]; then
-        echo "blockdevice is not found in csp: $csp_name"
-        is_upgrade_failed=1
-    fi
-done
-
-sp_list=$(kubectl get sp -l openebs.io/storage-pool-claim=$spc \
-    -o jsonpath='{range .items[*]}{@.metadata.name} {end}')
-
-sp_count=$(echo $sp_list | wc -w)
-
-if [ $sp_count != 0 ]; then
-    echo "SP is deprecated for cStor but is still available in the cluster. SP list: {$sp_list}"
-    is_upgrade_failed=1
-fi
-
-if [ $is_upgrade_failed == 0 ]; then
-    echo "pool upgrade $spc verification is successful"
-else
-    echo -n "Validation steps failed on pool $spc. This might be "
-    echo "due to ongoing upgrade or errors during upgrade."
-    echo -n "Please run ./verify_pool_upgrade.sh again after "
-    echo "some time. If the issue still persists, contact OpenEBS team over slack for any further help."
-    exit 1
-fi
-exit 0
diff --git a/k8s/upgrades/0.9.0-1.0.0/cstor/verify_volume_upgrade.sh b/k8s/upgrades/0.9.0-1.0.0/cstor/verify_volume_upgrade.sh
deleted file mode 100755
index ac0603772a..0000000000
--- a/k8s/upgrades/0.9.0-1.0.0/cstor/verify_volume_upgrade.sh
+++ /dev/null
@@ -1,135 +0,0 @@
-#!/usr/bin/env bash
-
-upgrade_version="1.0.0"
-current_version="0.9.0"
-
-source util.sh
-
-function error_msg() {
-    echo -n "Upgrade of volume $pv in $ns is pending or failed. Please make sure that the volume $pv "
-    echo -n "upgrade is successful before continuing to the next step. "
-    echo -n "Contact OpenEBS team over slack for any further help."
-}
-
-function usage() {
-    echo
-    echo "Usage:"
-    echo
-    echo "$0 <pv-name> <openebs-namespace>"
-    echo
-    echo "  <pv-name>             Get the PV name using: kubectl get pv"
-    echo "  <openebs-namespace>   Get the namespace where openebs"
-    echo "                        pods are installed"
-    exit 1
-}
-
-## Starting point
-if [ "$#" -ne 2 ]; then
-    usage
-fi
-
-pv=$1
-ns=$2
-is_upgrade_failed=0
-echo "Verifying volume $pv upgrade in namespace $ns..."
-
-# Check if pv exists
-kubectl get pv $pv &>/dev/null;check_pv=$?
-if [ $check_pv -ne 0 ]; then
-    echo "$pv not found"; error_msg; exit 1;
-fi
-
-# Check if CASType is cstor
-cas_type=`kubectl get pv $pv -o jsonpath="{.metadata.annotations.openebs\.io/cas-type}"`
-if [ $cas_type != "cstor" ]; then
-    echo "Cstor volume not found";exit 1;
-elif [ $cas_type == "cstor" ]; then
-    echo "$pv is a cstor volume"
-else
-    echo "Volume is not a cstor volume"; exit 1;
-fi
-
-c_svc=$(kubectl get service -n $ns \
-    -l openebs.io/persistent-volume=$pv,openebs.io/target-service=cstor-target-svc \
-    -o jsonpath="{.items[*].metadata.name}")
-version=$(verify_openebs_version "service" $c_svc $ns)
-
-if [ "$version" != $upgrade_version ]; then
-    echo -n "cstor target service: $c_svc is not upgraded expected version: $upgrade_version "
-    echo "Got version: $version"
-    is_upgrade_failed=1
-fi
-
-c_tgt_pod_name=$(kubectl get pod -n $ns \
-    -l openebs.io/persistent-volume=$pv,openebs.io/target=cstor-target \
-    -o jsonpath="{.items[*].metadata.name}")
-version=$(verify_openebs_version "pod" $c_tgt_pod_name $ns)
-
-if [ "$version" != $upgrade_version ]; then
-    echo -n "cstor target pod: $c_tgt_pod_name is not upgraded expected version: $upgrade_version "
-    echo "Got version: $version"
-    is_upgrade_failed=1
-fi
-
-image_version=$(verify_pod_image_tag "$c_tgt_pod_name" "cstor-istgt" "$ns")
-if [ "$image_version" != $upgrade_version ]; then
-    echo -n "cstor target pod: $c_tgt_pod_name \"cstor-istgt\" container image is not upgraded expected version: $upgrade_version "
-    echo "Got version: $image_version"
-    is_upgrade_failed=1
-fi
-
-image_version=$(verify_pod_image_tag "$c_tgt_pod_name" "cstor-volume-mgmt" "$ns")
-if [ "$image_version" != $upgrade_version ]; then
-    echo -n "cstor target pod: $c_tgt_pod_name \"cstor-volume-mgmt\" container image is not upgraded expected version: $upgrade_version "
-    echo "Got version: $image_version"
-    is_upgrade_failed=1
-fi
-
-image_version=$(verify_pod_image_tag "$c_tgt_pod_name" "maya-volume-exporter" "$ns")
-if [ "$image_version" != $upgrade_version ]; then
-    echo -n "cstor target pod: $c_tgt_pod_name \"maya-volume-exporter\" container image is not upgraded expected version: $upgrade_version "
-    echo "Got version: $image_version"
-    is_upgrade_failed=1
-fi
-
-## Get cstorvolume related to given pv
-c_vol=$(kubectl get cstorvolumes \
-    -l openebs.io/persistent-volume=$pv -n $ns \
-    -o jsonpath="{.items[*].metadata.name}")
-version=$(verify_openebs_version "cstorvolume" $c_vol $ns)
-
-## Verify version of cstorvolume related to given pv
-if [ "$version" != $upgrade_version ]; then
-    echo -n "cstorvolume CR: $c_vol is not upgraded expected version: $upgrade_version "
-    echo "Got version: $version"
-    is_upgrade_failed=1
-fi
-
-## Get cstor volume replicas related to given pv
-c_replicas=$(kubectl get cvr -n $ns \
-    -l openebs.io/persistent-volume=$pv \
-    -o jsonpath="{range .items[*]}{@.metadata.name};{end}" | tr ";" "\n")
-
-## Verify version of cstor volume replicas
-for replica in $c_replicas; do
-    version=$(verify_openebs_version "cvr" $replica $ns)
-
-    if [ "$version" != $upgrade_version ]; then
-        echo -n "cstorvolumereplica CR: $replica is not upgraded expected version: $upgrade_version "
-        echo "Got version: $version"
-        is_upgrade_failed=1
-    fi
-done
-
-if [ $is_upgrade_failed == 0 ]; then
-    echo "volume upgrade $pv verification is successful"
-else
-    echo
-    echo
-    echo -n "Validation steps failed on volume $pv in $ns. This might be "
-    echo "due to ongoing upgrade or errors during upgrade."
- echo -n "Please run ./verify_volume_upgrade.sh again after " - echo "some time. If issue still persist, contact OpenEBS team over slack for any further help." - exit 1 -fi -exit 0 diff --git a/k8s/upgrades/0.9.0-1.0.0/deploy-patch.json b/k8s/upgrades/0.9.0-1.0.0/deploy-patch.json deleted file mode 100644 index c5eeb5c9f3..0000000000 --- a/k8s/upgrades/0.9.0-1.0.0/deploy-patch.json +++ /dev/null @@ -1,9 +0,0 @@ -{ - "spec": { - "selector": { - "matchLabels": { - "openebs.io/version": null - } - } - } -} diff --git a/k8s/upgrades/0.9.0-1.0.0/jiva/jiva-replica-patch.tpl.json b/k8s/upgrades/0.9.0-1.0.0/jiva/jiva-replica-patch.tpl.json deleted file mode 100644 index 9f0f3adb7c..0000000000 --- a/k8s/upgrades/0.9.0-1.0.0/jiva/jiva-replica-patch.tpl.json +++ /dev/null @@ -1,49 +0,0 @@ -{ - "metadata": { - "labels": { - "openebs.io/version": "@target_version@", - "openebs.io/persistent-volume": "@pv_name@", - "openebs.io/replica": "jiva-replica" - } - }, - "spec": { - "selector": { - "matchLabels":{ - "openebs.io/persistent-volume": "@pv_name@", - "openebs.io/replica": "jiva-replica" - } - }, - "template": { - "metadata": { - "labels": { - "openebs.io/version": "@target_version@", - "openebs.io/persistent-volume": "@pv_name@", - "openebs.io/replica": "jiva-replica" - } - }, - "spec": { - "containers": [ - { - "name": "@r_name@", - "image": "quay.io/openebs/jiva:@target_version@" - } - ], - "affinity": { - "podAntiAffinity": { - "requiredDuringSchedulingIgnoredDuringExecution": [ - { - "labelSelector": { - "matchLabels": { - "openebs.io/persistent-volume": "@pv_name@", - "openebs.io/replica": "jiva-replica" - } - }, - "topologyKey": "kubernetes.io/hostname" - } - ] - } - } - } - } - } -} diff --git a/k8s/upgrades/0.9.0-1.0.0/jiva/jiva-target-patch.tpl.json b/k8s/upgrades/0.9.0-1.0.0/jiva/jiva-target-patch.tpl.json deleted file mode 100644 index 78479a92fd..0000000000 --- a/k8s/upgrades/0.9.0-1.0.0/jiva/jiva-target-patch.tpl.json +++ /dev/null @@ -1,28 +0,0 @@ -{ - "metadata": { - "labels": { - "openebs.io/version": "@target_version@" - } - }, - "spec": { - "template": { - "metadata": { - "labels":{ - "openebs.io/version": "@target_version@" - } - }, - "spec": { - "containers": [ - { - "name": "@c_name@", - "image": "quay.io/openebs/jiva:@target_version@" - }, - { - "name": "maya-volume-exporter", - "image": "quay.io/openebs/m-exporter:@target_version@" - } - ] - } - } - } -} diff --git a/k8s/upgrades/0.9.0-1.0.0/jiva/jiva-target-svc-patch.tpl.json b/k8s/upgrades/0.9.0-1.0.0/jiva/jiva-target-svc-patch.tpl.json deleted file mode 100644 index c39df1ba91..0000000000 --- a/k8s/upgrades/0.9.0-1.0.0/jiva/jiva-target-svc-patch.tpl.json +++ /dev/null @@ -1,7 +0,0 @@ -{ - "metadata": { - "labels": { - "openebs.io/version": "@target_version@" - } - } -} diff --git a/k8s/upgrades/0.9.0-1.0.0/jiva/jiva_volume_upgrade.sh b/k8s/upgrades/0.9.0-1.0.0/jiva/jiva_volume_upgrade.sh deleted file mode 100755 index 87fb0ba281..0000000000 --- a/k8s/upgrades/0.9.0-1.0.0/jiva/jiva_volume_upgrade.sh +++ /dev/null @@ -1,284 +0,0 @@ -#!/usr/bin/env bash - -################################################################ -# STEP: Get Persistent Volume (PV) name as argument # -# # -# NOTES: Obtain the pv to upgrade via "kubectl get pv" # -################################################################ - - -function on_exit() { - echo "Clearing temporary files" - rm jiva-replica-patch.json - rm jiva-target-patch.json - rm jiva-target-svc-patch.json -} -trap 'on_exit' EXIT - -target_upgrade_version="1.0.0" 
-current_version="0.9.0"
-
-function usage() {
-    echo
-    echo "Usage:"
-    echo
-    echo "$0 <pv-name>"
-    echo
-    echo "    Get the PV name using: kubectl get pv"
-    exit 1
-}
-
-if [ "$#" -ne 1 ]; then
-    usage
-fi
-
-pv=$1
-
-# Check if pv exists
-kubectl get pv "$pv" &>/dev/null;check_pv=$?
-if [ $check_pv -ne 0 ]; then
-    echo "$pv not found"
-    exit 1
-fi
-
-# Check if CASType is jiva
-cas_type=$(kubectl get pv "$pv" -o jsonpath="{.metadata.annotations.openebs\.io/cas-type}")
-if [ "$cas_type" != "jiva" ]; then
-    echo "Jiva volume not found"
-    exit 1
-else
-    echo "$pv is a jiva volume"
-fi
-
-ns=$(kubectl get pv "$pv" -o jsonpath="{.spec.claimRef.namespace}")
-
-#################################################################
-# STEP: Generate deploy, replicaset and container names from PV #
-#                                                               #
-# NOTES: Ex: If PV="pvc-cec8e86d-0bcc-11e8-be1c-000c298ff5fc"   #
-#                                                               #
-# ctrl-dep: pvc-cec8e86d-0bcc-11e8-be1c-000c298ff5fc-ctrl       #
-#################################################################
-
-c_deploy_name=$(kubectl get deploy -n "$ns" \
-    -l openebs.io/persistent-volume="$pv",openebs.io/controller=jiva-controller \
-    -o jsonpath="{.items[*].metadata.name}" \
-    )
-r_deploy_name=$(kubectl get deploy -n "$ns" \
-    -l openebs.io/persistent-volume="$pv",openebs.io/replica=jiva-replica \
-    -o jsonpath="{.items[*].metadata.name}" \
-    )
-c_svc_name=$(kubectl get svc -n "$ns" \
-    -l openebs.io/persistent-volume="$pv" \
-    -o jsonpath="{.items[*].metadata.name}" \
-    )
-c_con_name=$(kubectl get deploy -n "$ns" "$c_deploy_name" \
-    -o jsonpath="{range .spec.template.spec.containers[*]}{@.name}{'\n'}{end}" \
-    | grep "ctrl-con" \
-    )
-r_con_name=$(kubectl get deploy -n "$ns" "$r_deploy_name" \
-    -o jsonpath="{range .spec.template.spec.containers[*]}{@.name}{'\n'}{end}" \
-    | grep "rep-con" \
-    )
-
-# Fetch the older target and replica - ReplicaSet objects which need to be
-# deleted before upgrading. If not deleted, the new pods will be stuck in
-# creating state - due to affinity rules.
- -c_rs_old=$(kubectl get rs -o name --namespace "$ns" \ - -l openebs.io/persistent-volume="$pv",openebs.io/controller=jiva-controller \ - | cut -d '/' -f 2 \ - ) -r_rs_old_list=$(kubectl get rs -o name --namespace "$ns" \ - -l openebs.io/persistent-volume="$pv",openebs.io/replica=jiva-replica \ - -o jsonpath='{range .items[*]}{@.metadata.name}:{end}' \ - ) - -################################################################ -# STEP: Update patch files with appropriate resource names # -# # -# NOTES: Placeholder for resourcename in the patch files are # -# replaced with respective values derived from the PV in the # -# previous step # -################################################################ - -# Check if openebs resources exist and provisioned version is 0.9.0 - -if [[ -z $c_rs_old ]]; then - echo "target Replica set not found" - exit 1 -fi - -for r_rs in $(echo "$r_rs_old_list" | tr ":" " "); do - if [[ -z $r_rs ]]; then - echo "Replica Replica set not found" - exit 1 - fi -done - -if [[ -z $c_deploy_name ]]; then - echo "target deployment not found" - exit 1 -fi - -if [[ -z $r_deploy_name ]]; then - echo "Replica deployment not found" - exit 1 -fi - -if [[ -z $c_svc_name ]]; then - echo "target service not found" - exit 1 -fi - -if [[ -z $r_con_name ]]; then - echo "Replica container not found" - exit 1 -fi - -if [[ -z $c_con_name ]]; then - echo "target container not found" - exit 1 -fi - -controller_version=$(kubectl get deployment "$c_deploy_name" -n "$ns" \ - -o jsonpath='{.metadata.labels.openebs\.io/version}') -if [[ "$controller_version" != "$current_version" ]] && \ - [[ "$controller_version" != "$target_upgrade_version" ]]; then - echo "Current target deployment $c_deploy_name version is not $current_version or $target_upgrade_version" - exit 1 -fi -replica_version=$(kubectl get deployment "$r_deploy_name" -n "$ns" \ - -o jsonpath='{.metadata.labels.openebs\.io/version}') -if [[ "$replica_version" != "$current_version" ]] && \ - [[ "$replica_version" != "$target_upgrade_version" ]]; then - echo "Current Replica deployment $r_deploy_name version is not $current_version or $target_upgrade_version" - exit 1 -fi - -controller_svc_version=$(kubectl get svc "$c_svc_name" -n "$ns" \ - -o jsonpath='{.metadata.labels.openebs\.io/version}') -if [[ "$controller_svc_version" != "$current_version" ]] && \ - [[ "$controller_svc_version" != "$target_upgrade_version" ]] ; then - echo "Current target service $c_svc_name version is not $current_version or $target_upgrade_version" - exit 1 -fi - -sed "s/@r_name@/$r_con_name/g" jiva-replica-patch.tpl.json \ - | sed "s/@target_version@/$target_upgrade_version/g" \ - | sed "s/@pv_name@/$pv/g" \ - > jiva-replica-patch.json -sed "s/@c_name@/$c_con_name/g" jiva-target-patch.tpl.json \ - | sed "s/@target_version@/$target_upgrade_version/g" \ - > jiva-target-patch.json -sed "s/@target_version@/$target_upgrade_version/g" jiva-target-svc-patch.tpl.json \ - > jiva-target-svc-patch.json - -#Fetch replica pod node names -before_node_names=$(kubectl get pods -n "$ns" \ - -l openebs.io/replica=jiva-replica,openebs.io/persistent-volume="$pv" \ - -o jsonpath='{range .items[*]}{@.spec.nodeName}:{end}') - -################################################################################# -# STEP: Patch OpenEBS volume deployments (jiva-target, jiva-replica & jiva-svc) # -################################################################################# - -# PATCH JIVA REPLICA DEPLOYMENT #### -if [[ "$replica_version" != "$target_upgrade_version" ]]; then - echo 
"Upgrading Replica Deployment to $target_upgrade_version" - - kubectl patch deployment --namespace "$ns" "$r_deploy_name" -p "$(cat jiva-replica-patch.json)" - rc=$?; - if [ $rc -ne 0 ]; then - echo "Failed to patch the deployment $r_deploy_name | Exit code: $rc"; - exit - fi - - for r_rs in $(echo "$r_rs_old_list" | tr ":" " "); do - kubectl delete rs "$r_rs" --namespace "$ns" - rc=$?; - if [ $rc -ne 0 ]; then - echo "Failed to delete replicaset $r_rs | Exit code: $rc"; - exit - fi - done - - rollout_status=$(kubectl rollout status --namespace "$ns" deployment/"$r_deploy_name") - rc=$?; - if [[ ($rc -ne 0) || ! ($rollout_status =~ "successfully rolled out") ]]; then - echo " RollOut for $r_deploy_name failed | Exit code: $rc" - exit - fi -else - echo "Replica Deployment $r_deploy_name is already at $target_upgrade_version" -fi - -# #### PATCH TARGET DEPLOYMENT #### -if [[ "$controller_version" != "$target_upgrade_version" ]]; then - echo "Upgrading target Deployment to $target_upgrade_version" - - kubectl patch deployment --namespace "$ns" "$c_deploy_name" \ - -p "$(cat jiva-target-patch.json)" - rc=$?; - if [ $rc -ne 0 ]; then - echo "Failed to patch deployment $c_deploy_name | Exit code: $rc" - exit - fi - - kubectl delete rs "$c_rs_old" --namespace "$ns" - rc=$?; - if [ $rc -ne 0 ]; then - echo "Failed to deleted replicaset $c_rs_old | Exit code: $rc" - exit - fi - - rollout_status=$(kubectl rollout status --namespace "$ns" deployment/"$c_deploy_name") - rc=$?; - if [[ ($rc -ne 0) || ! ($rollout_status =~ "successfully rolled out") ]]; then - echo " Failed to patch the deployment | Exit code: $rc" - exit - fi -else - echo "controller Deployment $c_deploy_name is already at $target_upgrade_version" -fi - -# #### PATCH TARGET SERVICE #### -if [[ "$controller_svc_version" != "$target_upgrade_version" ]]; then - echo "Upgrading target service to $target_upgrade_version" - kubectl patch service --namespace "$ns" "$c_svc_name" \ - -p "$(cat jiva-target-svc-patch.json)" - rc=$?; - if [ $rc -ne 0 ]; then - echo "Failed to patch the service $c_svc_name | Exit code: $rc" - exit - fi -else - echo "controller service $c_svc_name is already at $target_upgrade_version" -fi - -#Checking node stickiness -after_node_names=$(kubectl get pods -n "$ns" \ - -l openebs.io/replica=jiva-replica,openebs.io/persistent-volume="$pv" \ - -o jsonpath='{range .items[*]}{@.spec.nodeName}:{end}') - -for after_node in $(echo "$after_node_names" | tr ":" " "); do - count=0 - for before_node in $(echo "$before_node_names" | tr ":" " "); do - if [ "$after_node" == "$before_node" ]; then - count=$(( count+1 )) - fi - done - if [ $count != 1 ]; then - echo "Node stickiness failed after upgrade" - exit 1 - fi -done - -echo "Upgrade steps are done on volume $pv" - -./verify_volume_upgrade.sh "$pv" - -rc=$? -if [ $rc -eq 0 ]; then - echo "Verification of volume $pv upgrade is successful. 
Please run your application checks" -fi diff --git a/k8s/upgrades/0.9.0-1.0.0/jiva/verify_volume_upgrade.sh b/k8s/upgrades/0.9.0-1.0.0/jiva/verify_volume_upgrade.sh deleted file mode 100755 index 58176b86b5..0000000000 --- a/k8s/upgrades/0.9.0-1.0.0/jiva/verify_volume_upgrade.sh +++ /dev/null @@ -1,146 +0,0 @@ -#!/usr/bin/env bash - -target_upgrade_version=1.0.0 -is_upgrade_failed=0 - -function usage() { - echo - echo "Usage:" - echo - echo "$0 " - echo - echo " Get the PV name using: kubectl get pv" - exit 1 -} - -if [ "$#" -ne 1 ]; then - usage -fi - -pv=$1 - -echo "verifying jiva $pv volume upgrade" - -ns=$(kubectl get pv "$pv" -o jsonpath="{.spec.claimRef.namespace}") - - -c_deploy_name=$(kubectl get deploy -n "$ns" \ - -l openebs.io/persistent-volume="$pv",openebs.io/controller=jiva-controller \ - -o jsonpath="{.items[*].metadata.name}" \ -) - -r_deploy_name=$(kubectl get deploy -n "$ns" \ - -l openebs.io/persistent-volume="$pv",openebs.io/replica=jiva-replica \ - -o jsonpath="{.items[*].metadata.name}" \ -) -c_svc_name=$(kubectl get svc -n "$ns" \ - -l openebs.io/persistent-volume="$pv" \ - -o jsonpath="{.items[*].metadata.name}" \ -) - -#Fetch REPLICATIONFACTOR from deployment -container_name=$(echo "$pv-ctrl-con") -replication_factor=$(kubectl get deploy "$c_deploy_name" -n "$ns" \ --o jsonpath="{.spec.template.spec.containers[?(@.name=='$container_name')].env[?(@.name=='REPLICATION_FACTOR')].value}") - -#Verifying the upgraded deployment versions -controller_version=$(kubectl get deployment "$c_deploy_name" -n "$ns" \ - -o jsonpath='{.metadata.labels.openebs\.io/version}') -if [ "$controller_version" != "$target_upgrade_version" ]; then - echo "Failed validation for target deployment $c_deploy_name... " - echo "expected version: $target_upgrade_version but got $controller_version" - is_upgrade_failed=1 -fi -replica_version=$(kubectl get deployment "$r_deploy_name" -n "$ns" \ - -o jsonpath='{.metadata.labels.openebs\.io/version}') -if [ "$replica_version" != "$target_upgrade_version" ]; then - echo "Failed validation for replica deployment $r_deploy_name... " - echo "expected version: $target_upgrade_version but got $replica_version" - is_upgrade_failed=1 -fi - -#Verifying the upgraded service versions -controller_svc_version=$(kubectl get svc "$c_svc_name" -n "$ns" \ - -o jsonpath='{.metadata.labels.openebs\.io/version}') -if [ "$controller_svc_version" != "$target_upgrade_version" ] ; then - echo "Failed validation for target service $c_svc_name... " - echo "expected version: $target_upgrade_version but got $controller_svc_version" - is_upgrade_failed=1 -fi - -#Verifying the upgraded deployment images -controller_images=$(kubectl get deployment "$c_deploy_name" -n "$ns" \ - -o jsonpath='{range .spec.template.spec.containers[*]}{@.image}?{end}') -for image in $(echo "$controller_images" | tr "?" " "); do - image_version=$(echo "$image" | cut -d ':' -f 2) - if [ "$image_version" != "$target_upgrade_version" ] ; then - echo "Failed validation for controller deployment $c_deploy_name..." - echo "expected image version: $target_upgrade_version but got $image_version" - is_upgrade_failed=1 - fi -done - -replica_images=$(kubectl get deployment "$r_deploy_name" -n "$ns" \ - -o jsonpath='{range .spec.template.spec.containers[*]}{@.image}?{end}') -for image in $(echo "$replica_images" | tr "?" " "); do - image_version=$(echo "$image" | cut -d ':' -f 2) - if [ "$image_version" != "$target_upgrade_version" ] ; then - echo "Failed validation for replica deployment $r_deploy_name..." 
- echo "expected image version: $target_upgrade_version but got $image_version" - is_upgrade_failed=1 - fi -done - -#Verifying running status of controller and replica pods -running_ctrl_pod_count=$(kubectl get pods -n "$ns" \ --l openebs.io/controller=jiva-controller,openebs.io/persistent-volume="$pv" \ ---no-headers | wc -l | tr -d [:blank:]) -if [ "$running_ctrl_pod_count" != 1 ]; then - echo "Failed validation for controller pod not running" - is_upgrade_failed=1 -fi - -running_rep_pod_count=$(kubectl get pods -n "$ns" \ --l openebs.io/replica=jiva-replica,openebs.io/persistent-volume="$pv" \ ---no-headers | wc -l | tr -d [:blank:]) -if [ "$running_rep_pod_count" != "$replication_factor" ]; then - echo "Failed validation for replica pods not running" - is_upgrade_failed=1 -fi - -#Verifying registered replica count -retry=0 -replica_count=0 -while [[ "$replica_count" != "$replication_factor" && $retry -lt 60 ]] -do - ctr_pod=$(kubectl get pod -n "$ns" \ - -l openebs.io/persistent-volume="$pv",openebs.io/controller=jiva-controller \ - -o jsonpath="{.items[*].metadata.name}" \ - ) - - replica_count=$(kubectl exec -it "$ctr_pod" -n "$ns" --container "$container_name" \ - -- bash -c "curl -s http://localhost:9501/v1/volumes" \ - | grep -oE '("replicaCount":)[0-9]' | cut -d ':' -f 2 - ) - - retry=$(( retry+1 )) - sleep 5 -done - -if [ "$replica_count" != "$replication_factor" ]; then - echo "Failed validation for registered replica count.. " - echo "expected $replication_factor but only $replica_count are registered" - is_upgrade_failed=1 -fi - -if [ $is_upgrade_failed == 0 ]; then - echo "volume upgrade $pv verification is successful" -else - echo -n "Validation steps are failed on volume $pv. This might be" - echo "due to ongoing upgrade or errors during upgrade." - echo -n "Please run ./verify_volume_upgrade.sh again after " - echo "some time. If issue still persist, contact OpenEBS team over slack for any further help." - exit 1 -fi - -exit 0 diff --git a/k8s/upgrades/0.9.0-1.0.0/label_patch.sh b/k8s/upgrades/0.9.0-1.0.0/label_patch.sh deleted file mode 100755 index 8803b536f4..0000000000 --- a/k8s/upgrades/0.9.0-1.0.0/label_patch.sh +++ /dev/null @@ -1,110 +0,0 @@ -#!/usr/bin/env bash -## Below snippet will remove the openebs.io/version label from -## deployment.spec.selector.matchLabels - -function usage() { - echo - echo "Usage:" - echo - echo "$0 " - echo - echo "Get the namespace where openebs setup is running." 
-    exit 1
-}
-
-## is_patch_continue prints "true" if the resource still carries the
-## openebs.io/version selector label (i.e. a patch is required), else "false"
-function is_patch_continue() {
-    local deploy_name=$1
-    local res_name=$2
-    local ns=$3
-    # declare and assign separately so that rc captures kubectl's exit code
-    local selector_version
-    selector_version=$(kubectl get "$res_name" "$deploy_name" -n "$ns" \
-        -o jsonpath='{.spec.selector.matchLabels.openebs\.io/version}')
-    rc=$?; if [ $rc -ne 0 ]; then
-        echo "Failed to get selector version from $res_name $deploy_name | Exit code: $rc"
-        exit 1
-    fi
-    if [ -z "$selector_version" ]; then
-        echo "false"
-    else
-        echo "true"
-    fi
-}
-
-if [ "$#" -ne 1 ]; then
-    usage
-fi
-
-ns=$1
-
-## Remove openebs.io/version from maya-apiserver
-## Get maya-apiserver deployment name
-maya_deploy_name=$(kubectl get deploy \
-    -l name=maya-apiserver -n "$ns"\
-    -o jsonpath='{.items[0].metadata.name}')
-rc=$?; if [ $rc -ne 0 ]; then echo "Failed to get maya apiserver deployment \
-name | Exit code: $rc"; exit 1; fi
-
-is_continue=$(is_patch_continue "$maya_deploy_name" "deployment" "$ns")
-if [ "$is_continue" == "true" ]; then
-    kubectl patch deploy "$maya_deploy_name" -p "$(cat deploy-patch.json)" -n "$ns"
-    rc=$?; if [ $rc -ne 0 ]; then echo -n "Failed to patch deployment $maya_deploy_name | Exit code: $rc"; exit 1; fi
-fi
-
-### admission-server has label selector in deployment file so no need to patch
-
-## Remove openebs.io/version from openebs-provisioner
-## Get openebs-provisioner deployment name
-provisioner_deploy_name=$(kubectl get deploy \
-    -l name=openebs-provisioner -n "$ns"\
-    -o jsonpath='{.items[0].metadata.name}')
-rc=$?; if [ $rc -ne 0 ]; then echo "Failed to get provisioner deployment name | Exit code: $rc"; exit 1; fi
-
-is_continue=$(is_patch_continue "$provisioner_deploy_name" "deployment" "$ns")
-if [ "$is_continue" == "true" ]; then
-    kubectl patch deploy "$provisioner_deploy_name" \
-        -p "$(cat deploy-patch.json)" -n "$ns"
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch deployment $provisioner_deploy_name | Exit code: $rc"; exit 1; fi
-fi
-
-## Remove openebs.io/version from snapshot-provisioner
-## Get snapshot-provisioner deployment name
-snapshot_deploy_name=$(kubectl get deploy \
-    -l name=openebs-snapshot-operator -n "$ns"\
-    -o jsonpath='{.items[0].metadata.name}')
-rc=$?; if [ $rc -ne 0 ]; then echo "Failed to get snapshot deployment name \
-| Exit code: $rc"; exit 1; fi
-
-is_continue=$(is_patch_continue "$snapshot_deploy_name" "deployment" "$ns")
-if [ "$is_continue" == "true" ]; then
-    kubectl patch deploy "$snapshot_deploy_name" -p "$(cat deploy-patch.json)" -n "$ns"
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch deployment $snapshot_deploy_name | Exit code: $rc"; exit 1; fi
-fi
-
-## Remove openebs.io/version from localpv-provisioner
-## Get localpv-provisioner deployment name
-localpv_provisioner_deploy_name=$(kubectl get deploy \
-    -l name=openebs-localpv-provisioner -n "$ns"\
-    -o jsonpath='{.items[0].metadata.name}')
-rc=$?; if [ $rc -ne 0 ]; then echo "Failed to get localpv provisioner \
-deployment name | Exit code: $rc"; exit 1; fi
-
-is_continue=$(is_patch_continue "$localpv_provisioner_deploy_name" "deployment" "$ns")
-if [ "$is_continue" == "true" ]; then
-    kubectl patch deploy "$localpv_provisioner_deploy_name" \
-        -p "$(cat deploy-patch.json)" -n "$ns"
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch deployment $localpv_provisioner_deploy_name | Exit code: $rc"; exit 1; fi
-fi
-
-daemonset_name=$(kubectl get daemonset \
-    -l name=openebs-ndm,openebs.io/component-name=ndm -n "$ns" \
-    -o jsonpath='{.items[0].metadata.name}')
-rc=$?; if [ $rc -ne 0 ]; 
then echo "Failed to get ndm daemonset name \ -| Exit code: $rc"; exit 1; fi - -is_continue=$(is_patch_continue "$daemonset_name" "daemonset" "$ns") -if [ "$is_continue" == "true" ]; then - kubectl patch daemonset "$daemonset_name" -p "$(cat deploy-patch.json)" -n "$ns" - rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch daemonset $daemonset_name | Exit code: $rc"; exit 1; fi -fi -echo "Successfully removed label selectors from openebs deployments" -exit 0 diff --git a/k8s/upgrades/0.9.0-1.0.0/patch-remove-filesystem.json b/k8s/upgrades/0.9.0-1.0.0/patch-remove-filesystem.json deleted file mode 100644 index c821fde7fb..0000000000 --- a/k8s/upgrades/0.9.0-1.0.0/patch-remove-filesystem.json +++ /dev/null @@ -1,3 +0,0 @@ -[ - {"op": "remove", "path": "/spec/fileSystem"} -] \ No newline at end of file diff --git a/k8s/upgrades/0.9.0-1.0.0/patch-remove-partition.json b/k8s/upgrades/0.9.0-1.0.0/patch-remove-partition.json deleted file mode 100644 index 1c3237ea10..0000000000 --- a/k8s/upgrades/0.9.0-1.0.0/patch-remove-partition.json +++ /dev/null @@ -1,3 +0,0 @@ -[ - {"op": "remove", "path": "/spec/partitionDetails"} -] \ No newline at end of file diff --git a/k8s/upgrades/0.9.0-1.0.0/pre-upgrade.sh b/k8s/upgrades/0.9.0-1.0.0/pre-upgrade.sh deleted file mode 100755 index 43c9a41883..0000000000 --- a/k8s/upgrades/0.9.0-1.0.0/pre-upgrade.sh +++ /dev/null @@ -1,333 +0,0 @@ -#!/usr/bin/env bash - -echo "---------pre-upgrade logs----------" > log.txt - -############################################################################### -# STEP 1: Get all block devices present on the cluster and corresponding disk # -# STEP 2: Create block device claim to claim corresponding block device # -# STEP 3: Patch SPC to stop reconciliation # -# # -############################################################################### -updated_version="1.0.0" -current_version="0.9.0" - -function error_msg() { - echo -n "Pre-upgrade script failed. Please make sure pre-upgrade script is " - echo -n "successful before continuing for next step. Contact OpenEBS team over slack for any further help." -} - -function usage() { - echo - echo "Usage:" - echo - echo "$0 " - echo - echo " Namespace in which openebs control plane components are installed" - echo " installation_mode would be \"helm\" if OpenEBS" - echo " is installed using \"helm\" charts (or) \"operator\" if OpenEBS is installed using \"operator yaml\"" - exit 1 -} - -function patch_disk() { - disk=$1 - currentFS=$(kubectl get disk $disk -o jsonpath="{.spec.fileSystem}") - rc=$?; if [ $rc -ne 0 ]; then echo "Failed to get FS details of $disk : $rc"; exit 1; fi - currentPartition=$(kubectl get disk $disk -o jsonpath="{.spec.partitionDetails}") - rc=$?; if [ $rc -ne 0 ]; then echo "Failed to get Partition details of $disk : $rc"; exit 1; fi - - - # if current filesystem is not nil, patch and remove the field - if [ ! -z "$currentFS" ]; then - kubectl patch disk --type json ${disk} -p "$(cat patch-remove-filesystem.json)" - rc=$?; if [ $rc -ne 0 ]; then echo "ERRORFS: $disk : $rc"; exit 1; fi - echo "FS of ${disk} patched" - fi - - # if current partition struct is not nil, patch and remove the field - if [ ! 
-z "$currentPartition" ]; then - kubectl patch disk --type json ${disk} -p "$(cat patch-remove-partition.json)" - rc=$?; if [ $rc -ne 0 ]; then echo "ERRORPT: $disk : $rc"; exit 1; fi - echo "Partition of ${disk} patched" - fi -} - -function is_annotation_patch_continue() { - local spc_name=$1 - local reconcile_value=$(kubectl get spc $spc_name \ - -o jsonpath='{.metadata.annotations.reconcile\.openebs\.io/disable}') - if [ -z "$reconcile_value" ]; then - echo "true" - else - echo "false" - fi -} - -## get_csp_list accepts spc_name as a argument and returns csp list -## corresponding to csp -function get_csp_list() { - local csp_list="" - local spc_name=$1 - - csp_list=$(kubectl get csp \ - -l openebs.io/storage-pool-claim=$spc_name \ - -o jsonpath="{range .items[*]}{@.metadata.name}:{end}") - rc=$? - if [ $rc -ne 0 ]; then - echo "Failed to get csp related to spc $spc" - error_msg - exit 1 - fi - echo $csp_list -} - -## create_bdc_claim_bd accepts spc and disk names as a argument and create block -## device claims to claim corresponding block device -function create_bdc_claim_bd() { - local spc_name=$1 - local disk_name=$2 - local bd_name=$(echo $disk_name | sed 's|disk|blockdevice|') - - ## Below command will get the output as below format - ## nodename:bdc-123454321 - local bd_details=$(kubectl get disk $disk_name \ - -o jsonpath='{.metadata.labels.kubernetes\.io/hostname}:bdc-{.metadata.uid}') - rc=$?; if [ $rc -ne 0 ]; then echo "Failed to get disk: $disk_name details | Exit Code: $rc"; error_msg; exit 1; fi - - local node_name=$(echo $bd_details | cut -d ":" -f 1) - local bdc_name=$(echo $bd_details | cut -d ":" -f 2) - - local spc_uid=$(kubectl get spc $spc_name -o jsonpath='{.metadata.uid}') - rc=$?; if [ $rc -ne 0 ]; then echo "Failed to get spc: $spc_name UID | Exit Code: $rc"; error_msg; exit 1; fi - - sed "s|@spc_name@|$spc_name|g" bdc-create.tpl.json | \ - sed "s|@bdc_name@|$bdc_name|g" | \ - sed "s|@bdc_namespace@|$ns|g" | \ - sed "s|@spc_uid@|$spc_uid|g" | \ - sed "s|@bd_name@|$bd_name|g" | \ - sed "s|@node_name@|$node_name|g" > bdc-create.json - - ## Create block device claim - kubectl apply -f bdc-create.json - rc=$? 
- if [ $rc -ne 0 ]; then - echo "Failed to create bdc: $bdc_name in namespace $ns | Exit Code: $rc" - error_msg - rm bdc-create.json - exit 1 - fi - - ## cleanup temporary file - rm bdc-create.json -} - -## Output of below command -## kubectl exec cstor-sparse-d9r7-66cd7b798c-4qjnt -n openebs -c cstor-pool -- zpool list -v -H -P | awk '{print $1}' -## cstor-49a012ee-8f1a-11e9-8773-54e1ad4a9dd4 -## mirror -## /var/openebs/sparse/3-ndm-sparse.img -## /var/openebs/sparse/0-ndm-sparse.img -## from the above output extracting disk names -function get_underlying_disks() { - local pod_name=$1 - local pool_type=$2 - local zpool_disk_list=$(kubectl exec $pod_name -n $ns -c cstor-pool \ - -- zpool list -v -H -P | \ - awk '{print $1}' | grep -v cstor | grep -v ${map_pool_type[$pool_type]}) - echo $zpool_disk_list -} - -## claim_blockdevices_csp accepts spc name and csp list as a parameters -function claim_blockdevices_csp() { - local spc_name=$1 - local csp_list=$2 - local sp_name="" - local pool_pod_name="" - local found=0 - local csp_disk_len=0 - local sp_disk_len=0 - local csp_disk_list="" - local zpool_disk_list="" - local sp_disk_list="" - for csp_name in `echo $csp_list | tr ":" " "`; do - echo "-----------------CSP $csp_name----------------" >> log.txt - kubectl get csp $csp_name -o yaml >> log.txt - - local csp_version=$(kubectl get csp $csp_name \ - -o jsonpath="{.metadata.labels.openebs\.io/version}") - rc=$?; if [ $rc -ne 0 ]; then echo "Failed to get csp: $csp_name version | Exit Code: $rc"; error_msg; exit 1; fi - if [ $csp_version != $updated_version ] && [ $csp_version != $current_version ]; then - echo -n "csp $csp_name is not in current version $current_version or " - echo "updated version $updated_version" - exit 1 - fi - - if [ $csp_version == $updated_version ]; then - continue - fi - pool_pod_name=$(kubectl get pod -n $ns \ - -l app=cstor-pool,openebs.io/cstor-pool=$csp_name,openebs.io/storage-pool-claim=$spc_name \ - -o jsonpath="{.items[0].metadata.name}") - rc=$?; if [ $rc -ne 0 ]; then echo "Failed to get pool pod name for csp: $csp_name | Exit Code: $rc"; error_msg; exit 1; fi - - pool_type=$(kubectl get csp $csp_name \ - -o jsonpath='{.spec.poolSpec.poolType}') - rc=$?; if [ $rc -ne 0 ]; then echo "Failed to get pool type for csp: $csp_name | Exit Code: $rc"; error_msg; exit 1; fi - - - csp_disk_list=$(kubectl get csp $csp_name \ - -o jsonpath='{.spec.disks.diskList}' | \ - tr "[]" " ") - csp_disk_len=$(echo $csp_disk_list | wc -w) - - zpool_disk_list=$(get_underlying_disks $pool_pod_name $pool_type) - - if [ -z "$zpool_disk_list" ]; then - echo "zpool disk list is empty" - error_msg - exit 1 - fi - - if [ $csp_disk_len == 0 ]; then - echo "csp disk list is empty" - error_msg - exit 1 - fi - - ## In some platforms we are getting some suffix to the zpool_disk_list - for zpool_disk in $zpool_disk_list; do - found=0 - for csp_disk in $csp_disk_list; do - if [[ "$zpool_disk" == "$csp_disk"* ]]; then - found=1 - break - fi - done - if [ $found == 0 ]; then - echo "zpool disk: $zpool_disk is not found in csp: $csp_name disk list: {$csp_disk_list}" - error_msg - exit 1 - fi - done - - sp_name=$(kubectl get sp \ - -l openebs.io/cstor-pool=$csp_name \ - -o jsonpath="{.items[*].metadata.name}") - rc=$?; if [ $rc -ne 0 ]; then echo "Failed to get sp name related to csp: $csp_name | Exit Code: $rc"; error_msg; exit 1; fi - echo "- - - - - - SP $sp_name- - - - - -" >> log.txt - kubectl get sp $sp_name -o yaml >> log.txt - echo "- - - - - - - - - - - - - - - - - " >> log.txt - - ## 
kubectl command get output in below format - ## [sparse-37a7de580322f43a sparse-5a92ced3e2ee21 sparse-5e508018b4dd2c8] - ## and then converts to below format - ## sparse-37a7de580322f43a sparse-5a92ced3e2ee21 sparse-5e508018b4dd2c8 - sp_disk_list=$(kubectl get sp $sp_name \ - -o jsonpath="{.spec.disks.diskList}" | tr "[]" " ") - - sp_disk_len=$(echo $sp_disk_list | wc -w) - - if [ $sp_disk_len -ne $csp_disk_len ]; then - echo "Length of csp disk list $csp_disk_len and sp disk list $sp_disk_len is not matched" - error_msg - exit 1 - fi - - - for disk_name in $sp_disk_list; do - echo "######disk $disk_name#######" >> log.txt - kubectl get disk $disk_name -o yaml >> log.txt - echo "############################" >> log.txt - create_bdc_claim_bd $spc_name $disk_name - done - echo "---------------------------------------------" >> log.txt - done -} - - -## Starting point -if [ "$#" -ne 2 ]; then - usage -fi -ns=$1 -install_option=$2 - -if [ "$install_option" != "operator" ] && [ "$install_option" != "helm" ]; then - echo "Second argument must be either \"operator\" or \"helm\"" - exit 1 -fi - -declare -A map_pool_type -map_pool_type["mirrored"]="mirror" -map_pool_type["striped"]="striped" -map_pool_type["raidz"]="raidz" -map_pool_type["raidz2"]="raidz2" - - -## Apply blockdeviceclaim crd yaml to create CR -kubectl apply -f blockdeviceclaim_crd.yaml -rc=$?; if [ $rc -ne 0 ]; then echo "Failed to create blockdevice crd | Exit Code: $rc"; error_msg; exit 1; fi - -kubectl get nodes -n $ns --show-labels >> log.txt -echo >> log.txt -echo >> log.txt - -kubectl get pods -n $ns --show-labels >> log.txt -echo >> log.txt -echo >> log.txt - -### Get the spc list which are present in the cluster ### -spc_list=$(kubectl get spc -o jsonpath="{range .items[*]}{@.metadata.name}:{end}") -rc=$?; if [ $rc -ne 0 ]; then echo "Failed to list spc in cluster | Exit Code: $rc"; error_msg; exit 1; fi - -#### Get required info from current spc and use the info to claim block device #### -for spc_name in `echo $spc_list | tr ":" " "`; do - echo "========================SPC $spc_name==========================" >> log.txt - kubectl get spc $spc_name -o yaml >> log.txt - csp_list=$(get_csp_list $spc_name) - claim_blockdevices_csp $spc_name $csp_list - echo "==============================================================" >> log.txt - - is_patch=$(is_annotation_patch_continue $spc_name) - if [ $is_patch == "true" ]; then - ## Patching the spc resource with label - kubectl patch spc $spc_name -p "$(cat stop-reconcile-patch.json)" --type=merge - rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch spc: $spc_name with reconcile annotation | Exit Code: $rc"; error_msg; exit 1; fi - fi -done - -ds_name=$(kubectl get pod -n $ns -l openebs.io/component-name=ndm \ - -o jsonpath='{.items[0].metadata.ownerReferences[?(@.kind=="DaemonSet")].name}') -rc=$?; if [ $rc -ne 0 ]; then echo "Failed to get ndm daemonset name in namespace: $ns | Exit Code: $rc"; error_msg; exit 1; fi - -desired_count=$(kubectl get daemonset $ds_name -n $ns \ - -o jsonpath='{.status.desiredNumberScheduled}') -rc=$?; if [ $rc -ne 0 ]; then echo "Failed to get desired scheduled pod count from $ds_name in namespace: $ns | Exit Code: $rc"; error_msg; exit 1; fi - -current_count=$(kubectl get daemonset $ds_name -n $ns \ - -o jsonpath='{.status.currentNumberScheduled}') -rc=$?; if [ $rc -ne 0 ]; then echo "Failed to get current scheduled pod count from $ds_name in namespace: $ns | Exit Code: $rc"; error_msg; exit 1; fi - -if [ $desired_count != $current_count ]; then - 
echo "Daemonset desired pod count: $desired_count is not matched with current pod count: $current_count" - error_msg - exit 1 -fi - -disk_list=$(kubectl get disks -o jsonpath="{range .items[*]}{.metadata.name}:{end}") -rc=$?; if [ $rc -ne 0 ]; then echo "Failed to get disk list : $rc"; exit 1; fi - -for disk_name in `echo $disk_list | tr ":" " "`; do - patch_disk $disk_name -done - -if [ $install_option == "operator" ]; then - ./label_patch.sh $ns - rc=$? - if [ $rc -ne 0 ]; then - echo "Failed to patch control plane deployments" - error_msg - exit 1 - fi -fi - -echo "Pre-Upgrade is successful Please update openebs components" diff --git a/k8s/upgrades/0.9.0-1.0.0/stop-reconcile-patch.json b/k8s/upgrades/0.9.0-1.0.0/stop-reconcile-patch.json deleted file mode 100644 index 8f88f86400..0000000000 --- a/k8s/upgrades/0.9.0-1.0.0/stop-reconcile-patch.json +++ /dev/null @@ -1,7 +0,0 @@ -{ - "metadata": { - "annotations": { - "reconcile.openebs.io/disable": "true" - } - } -} diff --git a/k8s/upgrades/1.0.0-1.1.0/README.md b/k8s/upgrades/1.0.0-1.1.0/README.md deleted file mode 100644 index 516dc46f29..0000000000 --- a/k8s/upgrades/1.0.0-1.1.0/README.md +++ /dev/null @@ -1,346 +0,0 @@ -# UPGRADE FROM OPENEBS 1.0.0 TO 1.1.0 - -## Overview - -This document describes the steps for upgrading OpenEBS from 1.0.0 to 1.1.0 - -The upgrade of OpenEBS is a three step process: -- *Step 1* - Prerequisites -- *Step 2* - Upgrade the OpenEBS Control Plane Components -- *Step 3* - Upgrade the OpenEBS Data Plane Components from previous version (1.0.0) - -### Terminology - -- *OpenEBS Control Plane: Refers to maya-apiserver, openebs-provisioner, etc along w/ respective RBAC components* -- *OpenEBS Data Plane: Refers to Storage Engine pods like cStor, Jiva controller(aka target) & replica pods* - - -## Step 1: Prerequisites - -**Note: It is mandatory to make sure to that all OpenEBS control plane -and data plane components are running with version 1.0.0 before the upgrade.** - -**Note: All steps described in this document need to be performed from a -machine that has access to Kubernetes master** - -- Note down the `namespace` where openebs components are installed. - The following document assumes that namespace to be `openebs`. - -- Note down the `openebs service account`. - The following command will help you to determine the service account name. - ```sh - $ kubectl get deploy -n openebs -l name=maya-apiserver -o jsonpath="{.items[*].spec.template.spec.serviceAccount}" - ``` - The examples in this document assume the service account name is `openebs-maya-operator`. - -- Verify that OpenEBS Control plane is indeed in 1.0.0 version - ```sh - $ kubectl get pods -n openebs -l openebs.io/version=1.0.0 - ``` - - The output will list the control plane services mentioned below, as well as some - of the data plane components. - ```sh - NAME READY STATUS RESTARTS AGE - maya-apiserver-7b65b8b74f-r7xvv 1/1 Running 0 2m8s - openebs-admission-server-588b754887-l5krp 1/1 Running 0 2m7s - openebs-localpv-provisioner-77b965466c-wpfgs 1/1 Running 0 85s - openebs-ndm-5mzg9 1/1 Running 0 103s - openebs-ndm-bmjxx 1/1 Running 0 107s - openebs-ndm-operator-5ffdf76bfd-ldxvk 1/1 Running 0 115s - openebs-ndm-v7vd8 1/1 Running 0 114s - openebs-provisioner-678c549559-gh6gm 1/1 Running 0 2m8s - openebs-snapshot-operator-75dc998946-xdskl 2/2 Running 0 2m6s - ``` - - Verify that `apiserver` is listed. If you have installed with helm charts, - the apiserver name may be openebs-apiserver. 
-
-## Step 2: Upgrade the OpenEBS Control Plane
-
-Upgrade steps vary depending on how OpenEBS was installed.
-Below are steps to upgrade using some common ways to install OpenEBS:
-
-### Upgrade using kubectl (using openebs-operator.yaml):
-
-**Use this mode of upgrade only if OpenEBS was installed using openebs-operator.yaml.**
-
-**The sample steps below will work if you have installed OpenEBS without
-modifying the default values in openebs-operator.yaml. If you have customized
-the openebs-operator.yaml for your cluster, you will have to download the
-1.1.0 openebs-operator.yaml and customize it again**
-
-```
-#Upgrade to OpenEBS control plane components to version 1.1.0
-$ kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.1.0.yaml
-```
-
-### Upgrade using helm chart (using stable/openebs, openebs-charts repo, etc.):
-
-**The sample steps below will work if you have installed openebs with
-default values provided by stable/openebs helm chart.**
-
-Before upgrading via helm, please review the default values available with
-the latest stable/openebs chart
-(https://raw.githubusercontent.com/helm/charts/master/stable/openebs/values.yaml).
-
-- If the default values seem appropriate, you can use the below commands to
-  update OpenEBS. [More](https://hub.helm.sh/charts/stable/openebs) details about the specific chart version.
-  ```sh
-  $ helm upgrade --reset-values <release-name> stable/openebs --version 1.1.0
-  ```
-- If not, customize the values into your copy (say custom-values.yaml),
-  by copying the content from the default values.yaml above and editing the
-  values to suit your environment. You can upgrade using your custom values using:
-  ```sh
-  $ helm upgrade <release-name> stable/openebs --version 1.1.0 -f custom-values.yaml
-  ```
-
-### Using customized operator YAML or helm chart.
-As a first step, you must update your custom helm chart or YAML with 1.1.0
-release tags and changes made in the values/templates. After updating the YAML
-or helm chart or helm chart values, you can use the above procedures to upgrade
-the OpenEBS Control Plane components.
-
-## Step 3: Upgrade the OpenEBS Pools and Volumes
-
-
-**Note: Upgrade functionality is still under active development.
-It is highly recommended to schedule a downtime for the application using the
-OpenEBS PV while performing this upgrade. Also, make sure you have taken a
-backup of the data before starting the below upgrade procedure.**
-
-- Please have the following link handy in case the volume goes read-only during the upgrade:
-  https://docs.openebs.io/docs/next/troubleshooting.html#recovery-readonly-when-kubelet-is-container
-
-- An automatic rollback option is not provided. To roll back, you need to update
-  the controller, exporter, and replica pod images to the previous version.
-
-**Note: Before proceeding with the upgrade of the OpenEBS Data Plane components
-like cStor or Jiva, verify that OpenEBS Control plane is indeed in 1.1.0 version**
-
-  You can use the following command to verify:
-  ```sh
-  $ kubectl get pods -n openebs -l openebs.io/version=1.1.0
-  ```
-
-  The above command should show that the control plane components are upgraded.
- The output should look like below: - ```sh - NAME READY STATUS RESTARTS AGE - maya-apiserver-7b65b8b74f-r7xvv 1/1 Running 0 2m8s - openebs-admission-server-588b754887-l5krp 1/1 Running 0 2m7s - openebs-localpv-provisioner-77b965466c-wpfgs 1/1 Running 0 85s - openebs-ndm-5mzg9 1/1 Running 0 103s - openebs-ndm-bmjxx 1/1 Running 0 107s - openebs-ndm-operator-5ffdf76bfd-ldxvk 1/1 Running 0 115s - openebs-ndm-v7vd8 1/1 Running 0 114s - openebs-provisioner-678c549559-gh6gm 1/1 Running 0 2m8s - openebs-snapshot-operator-75dc998946-xdskl 2/2 Running 0 2m6s - ``` - -**Note: If you have any queries or see something unexpected, please reach out to the -OpenEBS maintainers via [Github Issue](https://github.com/openebs/openebs/issues) or via [OpenEBS Slack](https://slack.openebs.io).** - -As you might have seen by now, control plane components and data plane components -work independently. Even after the OpenEBS Control Plane components have been -upgraded to 1.1.0, the Storage Pools and Volumes (both jiva and cStor) -will continue to work with older versions. - -You can use the below steps for upgrading cstor and jiva components. - -Starting with 1.1.0, the upgrade steps have been changed to eliminate the -need for downloading scripts. You can use `kubectl` to trigger an upgrade job -using Kubernetes Job spec. The following instructions provide details on how -to create your Upgrade Job specs. - -### Upgrade the OpenEBS Jiva PV - -Extract the PV name using `kubectl get pv` - -``` -NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE -pvc-713e3bb6-afd2-11e9-8e79-42010a800065 5G RWO Delete Bound default/bb-jd-claim openebs-jiva-default 46m -``` - -Create a Kubernetes Job spec for upgrading the jiva volume. An example spec is as follows: -``` -#This is an example YAML for upgrading jiva volume. -#Some of the values below needs to be changed to -#match your openebs installation. The fields are -#indicated with VERIFY ---- -apiVersion: batch/v1 -kind: Job -metadata: - #VERIFY that you have provided a unique name for this upgrade job. - #The name can be any valid K8s string for name. This example uses - #the following convention: jiva-vol-- - name: jiva-vol-100110-pvc-713e3bb6-afd2-11e9-8e79-42010a800065 - #VERIFY the value of namespace is same as the namespace where openebs components - # are installed. You can verify using the command: - # `kubectl get pods -n -l openebs.io/component-name=maya-apiserver` - # The above command should return status of the openebs-apiserver. - namespace: openebs -spec: - backoffLimit: 4 - template: - spec: - #VERIFY the value of serviceAccountName is pointing to service account - # created within openebs namespace. Use the non-default account. 
- # by running `kubectl get sa -n ` - serviceAccountName: openebs-maya-operator - containers: - - name: upgrade - args: - - "jiva-volume" - - "--from-version=1.0.0" - - "--to-version=1.1.0" - #VERIFY that you have provided the correct cStor PV Name - - "--pv-name=pvc-713e3bb6-afd2-11e9-8e79-42010a800065" - #Following are optional parameters - #Log Level - - "--v=4" - #DO NOT CHANGE BELOW PARAMETERS - env: - - name: OPENEBS_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - tty: true - image: quay.io/openebs/m-upgrade:1.1.0 - restartPolicy: OnFailure ---- -``` - -Execute the Upgrade Job Spec -``` -$ kubectl apply -f jiva-vol-100110-pvc713.yaml -``` - -You can check the status of the Job using commands like: -``` -$ kubectl get job -n openebs -$ kubectl get pods -n openebs #to check on the name for the job pod -$ kubectl logs -n openebs jiva-upg-100111-pvc-713e3bb6-afd2-11e9-8e79-42010a800065-bgrhx -``` - -### Upgrade cStor Pools - -Extract the SPC name using `kubectl get spc` - -```sh -NAME AGE -cstor-sparse-pool 24m -``` - -The Job spec for upgrade cstor pools is: - -```sh -#This is an example YAML for upgrading cstor SPC. -#Some of the values below needs to be changed to -#match your openebs installation. The fields are -#indicated with VERIFY ---- -apiVersion: batch/v1 -kind: Job -metadata: - #VERIFY that you have provided a unique name for this upgrade job. - #The name can be any valid K8s string for name. This example uses - #the following convention: cstor-spc-- - name: cstor-spc-100110-cstor-sparse-pool - #VERIFY the value of namespace is same as the namespace where openebs components - # are installed. You can verify using the command: - # `kubectl get pods -n -l openebs.io/component-name=maya-apiserver` - # The above command should return status of the openebs-apiserver. - namespace: openebs -spec: - backoffLimit: 4 - template: - spec: - #VERIFY the value of serviceAccountName is pointing to service account - # created within openebs namespace. Use the non-default account. - # by running `kubectl get sa -n ` - serviceAccountName: openebs-maya-operator - containers: - - name: upgrade - args: - - "cstor-spc" - - "--from-version=1.0.0" - - "--to-version=1.1.0" - #VERIFY that you have provided the correct SPC Name - - "--spc-name=cstor-sparse-pool" - #Following are optional parameters - #Log Level - - "--v=4" - #DO NOT CHANGE BELOW PARAMETERS - env: - - name: OPENEBS_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - tty: true - image: quay.io/openebs/m-upgrade:1.1.0 - restartPolicy: OnFailure ---- -``` - - -### Upgrade cStor Volumes - -Extract the PV name using `kubectl get pv` - -```sh -NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE -pvc-1085415d-f84c-11e8-aadf-42010a8000bb 5G RWO Delete Bound default/demo-cstor-sparse-vol1-claim openebs-cstor-sparse 22m -``` - -Create a Kubernetes Job spec for upgrading the cstor volume. An example spec is as follows: -``` -#This is an example YAML for upgrading cstor volume. -#Some of the values below needs to be changed to -#match your openebs installation. The fields are -#indicated with VERIFY ---- -apiVersion: batch/v1 -kind: Job -metadata: - #VERIFY that you have provided a unique name for this upgrade job. - #The name can be any valid K8s string for name. 
This example uses - #the following convention: cstor-vol-- - name: cstor-vol-100110-pvc-c630f6d5-afd2-11e9-8e79-42010a800065 - #VERIFY the value of namespace is same as the namespace where openebs components - # are installed. You can verify using the command: - # `kubectl get pods -n -l openebs.io/component-name=maya-apiserver` - # The above command should return status of the openebs-apiserver. - namespace: openebs -spec: - backoffLimit: 4 - template: - spec: - #VERIFY the value of serviceAccountName is pointing to service account - # created within openebs namespace. Use the non-default account. - # by running `kubectl get sa -n ` - serviceAccountName: openebs-maya-operator - containers: - - name: upgrade - args: - - "cstor-volume" - - "--from-version=1.0.0" - - "--to-version=1.1.0" - #VERIFY that you have provided the correct cStor PV Name - - "--pv-name=pvc-c630f6d5-afd2-11e9-8e79-42010a800065" - #Following are optional parameters - #Log Level - - "--v=4" - #DO NOT CHANGE BELOW PARAMETERS - env: - - name: OPENEBS_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - tty: true - image: quay.io/openebs/m-upgrade:1.1.0 - restartPolicy: OnFailure ---- -``` diff --git a/k8s/upgrades/1.0.0-1.1.0/example-cstor-spc-upgrade-job.yaml b/k8s/upgrades/1.0.0-1.1.0/example-cstor-spc-upgrade-job.yaml deleted file mode 100644 index a9eac73ab3..0000000000 --- a/k8s/upgrades/1.0.0-1.1.0/example-cstor-spc-upgrade-job.yaml +++ /dev/null @@ -1,54 +0,0 @@ -# This is an example YAML for upgrading cstor SPC. -# Some of the values below needs to be changed to -# match your openebs installation. The fields are -# indicated with VERIFY ---- -apiVersion: batch/v1 -kind: Job -metadata: - - # VERIFY that you have provided a unique name for this upgrade job. - # The name can be any valid K8s string for name. This example uses - # the following convention: cstor-spc-- - name: cstor-spc-100110-cstor-sparse-pool - - # VERIFY the value of namespace is same as the namespace where openebs components - # are installed. You can verify using the command: - # `kubectl get pods -n -l openebs.io/component-name=maya-apiserver` - # The above command should return status of the openebs-apiserver. - namespace: openebs - -spec: - backoffLimit: 4 - template: - spec: - - # VERIFY the value of serviceAccountName is pointing to service account - # created within openebs namespace. Use the non-default account. - # by running `kubectl get sa -n ` - serviceAccountName: openebs-maya-operator - - containers: - - name: upgrade - args: - - "cstor-spc" - - "--from-version=1.0.0" - - "--to-version=1.1.0" - - # VERIFY that you have provided the correct SPC Name - - "--spc-name=sparse-claim-auto" - - # Following are optional parameters - # Log Level - - "--v=4" - - # DO NOT CHANGE BELOW PARAMETERS - env: - - name: OPENEBS_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - tty: true - image: quay.io/openebs/m-upgrade:1.1.0 - restartPolicy: OnFailure ---- diff --git a/k8s/upgrades/1.0.0-1.1.0/example-cstor-volume-upgrade-job.yaml b/k8s/upgrades/1.0.0-1.1.0/example-cstor-volume-upgrade-job.yaml deleted file mode 100644 index 9bf8d97a8e..0000000000 --- a/k8s/upgrades/1.0.0-1.1.0/example-cstor-volume-upgrade-job.yaml +++ /dev/null @@ -1,54 +0,0 @@ -# This is an example YAML for upgrading cstor volume. -# Some of the values below needs to be changed to -# match your openebs installation. 
The fields are -# indicated with VERIFY ---- -apiVersion: batch/v1 -kind: Job -metadata: - - # VERIFY that you have provided a unique name for this upgrade job. - # The name can be any valid K8s string for name. This example uses - # the following convention: cstor-vol-- - name: cstor-vol-100110-pvc-c630f6d5-afd2-11e9-8e79-42010a800065 - - # VERIFY the value of namespace is same as the namespace where openebs components - # are installed. You can verify using the command: - # `kubectl get pods -n -l openebs.io/component-name=maya-apiserver` - # The above command should return status of the openebs-apiserver. - namespace: openebs - -spec: - backoffLimit: 4 - template: - spec: - - # VERIFY the value of serviceAccountName is pointing to service account - # created within openebs namespace. Use the non-default account. - # by running `kubectl get sa -n ` - serviceAccountName: openebs-maya-operator - - containers: - - name: upgrade - args: - - "cstor-volume" - - "--from-version=1.0.0" - - "--to-version=1.1.0" - - #VERIFY that you have provided the correct cStor PV Name - - "--pv-name=pvc-c630f6d5-afd2-11e9-8e79-42010a800065" - - #Following are optional parameters - #Log Level - - "--v=4" - - #DO NOT CHANGE BELOW PARAMETERS - env: - - name: OPENEBS_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - tty: true - image: quay.io/openebs/m-upgrade:1.1.0 - restartPolicy: OnFailure ---- diff --git a/k8s/upgrades/1.0.0-1.1.0/example-jiva-vol-upgrade-job.yaml b/k8s/upgrades/1.0.0-1.1.0/example-jiva-vol-upgrade-job.yaml deleted file mode 100644 index a1ce8f20a8..0000000000 --- a/k8s/upgrades/1.0.0-1.1.0/example-jiva-vol-upgrade-job.yaml +++ /dev/null @@ -1,55 +0,0 @@ -# This is an example YAML for upgrading jiva volume. -# Some of the values below needs to be changed to -# match your openebs installation. The fields are -# indicated with VERIFY ---- -apiVersion: batch/v1 -kind: Job -metadata: - - # VERIFY that you have provided a unique name for this upgrade job. - # The name can be any valid K8s string for name. This example uses - # the following convention: jiva-vol-- - name: jiva-vol-100110-pvc-713e3bb6-afd2-11e9-8e79-42010a800065 - - # VERIFY the value of namespace is same as the namespace where openebs components - # are installed. You can verify using the command: - # `kubectl get pods -n -l openebs.io/component-name=maya-apiserver` - # The above command should return status of the openebs-apiserver. - namespace: openebs - -spec: - backoffLimit: 4 - template: - spec: - - # VERIFY the value of serviceAccountName is pointing to service account - # created within openebs namespace. Use the non-default account. 
- # by running `kubectl get sa -n ` - serviceAccountName: openebs-maya-operator - - containers: - - name: upgrade - args: - - "jiva-volume" - - "--from-version=1.0.0" - - "--to-version=1.1.0" - - # VERIFY that you have provided the correct jiva PV Name - - "--pv-name=pvc-713e3bb6-afd2-11e9-8e79-42010a800065" - - # Following are optional parameters - # Log Level - - "--v=4" - - # DO NOT CHANGE BELOW PARAMETERS - env: - - name: OPENEBS_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - tty: true - image: quay.io/openebs/m-upgrade:1.1.0 - restartPolicy: OnFailure ---- - diff --git a/k8s/upgrades/1.12.x-2.12.x/README.md b/k8s/upgrades/1.12.x-2.12.x/README.md deleted file mode 100644 index 143635a821..0000000000 --- a/k8s/upgrades/1.12.x-2.12.x/README.md +++ /dev/null @@ -1,477 +0,0 @@ -# Upgrade OpenEBS - -## Important Notice - -- The community e2e pipelines verify upgrade testing only from non-deprecated releases (1.8.0 and higher) to 2.12.0. If you are running on release older than 1.8.0, OpenEBS recommends you upgrade to the latest version as soon as possible. - -- OpenEBS has deprecated arch specific container images in favor of multi-arch container images. After 2.6.0, the arch specific images are not pushed to Docker or Quay repositories. For example, images like cstor-pool-arm64:2.8.0 should be replaced with corresponding multi-arch image cstor-pool:2.8.0. - -- If you are upgrading Jiva volumes that are running in version 1.6.0 and 1.7.0, you must use these [pre-upgrade steps](https://github.com/openebs/charts/tree/gh-pages/scripts/jiva-tools) to check if your jiva volumes are impacted by [#2956](https://github.com/openebs/openebs/issues/2956). - -### Migration of cStor Pools/Volumes to latest CSPC Pools/CSI based Volumes - -OpenEBS 2.0.0 moves the cStor engine towards `v1` schema and CSI based provisioning. To migrate from old SPC based pools and cStor external-provisioned volume to CSPC based pools and cStor CSI volumes follow the steps mentioned in the [Migration doc](https://github.com/openebs/upgrade/blob/master/docs/migration.md). - -This migration can be performed after upgrading the old OpenEBS resources to `2.0.0` or above. - -### Upgrading CSPC pools and cStor CSI volumes - -If already using CSPC pools and cStor CSI volumes they can be upgraded from `1.10.0` or later to the latest release via steps mentioned in the [Upgrade doc](https://github.com/openebs/upgrade/blob/master/docs/upgrade.md) - -## Overview - -This document describes the steps for the following OpenEBS Upgrade paths: - -- Upgrade from 1.8.0 or later to a newer release up to 2.12.0 - -For other upgrade paths of earlier releases, please refer to the respective directories. -Example: -- the steps to upgrade from 0.9.0 to 1.0.0 will be under [0.9.0-1.0.0](./0.9.0-1.0.0/). -- the steps to upgrade from 1.0.0 or later to a newer release up to 1.12.x will be under [1.x.0-1.12.x](./1.x.0-1.12.x/README.md). 
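-
-If you are unsure which upgrade path applies, you can first check the version
-label on the running control plane pods. The snippet below is a minimal sketch
-(not part of the original document); it assumes all OpenEBS components carry
-the `openebs.io/version` label used in the verification steps later in this
-document:
-```sh
-# Prints the distinct openebs.io/version labels on pods in the openebs namespace
-$ kubectl get pods -n openebs \
-    -o jsonpath='{range .items[*]}{.metadata.labels.openebs\.io/version}{"\n"}{end}' | sort -u
-```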
- -The upgrade of OpenEBS is a three step process: -- *Step 1* - Prerequisites -- *Step 2* - Upgrade the OpenEBS Control Plane Components -- *Step 3* - Upgrade the OpenEBS Data Plane Components - -### Terminology - -- *OpenEBS Control Plane: Refers to maya-apiserver, openebs-provisioner, node-disk-manager etc along w/ respective RBAC components* -- *OpenEBS Data Plane: Refers to Storage Engine pods like cStor, Jiva controller(aka target) & replica pods* - - -## Step 1: Prerequisites - -**Note: All steps described in this document need to be performed from a machine that has access to Kubernetes master.** - -**Note: It is mandatory to make sure to that all OpenEBS control plane and data plane components are running with the expected version before the upgrade.** - -**Note: If the current version is 2.0.0 or below please run the given command to cleanup old upgradetask resources which can result in [error](https://github.com/openebs/openebs/issues/3392).** -```bash -kubectl -n delete utasks --all -``` - -- **For upgrading to the latest release (2.12.0), the previous version should be minimum 1.6.0** - -- Note down the `namespace` where openebs components are installed. - The following document assumes that namespace to be `openebs`. - -- Note down the `openebs service account`. - The following command will help you to determine the service account name. - ```sh - $ kubectl get deploy -n openebs -l name=maya-apiserver -o jsonpath="{.items[*].spec.template.spec.serviceAccount}" - ``` - The examples in this document assume the service account name is `openebs-maya-operator`. - -- Verify that OpenEBS Control plane is indeed in expected version. Say 1.12.0 - ```sh - $ kubectl get pods -n openebs -l openebs.io/version=1.12.0 - ``` - - The output will list the control plane services mentioned below, as well as some - of the data plane components. - ```sh - NAME READY STATUS RESTARTS AGE - maya-apiserver-7b65b8b74f-r7xvv 1/1 Running 0 2m8s - openebs-admission-server-588b754887-l5krp 1/1 Running 0 2m7s - openebs-localpv-provisioner-77b965466c-wpfgs 1/1 Running 0 85s - openebs-ndm-5mzg9 1/1 Running 0 103s - openebs-ndm-bmjxx 1/1 Running 0 107s - openebs-ndm-operator-5ffdf76bfd-ldxvk 1/1 Running 0 115s - openebs-ndm-v7vd8 1/1 Running 0 114s - openebs-provisioner-678c549559-gh6gm 1/1 Running 0 2m8s - openebs-snapshot-operator-75dc998946-xdskl 2/2 Running 0 2m6s - ``` - - Verify that `apiserver` is listed. If you have installed with helm charts, - the apiserver name may be openebs-apiserver. - -## Step 2: Upgrade the OpenEBS Control Plane - -Upgrade steps vary depending on the way OpenEBS was installed by you. -Below are steps to upgrade using some common ways to install OpenEBS: - -### Prerequisite for control plane upgrade -1. Make sure all the blockdevices that are in use by cstor or localPV are connected to the node. -2. Make sure that all manually created and claimed blockdevices are excluded in the NDM configmap path -filter. - -**NOTE: Upgrade of LocalPV rawblock volumes are not supported. Please exclude it in configmap** - -eg: If partitions or dm devices are used, make sure it is added to the config map. 
-To edit the config map, run the following command -```bash -kubectl edit cm openebs-ndm-config -n openebs -``` - -Add the partitions or manually created disks into path filter if not already present - -```yaml -- key: path-filter - name: path filter - state: true - include: "" - exclude: "loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md,/dev/rbd, /dev/sda1, /dev/nvme0n1p1" -``` - -Here, `/dev/sda1` and `/dev/nvm0n1p1` are partitions that are in use and blockdevices were manually created. It needs -to be included in the path filter of configmap - -**Note: If you have any queries or see something unexpected, please reach out to the OpenEBS maintainers via [Github Issue](https://github.com/openebs/openebs/issues) or via #openebs channel on [Kubernetes Slack](https://slack.k8s.io).** - -### Upgrade using kubectl (using openebs-operator.yaml): - -**Use this mode of upgrade only if OpenEBS was installed using openebs-operator.yaml.** - -**The sample steps below will work if you have installed OpenEBS without -modifying the default values in openebs-operator.yaml. If you have customized -the openebs-operator.yaml for your cluster, you will have to download the -desired openebs-operator.yaml and customize it again** - -``` -#Upgrade to OpenEBS control plane components to desired version. Say 2.12.0 -$ kubectl apply -f https://openebs.github.io/charts/2.12.0/openebs-operator.yaml -``` - -### Upgrade using helm chart (using openebs/openebs, openebs-charts repo, etc.,): - -**The sample steps below will work if you have installed openebs with -default values provided by openebs/openebs helm chart.** - -Before upgrading via helm, please review the default values available with -latest openebs/openebs chart. -(https://github.com/openebs/charts/blob/master/charts/openebs/values.yaml). - -- If the default values seem appropriate, you can use the below commands to - update OpenEBS. [More](https://hub.helm.sh/charts/openebs/openebs) details about the specific chart version. - ```sh - $ helm upgrade --reset-values openebs/openebs --version 2.12.0 - ``` -- If not, customize the values into your copy (say custom-values.yaml), - by copying the content from above default yamls and edit the values to - suite your environment. You can upgrade using your custom values using: - ```sh - $ helm upgrade openebs/openebs --version 2.12.0 -f custom-values.yaml` - ``` - -### Using customized operator YAML or helm chart. -As a first step, you must update your custom helm chart or YAML with desired -release tags and changes made in the values/templates. After updating the YAML -or helm chart or helm chart values, you can use the above procedures to upgrade -the OpenEBS Control Plane components. - -### After Upgrade -From 2.0.0 onwards, OpenEBS uses a new algorithm to generate the UUIDs for blockdevices to identify any type of disk across the -nodes in the cluster. Therefore, blockdevices that were not used (Unclaimed state) in earlier versions will be made -Inactive and new resources will be created for them. Existing devices that are in use will continue to work normally. - -**Note: After upgrading to 2.0.0 or above. If the devices that were in use before the upgrade are no longer required and becomes unclaimed at any point of time. Please restart NDM daemon pod on that node to sync those devices with the latest changes.** - -## Step 3: Upgrade the OpenEBS Pools and Volumes - -**Note:** -- It is highly recommended to schedule a downtime for the application using the -OpenEBS PV while performing this upgrade. 
Also, make sure you have taken a
-backup of the data before starting the upgrade procedure below.
-- Please have the following link handy in case the volume becomes read-only during the upgrade:
-  https://docs.openebs.io/docs/next/t-volume-provisioning.html#recovery-readonly-when-kubelet-is-container
-- If the pool and volume images have the prefix `quay.io/openebs/` then please add the flag
-  ```yaml
-  - "--to-version-image-prefix=openebs/"
-  ```
-  as the new multi-arch images are not pushed to quay.
-  It can also be used to specify any other private repository or airgap prefix in use.
-- Before proceeding with the upgrade of OpenEBS Data Plane components like cStor or Jiva, verify that the OpenEBS Control Plane is indeed at the desired version.
-  You can use the following command to verify that components are at 2.12.0:
-  ```sh
-  $ kubectl get pods -n openebs -l openebs.io/version=2.12.0
-  ```
-  The above command should show that the control plane components have been upgraded.
-  The output should look like below:
-  ```sh
-  NAME                                           READY   STATUS    RESTARTS   AGE
-  maya-apiserver-7b65b8b74f-r7xvv                1/1     Running   0          2m8s
-  openebs-admission-server-588b754887-l5krp      1/1     Running   0          2m7s
-  openebs-localpv-provisioner-77b965466c-wpfgs   1/1     Running   0          85s
-  openebs-ndm-5mzg9                              1/1     Running   0          103s
-  openebs-ndm-bmjxx                              1/1     Running   0          107s
-  openebs-ndm-operator-5ffdf76bfd-ldxvk          1/1     Running   0          115s
-  openebs-ndm-v7vd8                              1/1     Running   0          114s
-  openebs-provisioner-678c549559-gh6gm           1/1     Running   0          2m8s
-  openebs-snapshot-operator-75dc998946-xdskl     2/2     Running   0          2m6s
-  ```
-
-**Note: If you have any queries or see something unexpected, please reach out to the OpenEBS maintainers via [Github Issue](https://github.com/openebs/openebs/issues) or via the #openebs channel on [Kubernetes Slack](https://slack.k8s.io).**
-
-As you might have seen by now, control plane components and data plane components
-work independently. Even after the OpenEBS Control Plane components have been
-upgraded to 2.12.0, the Storage Pools and Volumes (both Jiva and cStor)
-will continue to work with the older versions.
-
-You can use the below steps for upgrading the cStor and Jiva components.
-
-Starting with 1.1.0, the upgrade steps have been changed to eliminate the
-need for downloading scripts. You can use `kubectl` to trigger an upgrade job
-using a Kubernetes Job spec.
-
-The following instructions provide details on how to create your Upgrade Job specs.
-Please ensure the `from` and `to` versions are as per your upgrade path. The below
-examples show upgrading from 1.12.0 to 2.12.0.
-
-### Upgrade the OpenEBS Jiva PV
-
-**Note: Scaling down the application will speed up the upgrade process and prevent any read-only issues. It is highly recommended to scale down the application when upgrading from 1.8.0 or earlier versions of the volume.**
-
-Extract the PV name using `kubectl get pv`
-
-```
-NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS           REASON   AGE
-pvc-713e3bb6-afd2-11e9-8e79-42010a800065   5G         RWO            Delete           Bound    default/bb-jd-claim    openebs-jiva-default            46m
-pvc-80c120e8-bd09-4c5e-aaeb-3c37464240c5   4G         RWO            Delete           Bound    default/jiva-vol3      jiva-1r                         13m
-pvc-82a2d097-c666-4f29-820d-6b7e41541c11   4G         RWO            Delete           Bound    default/jiva-vol2      jiva-1r                         43m
-
-```
-
-Create a Kubernetes Job spec for upgrading the jiva volume. An example spec is as follows:
-```yaml
-#This is an example YAML for upgrading a jiva volume.
-#Some of the values below need to be changed to
-#match your openebs installation. The fields are
-#indicated with VERIFY
----
-apiVersion: batch/v1
-kind: Job
-metadata:
-  #VERIFY that you have provided a unique name for this upgrade job.
-  #The name can be any valid K8s string for name. This example uses
-  #the following convention: jiva-vol-<flattened-from-to-versions>
-  name: jiva-vol-1120240
-
-  #VERIFY the value of namespace is the same as the namespace where openebs components
-  # are installed. You can verify this using the command:
-  # `kubectl get pods -n <openebs-namespace> -l openebs.io/component-name=maya-apiserver`
-  # The above command should return the status of the openebs-apiserver.
-  namespace: openebs
-
-spec:
-  backoffLimit: 4
-  template:
-    spec:
-      # VERIFY the value of serviceAccountName is pointing to the service account
-      # created within the openebs namespace. Use the non-default account,
-      # found by running `kubectl get sa -n <openebs-namespace>`
-      serviceAccountName: openebs-maya-operator
-      containers:
-      - name: upgrade
-        args:
-        - "jiva-volume"
-
-        # --from-version is the current version of the volume
-        - "--from-version=1.12.0"
-
-        # --to-version is the desired upgrade version
-        - "--to-version=2.12.0"
-
-        # If the pool and volume images have the prefix `quay.io/openebs/`
-        # then please add this flag, as the new multi-arch images are not pushed to quay.
-        # It can also be used to specify any other private repository or airgap prefix in use.
-        # "--to-version-image-prefix=openebs/"
-
-        # Bulk upgrade is supported
-        # To make use of it, please provide the list of PVs
-        # as mentioned below
-        - "pvc-1bc3b45a-3023-4a8e-a94b-b457cf9529b4"
-        - "pvc-82a2d097-c666-4f29-820d-6b7e41541c11"
-
-        #Following are optional parameters
-        #Log Level
-        - "--v=4"
-        #DO NOT CHANGE BELOW PARAMETERS
-        env:
-        - name: OPENEBS_NAMESPACE
-          valueFrom:
-            fieldRef:
-              fieldPath: metadata.namespace
-        tty: true
-
-        # the image version should be the same as the --to-version mentioned above
-        # in the args of the job
-        image: openebs/m-upgrade:2.12.0
-        imagePullPolicy: Always
-      restartPolicy: OnFailure
----
-```
-
-Execute the Upgrade Job spec:
-```sh
-$ kubectl apply -f jiva-vol-1120240.yaml
-```
-
-You can check the status of the Job using commands like:
-```sh
-$ kubectl get job -n openebs
-$ kubectl get pods -n openebs #to check on the name for the job pod
-$ kubectl logs -n openebs jiva-vol-1120240-bgrhx
-```
-
-### Upgrade cStor Pools
-
-Extract the SPC name using `kubectl get spc`
-
-```sh
-NAME                AGE
-cstor-disk-pool     26m
-cstor-sparse-pool   24m
-```
-
-The Job spec for upgrading cStor pools is:
-
-```yaml
-#This is an example YAML for upgrading a cstor SPC.
-#Some of the values below need to be changed to
-#match your openebs installation. The fields are
-#indicated with VERIFY
----
-apiVersion: batch/v1
-kind: Job
-metadata:
-  #VERIFY that you have provided a unique name for this upgrade job.
-  #The name can be any valid K8s string for name. This example uses
-  #the following convention: cstor-spc-<flattened-from-to-versions>
-  name: cstor-spc-1120240
-
-  #VERIFY the value of namespace is the same as the namespace where openebs components
-  # are installed. You can verify this using the command:
-  # `kubectl get pods -n <openebs-namespace> -l openebs.io/component-name=maya-apiserver`
-  # The above command should return the status of the openebs-apiserver.
-  namespace: openebs
-spec:
-  backoffLimit: 4
-  template:
-    spec:
-      #VERIFY the value of serviceAccountName is pointing to the service account
-      # created within the openebs namespace. Use the non-default account,
-      # found by running `kubectl get sa -n <openebs-namespace>`
-      serviceAccountName: openebs-maya-operator
-      containers:
-      - name: upgrade
-        args:
-        - "cstor-spc"
-
-        # --from-version is the current version of the pool
-        - "--from-version=1.12.0"
-
-        # --to-version is the desired upgrade version
-        - "--to-version=2.12.0"
-
-        # If the pool and volume images have the prefix `quay.io/openebs/`
-        # then please add this flag, as the new multi-arch images are not pushed to quay.
-        # It can also be used to specify any other private repository or airgap prefix in use.
-        # "--to-version-image-prefix=openebs/"
-
-        # Bulk upgrade is supported
-        # To make use of it, please provide the list of SPCs
-        # as mentioned below
-        - "cstor-sparse-pool"
-        - "cstor-disk-pool"
-
-        #Following are optional parameters
-        #Log Level
-        - "--v=4"
-        #DO NOT CHANGE BELOW PARAMETERS
-        env:
-        - name: OPENEBS_NAMESPACE
-          valueFrom:
-            fieldRef:
-              fieldPath: metadata.namespace
-        tty: true
-
-        # the image version should be the same as the --to-version mentioned above
-        # in the args of the job
-        image: openebs/m-upgrade:2.12.0
-        imagePullPolicy: Always
-      restartPolicy: OnFailure
----
-```
-
-
-### Upgrade cStor Volumes
-
-Extract the PV name using `kubectl get pv`
-
-```sh
-$ kubectl get pv
-NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                   STORAGECLASS           REASON   AGE
-pvc-1085415d-f84c-11e8-aadf-42010a8000bb   5G         RWO            Delete           Bound    default/demo-cstor-sparse-vol1-claim    openebs-cstor-sparse            22m
-pvc-a4aba0e9-8ad3-4d18-9b34-5e6e7cea2eb3   4G         RWO            Delete           Bound    default/cstor-disk-vol                  openebs-cstor-disk              53s
-```
-
-Create a Kubernetes Job spec for upgrading the cstor volume. An example spec is as follows:
-```yaml
-#This is an example YAML for upgrading a cstor volume.
-#Some of the values below need to be changed to
-#match your openebs installation. The fields are
-#indicated with VERIFY
----
-apiVersion: batch/v1
-kind: Job
-metadata:
-  #VERIFY that you have provided a unique name for this upgrade job.
-  #The name can be any valid K8s string for name. This example uses
-  #the following convention: cstor-vol-<flattened-from-to-versions>
-  name: cstor-vol-1120240
-
-  #VERIFY the value of namespace is the same as the namespace where openebs components
-  # are installed. You can verify this using the command:
-  # `kubectl get pods -n <openebs-namespace> -l openebs.io/component-name=maya-apiserver`
-  # The above command should return the status of the openebs-apiserver.
-  namespace: openebs
-
-spec:
-  backoffLimit: 4
-  template:
-    spec:
-      #VERIFY the value of serviceAccountName is pointing to the service account
-      # created within the openebs namespace. Use the non-default account,
-      # found by running `kubectl get sa -n <openebs-namespace>`
-      serviceAccountName: openebs-maya-operator
-      containers:
-      - name: upgrade
-        args:
-        - "cstor-volume"
-
-        # --from-version is the current version of the volume
-        - "--from-version=1.12.0"
-
-        # --to-version is the desired upgrade version
-        - "--to-version=2.12.0"
-
-        # If the pool and volume images have the prefix `quay.io/openebs/`
-        # then please add this flag, as the new multi-arch images are not pushed to quay.
-        # It can also be used to specify any other private repository or airgap prefix in use.
-        # "--to-version-image-prefix=openebs/"
-
-        # Bulk upgrade is supported from 1.9
-        # To make use of it, please provide the list of PVs
-        # as mentioned below
-        - "pvc-c630f6d5-afd2-11e9-8e79-42010a800065"
-        - "pvc-a4aba0e9-8ad3-4d18-9b34-5e6e7cea2eb3"
-
-        #Following are optional parameters
-        #Log Level
-        - "--v=4"
-        #DO NOT CHANGE BELOW PARAMETERS
-        env:
-        - name: OPENEBS_NAMESPACE
-          valueFrom:
-            fieldRef:
-              fieldPath: metadata.namespace
-        tty: true
-
-        # the image version should be the same as the --to-version mentioned above
-        # in the args of the job
-        image: openebs/m-upgrade:2.12.0
-        imagePullPolicy: Always
-      restartPolicy: OnFailure
----
-```
diff --git a/k8s/upgrades/1.x.0-1.12.x/README.md b/k8s/upgrades/1.x.0-1.12.x/README.md
deleted file mode 100644
index cfb3dffae8..0000000000
--- a/k8s/upgrades/1.x.0-1.12.x/README.md
+++ /dev/null
@@ -1,419 +0,0 @@
-# Upgrade OpenEBS
-
-## Overview
-
-This document describes the steps for the following OpenEBS Upgrade paths:
-
-- Upgrade from 1.0.0 or later to a newer release up to 1.12.0
-
-For other upgrade paths, please refer to the respective directories.
-For example, the steps to upgrade from 0.9.0 to 1.0.0 will be under [0.9.0-1.0.0](./0.9.0-1.0.0/).
-
-The upgrade of OpenEBS is a three step process:
-- *Step 1* - Prerequisites
-- *Step 2* - Upgrade the OpenEBS Control Plane Components
-- *Step 3* - Upgrade the OpenEBS Data Plane Components
-
-### Terminology
-
-- *OpenEBS Control Plane: Refers to maya-apiserver, openebs-provisioner, etc., along with the respective RBAC components*
-- *OpenEBS Data Plane: Refers to Storage Engine pods like cStor, Jiva controller (aka target) & replica pods*
-
-
-## Step 1: Prerequisites
-
-**Note: It is mandatory to make sure that all OpenEBS control plane
-and data plane components are running with the expected version before the upgrade.**
-- **For upgrading to the latest release (1.12.0), the previous version should be 1.x.0 (1.0.0 or newer)**
-
-**Note: All steps described in this document need to be performed from a
-machine that has access to the Kubernetes master.**
-
-- Note down the `namespace` where openebs components are installed.
-  The following document assumes that namespace is `openebs`.
-
-- Note down the `openebs service account`.
-  The following command will help you determine the service account name.
-  ```sh
-  $ kubectl get deploy -n openebs -l name=maya-apiserver -o jsonpath="{.items[*].spec.template.spec.serviceAccount}"
-  ```
-  The examples in this document assume the service account name is `openebs-maya-operator`.
-
-- Verify that the OpenEBS Control Plane is indeed at the expected version, say 1.0.0:
-  ```sh
-  $ kubectl get pods -n openebs -l openebs.io/version=1.0.0
-  ```
-
-  The output will list the control plane services mentioned below, as well as some
-  of the data plane components.
-  ```sh
-  NAME                                           READY   STATUS    RESTARTS   AGE
-  maya-apiserver-7b65b8b74f-r7xvv                1/1     Running   0          2m8s
-  openebs-admission-server-588b754887-l5krp      1/1     Running   0          2m7s
-  openebs-localpv-provisioner-77b965466c-wpfgs   1/1     Running   0          85s
-  openebs-ndm-5mzg9                              1/1     Running   0          103s
-  openebs-ndm-bmjxx                              1/1     Running   0          107s
-  openebs-ndm-operator-5ffdf76bfd-ldxvk          1/1     Running   0          115s
-  openebs-ndm-v7vd8                              1/1     Running   0          114s
-  openebs-provisioner-678c549559-gh6gm           1/1     Running   0          2m8s
-  openebs-snapshot-operator-75dc998946-xdskl     2/2     Running   0          2m6s
-  ```
-
-  Verify that `apiserver` is listed. If you have installed with helm charts,
-  the apiserver name may be openebs-apiserver.
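-
-If you want a consolidated view of the version label on every OpenEBS pod, rather than filtering by one version at a time, something along these lines can be used (a sketch, assuming the components run in the `openebs` namespace):
-
-```sh
-# Print each pod together with its openebs.io/version label.
-$ kubectl get pods -n openebs \
-    -o custom-columns='NAME:.metadata.name,VERSION:.metadata.labels.openebs\.io/version'
-```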
-
-## Step 2: Upgrade the OpenEBS Control Plane
-
-Upgrade steps vary depending on how OpenEBS was installed.
-Below are the steps to upgrade for some common ways of installing OpenEBS:
-
-### Upgrade using kubectl (using openebs-operator.yaml):
-
-**Use this mode of upgrade only if OpenEBS was installed using openebs-operator.yaml.**
-
-**The sample steps below will work if you have installed OpenEBS without
-modifying the default values in openebs-operator.yaml. If you have customized
-the openebs-operator.yaml for your cluster, you will have to download the
-desired openebs-operator.yaml and customize it again.**
-
-```
-#Upgrade the OpenEBS control plane components to the desired version, say 1.12.0
-$ kubectl apply -f https://openebs.github.io/charts/1.12.0/openebs-operator.yaml
-```
-
-### Upgrade using helm chart (using openebs/openebs, openebs-charts repo, etc.):
-
-**The sample steps below will work if you have installed openebs with the
-default values provided by the openebs/openebs helm chart.**
-
-Before upgrading via helm, please review the default values available with the
-latest openebs/openebs chart
-(https://github.com/openebs/charts/blob/master/charts/openebs/values.yaml).
-
-- If the default values seem appropriate, you can use the below command to
-  update OpenEBS. [More](https://hub.helm.sh/charts/openebs/openebs) details about the specific chart version.
-  ```sh
-  $ helm upgrade --reset-values <release-name> openebs/openebs --version 1.12.0
-  ```
-- If not, customize the values into your copy (say custom-values.yaml)
-  by copying the content from the above default yamls and editing the values to
-  suit your environment. You can then upgrade using your custom values:
-  ```sh
-  $ helm upgrade <release-name> openebs/openebs --version 1.12.0 -f custom-values.yaml
-  ```
-
-### Using customized operator YAML or helm chart
-As a first step, you must update your custom helm chart or YAML with the desired
-release tags and the changes made in the values/templates. After updating the YAML
-or helm chart or helm chart values, you can use the above procedures to upgrade
-the OpenEBS Control Plane components.
-
-## Step 3: Upgrade the OpenEBS Pools and Volumes
-
-
-**Note: Upgrade functionality is still under active development.
-It is highly recommended to schedule downtime for the application using the
-OpenEBS PV while performing this upgrade. Also, make sure you have taken a
-backup of the data before starting the upgrade procedure below.**
-
-- Please have the following link handy in case the volume becomes read-only during the upgrade:
-  https://docs.openebs.io/docs/next/troubleshooting.html#recovery-readonly-when-kubelet-is-container
-
-- An automatic rollback option is not provided. To roll back, you need to update
-  the controller, exporter and replica pod images to the previous version.
-
-**Note: Before proceeding with the upgrade of OpenEBS Data Plane components
-like cStor or Jiva, verify that the OpenEBS Control Plane is indeed at the desired version.**
-
-  You can use the following command to verify that components are at 1.12.0:
-  ```sh
-  $ kubectl get pods -n openebs -l openebs.io/version=1.12.0
-  ```
-
-  The above command should show that the control plane components have been upgraded.
-  The output should look like below:
-  ```sh
-  NAME                                           READY   STATUS    RESTARTS   AGE
-  maya-apiserver-7b65b8b74f-r7xvv                1/1     Running   0          2m8s
-  openebs-admission-server-588b754887-l5krp      1/1     Running   0          2m7s
-  openebs-localpv-provisioner-77b965466c-wpfgs   1/1     Running   0          85s
-  openebs-ndm-5mzg9                              1/1     Running   0          103s
-  openebs-ndm-bmjxx                              1/1     Running   0          107s
-  openebs-ndm-operator-5ffdf76bfd-ldxvk          1/1     Running   0          115s
-  openebs-ndm-v7vd8                              1/1     Running   0          114s
-  openebs-provisioner-678c549559-gh6gm           1/1     Running   0          2m8s
-  openebs-snapshot-operator-75dc998946-xdskl     2/2     Running   0          2m6s
-  ```
-
-**Note: If you have any queries or see something unexpected, please reach out to the
-OpenEBS maintainers via [Github Issue](https://github.com/openebs/openebs/issues) or via [OpenEBS Slack](https://slack.openebs.io).**
-
-As you might have seen by now, control plane components and data plane components
-work independently. Even after the OpenEBS Control Plane components have been
-upgraded to 1.12.0, the Storage Pools and Volumes (both Jiva and cStor)
-will continue to work with the older versions.
-
-You can use the below steps for upgrading the cStor and Jiva components.
-
-Starting with 1.1.0, the upgrade steps have been changed to eliminate the
-need for downloading scripts. You can use `kubectl` to trigger an upgrade job
-using a Kubernetes Job spec.
-
-The following instructions provide details on how to create your Upgrade Job specs.
-Please ensure the `from` and `to` versions are as per your upgrade path. The below
-examples show upgrading from 1.0.0 to 1.12.0.
-
-### Upgrade the OpenEBS Jiva PV
-
-**Note: Scaling down the application will speed up the upgrade process and prevent any read-only issues. It is highly recommended to scale down the application when upgrading from 1.8.0 or earlier versions of the volume.**
-
-Extract the PV name using `kubectl get pv`
-
-```
-NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS           REASON   AGE
-pvc-713e3bb6-afd2-11e9-8e79-42010a800065   5G         RWO            Delete           Bound    default/bb-jd-claim    openebs-jiva-default            46m
-pvc-80c120e8-bd09-4c5e-aaeb-3c37464240c5   4G         RWO            Delete           Bound    default/jiva-vol3      jiva-1r                         13m
-pvc-82a2d097-c666-4f29-820d-6b7e41541c11   4G         RWO            Delete           Bound    default/jiva-vol2      jiva-1r                         43m
-
-```
-
-Create a Kubernetes Job spec for upgrading the jiva volume. An example spec is as follows:
-```yaml
-#This is an example YAML for upgrading a jiva volume.
-#Some of the values below need to be changed to
-#match your openebs installation. The fields are
-#indicated with VERIFY
----
-apiVersion: batch/v1
-kind: Job
-metadata:
-  #VERIFY that you have provided a unique name for this upgrade job.
-  #The name can be any valid K8s string for name. This example uses
-  #the following convention: jiva-vol-<flattened-from-to-versions>
-  name: jiva-vol-1001120
-
-  #VERIFY the value of namespace is the same as the namespace where openebs components
-  # are installed. You can verify this using the command:
-  # `kubectl get pods -n <openebs-namespace> -l openebs.io/component-name=maya-apiserver`
-  # The above command should return the status of the openebs-apiserver.
-  namespace: openebs
-
-spec:
-  backoffLimit: 4
-  template:
-    spec:
-      # VERIFY the value of serviceAccountName is pointing to the service account
-      # created within the openebs namespace. Use the non-default account,
-      # found by running `kubectl get sa -n <openebs-namespace>`
-      serviceAccountName: openebs-maya-operator
-      containers:
-      - name: upgrade
-        args:
-        - "jiva-volume"
-
-        # --from-version is the current version of the volume
-        - "--from-version=1.0.0"
-
-        # --to-version is the desired upgrade version
-        - "--to-version=1.12.0"
-
-        # Bulk upgrade is supported from 1.9
-        # To make use of it, please provide the list of PVs
-        # as mentioned below
-        - "pvc-1bc3b45a-3023-4a8e-a94b-b457cf9529b4"
-        - "pvc-82a2d097-c666-4f29-820d-6b7e41541c11"
-        # For upgrades older than 1.9.0, use the
-        # '--pv-name=<pv-name>' format as in the
-        # commented line below
-        # - "--pv-name=pvc-1bc3b45a-3023-4a8e-a94b-b457cf9529b4"
-
-        #Following are optional parameters
-        #Log Level
-        - "--v=4"
-        #DO NOT CHANGE BELOW PARAMETERS
-        env:
-        - name: OPENEBS_NAMESPACE
-          valueFrom:
-            fieldRef:
-              fieldPath: metadata.namespace
-        tty: true
-
-        # the image version should be the same as the --to-version mentioned above
-        # in the args of the job
-        image: quay.io/openebs/m-upgrade:1.12.0
-        imagePullPolicy: Always
-      restartPolicy: OnFailure
----
-```
-
-Execute the Upgrade Job spec:
-```sh
-$ kubectl apply -f jiva-vol-1001120.yaml
-```
-
-You can check the status of the Job using commands like:
-```sh
-$ kubectl get job -n openebs
-$ kubectl get pods -n openebs #to check on the name for the job pod
-$ kubectl logs -n openebs jiva-vol-1001120-bgrhx
-```
-
-### Upgrade cStor Pools
-
-Extract the SPC name using `kubectl get spc`
-
-```sh
-NAME                AGE
-cstor-disk-pool     26m
-cstor-sparse-pool   24m
-```
-
-The Job spec for upgrading cStor pools is:
-
-```yaml
-#This is an example YAML for upgrading a cstor SPC.
-#Some of the values below need to be changed to
-#match your openebs installation. The fields are
-#indicated with VERIFY
----
-apiVersion: batch/v1
-kind: Job
-metadata:
-  #VERIFY that you have provided a unique name for this upgrade job.
-  #The name can be any valid K8s string for name. This example uses
-  #the following convention: cstor-spc-<flattened-from-to-versions>
-  name: cstor-spc-1001120
-
-  #VERIFY the value of namespace is the same as the namespace where openebs components
-  # are installed. You can verify this using the command:
-  # `kubectl get pods -n <openebs-namespace> -l openebs.io/component-name=maya-apiserver`
-  # The above command should return the status of the openebs-apiserver.
-  namespace: openebs
-spec:
-  backoffLimit: 4
-  template:
-    spec:
-      #VERIFY the value of serviceAccountName is pointing to the service account
-      # created within the openebs namespace. Use the non-default account,
-      # found by running `kubectl get sa -n <openebs-namespace>`
-      serviceAccountName: openebs-maya-operator
-      containers:
-      - name: upgrade
-        args:
-        - "cstor-spc"
-
-        # --from-version is the current version of the pool
-        - "--from-version=1.0.0"
-
-        # --to-version is the desired upgrade version
-        - "--to-version=1.12.0"
-
-        # Bulk upgrade is supported from 1.9
-        # To make use of it, please provide the list of SPCs
-        # as mentioned below
-        - "cstor-sparse-pool"
-        - "cstor-disk-pool"
-        # For upgrades older than 1.9.0, use the
-        # '--spc-name=<spc-name>' format as in the
-        # commented line below
-        # - "--spc-name=cstor-sparse-pool"
-
-        #Following are optional parameters
-        #Log Level
-        - "--v=4"
-        #DO NOT CHANGE BELOW PARAMETERS
-        env:
-        - name: OPENEBS_NAMESPACE
-          valueFrom:
-            fieldRef:
-              fieldPath: metadata.namespace
-        tty: true
-
-        # the image version should be the same as the --to-version mentioned above
-        # in the args of the job
-        image: quay.io/openebs/m-upgrade:1.12.0
-        imagePullPolicy: Always
-      restartPolicy: OnFailure
----
-```
-
-
-### Upgrade cStor Volumes
-
-Extract the PV name using `kubectl get pv`
-
-```sh
-$ kubectl get pv
-NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                   STORAGECLASS           REASON   AGE
-pvc-1085415d-f84c-11e8-aadf-42010a8000bb   5G         RWO            Delete           Bound    default/demo-cstor-sparse-vol1-claim    openebs-cstor-sparse            22m
-pvc-a4aba0e9-8ad3-4d18-9b34-5e6e7cea2eb3   4G         RWO            Delete           Bound    default/cstor-disk-vol                  openebs-cstor-disk              53s
-```
-
-Create a Kubernetes Job spec for upgrading the cstor volume. An example spec is as follows:
-```yaml
-#This is an example YAML for upgrading a cstor volume.
-#Some of the values below need to be changed to
-#match your openebs installation. The fields are
-#indicated with VERIFY
----
-apiVersion: batch/v1
-kind: Job
-metadata:
-  #VERIFY that you have provided a unique name for this upgrade job.
-  #The name can be any valid K8s string for name. This example uses
-  #the following convention: cstor-vol-<flattened-from-to-versions>
-  name: cstor-vol-1001120
-
-  #VERIFY the value of namespace is the same as the namespace where openebs components
-  # are installed. You can verify this using the command:
-  # `kubectl get pods -n <openebs-namespace> -l openebs.io/component-name=maya-apiserver`
-  # The above command should return the status of the openebs-apiserver.
-  namespace: openebs
-
-spec:
-  backoffLimit: 4
-  template:
-    spec:
-      #VERIFY the value of serviceAccountName is pointing to the service account
-      # created within the openebs namespace. Use the non-default account,
-      # found by running `kubectl get sa -n <openebs-namespace>`
-      serviceAccountName: openebs-maya-operator
-      containers:
-      - name: upgrade
-        args:
-        - "cstor-volume"
-
-        # --from-version is the current version of the volume
-        - "--from-version=1.0.0"
-
-        # --to-version is the desired upgrade version
-        - "--to-version=1.12.0"
-
-        # Bulk upgrade is supported from 1.9
-        # To make use of it, please provide the list of PVs
-        # as mentioned below
-        - "pvc-c630f6d5-afd2-11e9-8e79-42010a800065"
-        - "pvc-a4aba0e9-8ad3-4d18-9b34-5e6e7cea2eb3"
-        # For upgrades older than 1.9.0, use the
-        # '--pv-name=<pv-name>' format as in the
-        # commented line below
-        # - "--pv-name=pvc-c630f6d5-afd2-11e9-8e79-42010a800065"
-
-        #Following are optional parameters
-        #Log Level
-        - "--v=4"
-        #DO NOT CHANGE BELOW PARAMETERS
-        env:
-        - name: OPENEBS_NAMESPACE
-          valueFrom:
-            fieldRef:
-              fieldPath: metadata.namespace
-        tty: true
-
-        # the image version should be the same as the --to-version mentioned above
-        # in the args of the job
-        image: quay.io/openebs/m-upgrade:1.12.0
-        imagePullPolicy: Always
-      restartPolicy: OnFailure
----
-```
diff --git a/k8s/upgrades/README.md b/k8s/upgrades/README.md
deleted file mode 100644
index 0273169452..0000000000
--- a/k8s/upgrades/README.md
+++ /dev/null
@@ -1,39 +0,0 @@
-# Upgrade OpenEBS
-
-## Overview
-
-This document describes the steps for the following OpenEBS Upgrade paths:
-
-- Upgrade from 1.8.0 or later to the latest release.
-
-For other upgrade paths of earlier releases, please refer to the respective directories.
-Example:
-- the steps to upgrade from 0.9.0 to 1.0.0 will be under [0.9.0-1.0.0](./0.9.0-1.0.0/).
-- the steps to upgrade from 1.0.0 or later to a newer release up to 1.12.x will be under [1.x.0-1.12.x](./1.x.0-1.12.x/README.md).
-- the steps to upgrade from 1.12.0 or later to a newer release up to 2.12.x will be under [1.12.x-2.12.x](./1.12.x-2.12.x/README.md).
-
-## Important Notice
-
-- The community e2e pipelines verify upgrade testing only from non-deprecated releases (1.8.0 and higher) to 3.0.0. If you are running a release older than 1.8.0, OpenEBS recommends that you upgrade to the latest version as soon as possible.
-
-- OpenEBS 3.0.0 deprecates external-provisioned volumes and suggests that users move towards the CSI implementations of the respective storage engines (cStor/jiva). The guides below detail the steps to migrate from external-provisioned volumes and to upgrade CSI based volumes.
-
-### Migration of cStor Pools/Volumes to latest CSPC Pools/CSI based Volumes
-
-OpenEBS 2.0.0 moves the cStor engine towards the `v1` schema and CSI based provisioning. To migrate from old SPC based pools and cStor external-provisioned volumes to CSPC based pools and cStor CSI volumes, follow the steps mentioned in the [Migration documentation](https://github.com/openebs/upgrade/blob/develop/docs/migration.md#migrate-cstor-pools-and-volumes-from-spc-to-cspc).
-
-This migration can be performed after upgrading the old OpenEBS resources to `2.0.0` or above.
-
-### Upgrading CSPC pools and cStor CSI volumes
-
-If you are already using CSPC pools and cStor CSI volumes, they can be upgraded from `1.10.0` or later to the latest release via the steps mentioned in the [Upgrade documentation](https://github.com/openebs/upgrade/blob/master/docs/upgrade.md).
-
-### Migration of jiva Volumes to latest CSI based Volumes
-
-OpenEBS 2.7.0 introduces the jiva-operator for CSI based provisioning.
 To migrate from old jiva external-provisioned volumes to jiva CSI volumes, follow the steps mentioned in the [Migration documentation](https://github.com/openebs/upgrade/blob/develop/docs/migration.md#migrating-jiva-external-provisioned-volumes-to-jiva-csi-volumes).
-
-This migration can be performed after upgrading the old OpenEBS resources to `2.0.0` or above.
-
-### Upgrading jiva CSI volumes
-
-If you are already using jiva CSI volumes, they can be upgraded from `2.7.0` or later to the latest release via the steps mentioned in the [Upgrade documentation](https://github.com/openebs/upgrade/blob/develop/docs/upgrade.md#jiva-csi-volumes).
diff --git a/k8s/upgrades/dev/README.md b/k8s/upgrades/dev/README.md
deleted file mode 100644
index 9913003dd3..0000000000
--- a/k8s/upgrades/dev/README.md
+++ /dev/null
@@ -1,127 +0,0 @@
-# UPGRADE FROM OPENEBS 0.8.1 TO 0.8.2
-
-## Overview
-
-This document describes the steps for upgrading OpenEBS from 0.8.1 to 0.8.2.
-
-The upgrade of OpenEBS is a three step process:
-- *Step 1* - Checking the openebs version labels
-- *Step 2* - Upgrade the OpenEBS Operator
-- *Step 3* - Upgrade the OpenEBS Volumes from previous versions (0.8.1)
-
-#### Note: It is mandatory to make sure that all volumes are running at version 0.8.1 before the upgrade.
-
-### Terminology
-- *OpenEBS Operator : Refers to maya-apiserver & openebs-provisioner along w/ respective services, service a/c, roles, rolebindings*
-- *OpenEBS Volume: Storage Engine pods like cStor or Jiva controller(aka target) & replica pods*
-
-## Prerequisites
-
-*All steps described in this document need to be performed on the Kubernetes master or from a machine that has access to the Kubernetes master*
-
-### Download the upgrade scripts
-
-The easiest way to get all the upgrade scripts is via git clone.
-
-```
-mkdir upgrade-openebs
-cd upgrade-openebs
-git clone https://github.com/openebs/openebs.git
-cd openebs/k8s/upgrades/0.8.1-0.8.2/
-```
-
-## Step 1: Checking the openebs version labels
-
-- Run `./pre-check.sh` to get all the openebs volume resources that do not have the `openebs.io/version` tag.
-- Run `./labeltagger.sh 0.8.1` to add the `openebs.io/version` label to all the openebs volume resources.
-
-#### Please make sure that all pods are back in running state before proceeding to Step 2
-### Note: It is OK if the pre-check finds no resources to label. The pre-check is meant to help users upgrading from, or already upgraded from, 0.7.
-
-## Step 2: Upgrade the OpenEBS Operator
-
-### Upgrading OpenEBS Operator CRDs and Deployments
-
-The upgrade steps vary depending on the way OpenEBS was installed; select one of the following:
-
-#### Install/Upgrade using kubectl (using openebs-operator.yaml)
-
-**The sample steps below will work if you have installed openebs without modifying the default values in openebs-operator.yaml. If you have customized it for your cluster, you will have to download the 0.8.2 openebs-operator.yaml and customize it again**
-
-```
-#Upgrade to 0.8.2 OpenEBS Operator
-kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.8.2.yaml
-```
-
-#### Install/Upgrade using helm chart (using stable/openebs, openebs-charts repo, etc.)
-
-**The sample steps below will work if you have installed openebs with the default values provided by the stable/openebs helm chart.**
-
-Before upgrading using helm, please review the default values available with the latest stable/openebs chart (https://raw.githubusercontent.com/helm/charts/master/stable/openebs/values.yaml).
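-
-One way to compare those defaults against what is currently deployed is to dump the values of your existing release (a sketch; the release name `openebs` is an assumption, substitute your own):
-
-```
-# Show the user-supplied values of the running release for comparison
-# with the chart defaults linked above.
-helm get values openebs
-```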
-
-- If the default values seem appropriate, you can use `helm upgrade --reset-values <release-name> stable/openebs`.
-- If not, customize the values into your copy (say custom-values.yaml) by copying the content from the above default yamls and editing the values to suit your environment. You can then upgrade using your custom values:
-`helm upgrade <release-name> stable/openebs -f custom-values.yaml`
-
-#### Using customized operator YAML or helm chart
-As a first step, you must update your custom helm chart or YAML with the 0.8.2 release tags and the changes made in the values/templates.
-
-You can use the following as references to know about the changes in 0.8.2:
-- openebs-charts [PR####](https://github.com/openebs/openebs/pull/2352) as reference.
-
-After updating the YAML or helm chart or helm chart values, you can use the above procedures to upgrade the OpenEBS Operator.
-
-## Step 3: Upgrade the OpenEBS Pools and Volumes
-
-Even after the OpenEBS Operator has been upgraded to 0.8.2, the cStor Storage Pools and volumes (both jiva and cStor) will continue to work with the older versions. Use the following steps in the given order to upgrade the cStor Pools and volumes.
-
-*Note: Upgrade functionality is still under active development. It is highly recommended to schedule downtime for the application using the OpenEBS PV while performing this upgrade. Also, make sure you have taken a backup of the data before starting the upgrade procedure below.*
-
-Limitations:
-- this is a preliminary script, only intended for use on volumes where the data has been backed up.
-- please have the following link handy in case the volume becomes read-only during the upgrade - https://docs.openebs.io/docs/next/readonlyvolumes.html
-- an automatic rollback option is not provided. To roll back, you need to update the controller, exporter and replica pod images to the previous version
-- in the process of running the below steps, if you run into issues, you can always reach us on slack
-
-
-### Upgrade the Jiva based OpenEBS PV
-
-Extract the PV name using `kubectl get pv`
-
-```
-NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS      REASON   AGE
-pvc-48fb36a2-947f-11e8-b1f3-42010a800004   5G         RWO            Delete           Bound    percona-test/demo-vol1-claim   openebs-percona            8m
-```
-
-```
-./jiva_volume_upgrade.sh pvc-48fb36a2-947f-11e8-b1f3-42010a800004
-```
-
-### Upgrade cStor Pools
-
-Extract the SPC name using `kubectl get spc`
-
-```
-NAME                AGE
-cstor-sparse-pool   24m
-```
-
-```
-./cstor_pool_upgrade.sh cstor-sparse-pool openebs
-```
-Make sure that this step completes successfully before proceeding to the next step.
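-
-A quick way to confirm that the pools were patched is to check the version label on the CSP resources (a sketch; `v0.8.x-ci` is the upgrade version applied by the script above):
-
-```
-# Each CSP should now report the upgraded version label.
-kubectl get csp -o jsonpath='{range .items[*]}{@.metadata.name}{" "}{@.metadata.labels.openebs\.io/version}{"\n"}{end}'
-```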
- - -### Upgrade cStor Volumes - -Extract the PV name using `kubectl get pv` - -``` -NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE -pvc-1085415d-f84c-11e8-aadf-42010a8000bb 5G RWO Delete Bound default/demo-cstor-sparse-vol1-claim openebs-cstor-sparse 22m -``` - -``` -./cstor_volume_upgrade.sh pvc-1085415d-f84c-11e8-aadf-42010a8000bb openebs -``` diff --git a/k8s/upgrades/dev/cr-patch.tpl.json b/k8s/upgrades/dev/cr-patch.tpl.json deleted file mode 100644 index 2a01f8a4f8..0000000000 --- a/k8s/upgrades/dev/cr-patch.tpl.json +++ /dev/null @@ -1,7 +0,0 @@ -{ - "metadata": { - "labels": { - "openebs.io/version": "@pool_version@" - } - } -} diff --git a/k8s/upgrades/dev/cstor-pool-patch.tpl.json b/k8s/upgrades/dev/cstor-pool-patch.tpl.json deleted file mode 100644 index e651d5b5ac..0000000000 --- a/k8s/upgrades/dev/cstor-pool-patch.tpl.json +++ /dev/null @@ -1,63 +0,0 @@ -{ - "metadata": { - "labels": { - "openebs.io/version": "@pool_version@" - }, - "annotations": { - "openebs.io/monitoring": "pool_exporter_prometheus", - "prometheus.io/path": "/metrics", - "prometheus.io/port": "9500", - "prometheus.io/scrape": "true" - } - }, - "spec": { - "template": { - "spec": { - "containers": [ - { - "name": "cstor-pool", - "image": "quay.io/openebs/cstor-pool:@pool_version@" - }, - { - "name": "cstor-pool-mgmt", - "image": "quay.io/openebs/cstor-pool-mgmt:@pool_version@" - }, - { - "image": "quay.io/openebs/m-exporter:@pool_version@", - "name": "maya-exporter", - "args": [ - "-e=pool" - ], - "command": [ - "maya-exporter" - ], - "ports": [ - { - "containerPort": 9500, - "protocol": "TCP" - } - ], - "volumeMounts": [ - { - "mountPath": "/dev", - "name": "device" - }, - { - "mountPath": "/tmp", - "name": "tmp" - }, - { - "mountPath": "/var/openebs/sparse", - "name": "sparse" - }, - { - "mountPath": "/run/udev", - "name": "udev" - } - ] - } - ] - } - } - } -} diff --git a/k8s/upgrades/dev/cstor-target-patch.tpl.json b/k8s/upgrades/dev/cstor-target-patch.tpl.json deleted file mode 100644 index 080acb8deb..0000000000 --- a/k8s/upgrades/dev/cstor-target-patch.tpl.json +++ /dev/null @@ -1,27 +0,0 @@ -{ - "metadata": { - "labels": { - "openebs.io/version": "@target_version@" - } - }, - "spec": { - "template": { - "spec": { - "containers": [ - { - "name": "cstor-istgt", - "image": "quay.io/openebs/cstor-istgt:@target_version@" - }, - { - "name": "maya-volume-exporter", - "image": "quay.io/openebs/m-exporter:@target_version@" - }, - { - "name": "cstor-volume-mgmt", - "image": "quay.io/openebs/cstor-volume-mgmt:@target_version@" - } - ] - } - } - } -} diff --git a/k8s/upgrades/dev/cstor-target-svc-patch.tpl.json b/k8s/upgrades/dev/cstor-target-svc-patch.tpl.json deleted file mode 100644 index fab25e0dec..0000000000 --- a/k8s/upgrades/dev/cstor-target-svc-patch.tpl.json +++ /dev/null @@ -1,12 +0,0 @@ -{ - "metadata": { - "annotations": { - "openebs.io/pvc-namespace":"@pvc-namespace@", - "openebs.io/storage-class-ref": "name: @sc_name@\nresourceVersion: @sc_resource_version@\n" - }, - "labels": { - "openebs.io/version": "@target_version@", - "openebs.io/persistent-volume-claim":"@pvc-name@" - } - } -} diff --git a/k8s/upgrades/dev/cstor-volume-patch.tpl.json b/k8s/upgrades/dev/cstor-volume-patch.tpl.json deleted file mode 100644 index 2fce2c64db..0000000000 --- a/k8s/upgrades/dev/cstor-volume-patch.tpl.json +++ /dev/null @@ -1,8 +0,0 @@ -{ - "metadata": { - "labels": { - "openebs.io/source-volume": "@sourcevolume@", - "openebs.io/version": "@target_version@" - } - } -} diff --git 
a/k8s/upgrades/dev/cstor-volume-replica-patch.tpl.json b/k8s/upgrades/dev/cstor-volume-replica-patch.tpl.json
deleted file mode 100644
index 8d84c73f9a..0000000000
--- a/k8s/upgrades/dev/cstor-volume-replica-patch.tpl.json
+++ /dev/null
@@ -1,8 +0,0 @@
-{
-    "metadata": {
-        "finalizers": [],
-        "labels": {
-            "openebs.io/version": "@target_version@"
-        }
-    }
-}
diff --git a/k8s/upgrades/dev/cstor_pool_upgrade.sh b/k8s/upgrades/dev/cstor_pool_upgrade.sh
deleted file mode 100755
index e0106bd8d8..0000000000
--- a/k8s/upgrades/dev/cstor_pool_upgrade.sh
+++ /dev/null
@@ -1,154 +0,0 @@
-#!/usr/bin/env bash
-
-###########################################################################
-# STEP: Get SPC name and namespace where OpenEBS is deployed as arguments #
-#                                                                         #
-# NOTES: Obtain the pool deployments to perform upgrade operation         #
-###########################################################################
-
-pool_upgrade_version="v0.8.x-ci"
-current_version="0.8.1"
-
-function usage() {
-    echo
-    echo "Usage:"
-    echo
-    echo "$0 <spc-name> <openebs-namespace>"
-    echo
-    echo "  Get the SPC name using: kubectl get spc"
-    echo "  Get the namespace where pool pods"
-    echo "  corresponding to SPC are deployed"
-    exit 1
-}
-
-## Checking the version of OpenEBS ####
-function verify_openebs_version() {
-    local resource=$1
-    local name_res=$2
-    local openebs_version=$(kubectl get $resource $name_res -n $ns \
-        -o jsonpath="{.metadata.labels.openebs\.io/version}")
-
-    if [[ $openebs_version != $current_version ]] && [[ $openebs_version != $pool_upgrade_version ]]; then
-        echo "Expected version of $name_res in $resource is $current_version but got $openebs_version";exit 1;
-    fi
-    echo $openebs_version
-}
-
-## Starting point
-if [ "$#" -ne 2 ]; then
-    usage
-fi
-
-spc=$1
-ns=$2
-
-### Get the deployment pods related to the provided spc that are not in Running state ###
-pending_pods=$(kubectl get po -n $ns \
-    -l app=cstor-pool,openebs.io/storage-pool-claim=$spc \
-    -o jsonpath='{.items[?(@.status.phase!="Running")].metadata.name}')
-
-## If any deployment pods are not in Running state then exit the upgrade process ###
-if [ $(echo $pending_pods | wc -w) -ne 0 ]; then
-    echo "To continue with the upgrade script, make sure all the deployment pods corresponding to $spc are in Running state"
-    exit 1
-fi
-
-### Get the csp list related to the given spc ###
-csp_list=$(kubectl get csp -l openebs.io/storage-pool-claim=$spc \
-    -o jsonpath="{range .items[*]}{@.metadata.name}:{end}")
-rc=$?
-if [ $rc -ne 0 ]; then
-    echo "Failed to get csp related to spc $spc"
-    exit 1
-fi
-
-################################################################
-# STEP: Update patch files with pool upgrade version           #
-#                                                              #
-################################################################
-
-sed "s/@pool_version@/$pool_upgrade_version/g" cr-patch.tpl.json > cr_patch.json
-
-echo "Patching the csp resource"
-for csp in `echo $csp_list | tr ":" " "`; do
-    version=$(verify_openebs_version "csp" $csp)
-    rc=$?
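-    # Abort if the version check failed; skip this CSP if it already carries the upgrade version.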
- if [ $rc -ne 0 ]; then - exit 1 - elif [ $version == $pool_upgrade_version ]; then - continue - fi - ## Patching the csp resource - kubectl patch csp $csp -p "$(cat cr_patch.json)" --type=merge - rc=$?; if [ $rc -ne 0 ]; then echo "Error occurred while upgrading the csp: $csp Exit Code: $rc"; exit; fi -done - -echo "Patching Pool Deployment with new image" -for csp in `echo $csp_list | tr ":" " "`; do - ## Get the pool deployment corresponding to csp - pool_dep=$(kubectl get deploy -n $ns \ - -l app=cstor-pool,openebs.io/storage-pool-claim=$spc \ - -o jsonpath="{.items[?(@.metadata.labels.openebs\.io/cstor-pool=='$csp')].metadata.name}") - - version=$(verify_openebs_version "deploy" $pool_dep) - rc=$? - if [ $rc -ne 0 ]; then - exit 1 - elif [ $version == $pool_upgrade_version ]; then - continue - fi - - ## Get the replica set corresponding to the deployment ## - pool_rs=$(kubectl get rs -n $ns \ - -o jsonpath="{range .items[?(@.metadata.ownerReferences[0].name=='$pool_dep')]}{@.metadata.name}{end}") - echo "$pool_dep -> rs is $pool_rs" - - - ## Modifies the cstor-pool-patch template with the original values ## - sed "s/@pool_version@/$pool_upgrade_version/g" cstor-pool-patch.tpl.json > cstor-pool-patch.json - - ## Patch the deployment file ### - kubectl patch deployment --namespace $ns $pool_dep -p "$(cat cstor-pool-patch.json)" - rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: Failed to patch $pool_dep $rc"; exit; fi - rollout_status=$(kubectl rollout status --namespace $ns deployment/$pool_dep) - rc=$?; if [[ ($rc -ne 0) || !($rollout_status =~ "successfully rolled out") ]]; - then echo "ERROR: Failed to rollout status for $pool_dep error: $rc"; exit; fi - - ## Deleting the old replica set corresponding to deployment - kubectl delete rs $pool_rs --namespace $ns - - ## Cleaning the temporary patch file - rm cstor-pool-patch.json -done - -### Get the sp list which are related to the given spc ### -sp_list=$(kubectl get sp -l openebs.io/cas-type=cstor,openebs.io/storage-pool-claim=$spc \ - -o jsonpath="{range .items[*]}{@.metadata.name}:{end}") -rc=$? -if [ $rc -ne 0 ]; then - echo "Failed to get sp related to spc $spc" - exit 1 -fi - -### Patch sp resource### -echo "Patching the SP resource" -for sp in `echo $sp_list | tr ":" " "`; do - version=$(verify_openebs_version "sp" $sp) - rc=$? - if [ $rc -ne 0 ]; then - exit 1 - elif [ $version == $pool_upgrade_version ]; then - continue - fi - kubectl patch sp $sp -p "$(cat cr_patch.json)" --type=merge - rc=$? - if [ $rc -ne 0 ]; then echo "Error: failed to patch for SP resource $sp Exit Code: $rc"; exit; fi -done - -###Cleaning temporary patch file -rm cr_patch.json - -echo "Successfully upgraded $spc to $pool_upgrade_version" -echo "Running post pool upgrade scripts for $spc..." 
-
-exit 0
diff --git a/k8s/upgrades/dev/cstor_volume_upgrade.sh b/k8s/upgrades/dev/cstor_volume_upgrade.sh
deleted file mode 100755
index d41fa0ba10..0000000000
--- a/k8s/upgrades/dev/cstor_volume_upgrade.sh
+++ /dev/null
@@ -1,187 +0,0 @@
-#!/usr/bin/env bash
-
-################################################################
-# STEP: Get Persistent Volume (PV) name as argument            #
-#                                                              #
-# NOTES: Obtain the pv to upgrade via "kubectl get pv"         #
-################################################################
-volume_upgrade_version="v0.8.x-ci"
-volume_current_version="0.8.1"
-
-function usage() {
-    echo
-    echo "Usage:"
-    echo
-    echo "$0 <pv-name> <openebs-namespace>"
-    echo
-    echo "  Get the PV name using: kubectl get pv"
-    echo "  Get the namespace where openebs"
-    echo "  pods are installed"
-    exit 1
-}
-
-## Checking the version of OpenEBS ####
-function verify_volume_version() {
-    local resource=$1
-    local name_res=$2
-    local openebs_version=$(kubectl get $resource $name_res -n $ns \
-        -o jsonpath="{.metadata.labels.openebs\.io/version}")
-
-    if [[ $openebs_version != $volume_current_version ]] && [[ $openebs_version != $volume_upgrade_version ]]; then
-        echo "Expected version of $name_res in $resource is $volume_current_version but got $openebs_version";exit 1;
-    fi
-    echo $openebs_version
-}
-
-if [ "$#" -ne 2 ]; then
-    usage
-fi
-
-pv=$1
-ns=$2
-
-source snapshotdata_upgrade.sh
-# Check if pv exists
-kubectl get pv $pv &>/dev/null;check_pv=$?
-if [ $check_pv -ne 0 ]; then
-    echo "$pv not found";exit 1;
-fi
-
-## Get storageclass and PVC related details to patch the target service
-sc_ns=`kubectl get pv $pv -o jsonpath="{.spec.claimRef.namespace}"`
-sc_name=`kubectl get pv $pv -o jsonpath="{.spec.storageClassName}"`
-sc_res_ver=`kubectl get sc $sc_name -n $sc_ns -o jsonpath="{.metadata.resourceVersion}"`
-pvc_name=`kubectl get pv $pv -o jsonpath="{.spec.claimRef.name}"`
-pvc_namespace=`kubectl get pv $pv -o jsonpath="{.spec.claimRef.namespace}"`
-
-# Check if CASType is cstor for PV
-cas_type=`kubectl get pv $pv -o jsonpath="{.metadata.labels.openebs\.io/cas-type}"`
-if [ $cas_type != "cstor" ]; then
-    echo "Cstor volume not found";exit 1;
-fi
-
-### 1. Get the cstorvolume name related to the given PV ###
-### (get the cloned volume's source cstorvolume name if it exists)
-echo "Upgrading Cstor Volume resource to $volume_upgrade_version"
-cv_name=$(kubectl get cvr -n openebs\
-    -l openebs.io/persistent-volume=$pv\
-    -o 'jsonpath={.items[?(@.metadata.labels.openebs\.io/cloned=="true")].metadata.annotations.openebs\.io/source-volume}' | awk '{print $1}')
-
-version=$(verify_volume_version "cstorvolume" $pv)
-rc=$?
-if [ $rc -ne 0 ]; then
-    exit 1
-fi
-
-## 2. Update the cstorvolume patch file with the volume upgrade version.
-## If the cstorvolume (cv_name) name is nil, update the patch file only with version
-## details, else update the patch file with version and source-volume label details
-if [ -z $cv_name ]; then
-    sed "s|\"openebs.io/source-volume\": \"@sourcevolume@\",||g" cstor-volume-patch.tpl.json | sed "s|@target_version@|$volume_upgrade_version|g" > cstor-volume-patch.json
-else
-    sed "s|@sourcevolume@|$cv_name|g" cstor-volume-patch.tpl.json | sed "s/@target_version@/$volume_upgrade_version/g" > cstor-volume-patch.json
-fi
-    ## 3. Patching the cstorvolume resource
-    kubectl patch cstorvolume $pv -n $ns -p "$(cat cstor-volume-patch.json)" --type=merge
-    rc=$?; if [ $rc -ne 0 ]; then echo "Error occurred while upgrading cstorvolume: $cv_name Exit Code: $rc"; exit; fi
-
-    ## 4. Remove the temporary patch file
-    rm cstor-volume-patch.json
-
-
-### 1. Get the target service related to the given PV ###
-echo "Upgrading Target Service to $volume_upgrade_version"
-c_svc=$(kubectl get svc -n $ns\
-    -l openebs.io/persistent-volume=$pv,openebs.io/target-service=cstor-target-svc\
-    -o jsonpath="{.items[*].metadata.name}")
-
-version=$(verify_volume_version "service" $pv)
-rc=$?
-if [ $rc -ne 0 ]; then
-    exit 1
-fi
-
-    ## 2. Update the target svc patch file with the upgrade version and pvc name
-    ## and namespace details
-    sed "s/@sc_name@/$sc_name/g" cstor-target-svc-patch.tpl.json | sed "s/@sc_resource_version@/$sc_res_ver/g" | sed "s/@target_version@/$volume_upgrade_version/g" | sed "s/@pvc-name@/$pvc_name/g" | sed "s/@pvc-namespace@/$pvc_namespace/g" > cstor-target-svc-patch.json
-
-    ## 3. Patching the target service
-    kubectl patch service --namespace $ns $c_svc -p "$(cat cstor-target-svc-patch.json)"
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch service $pv | Exit code: $rc"; exit; fi
-
-rm cstor-target-svc-patch.json
-
-### 1. Get the cvr list related to the given PV ###
-echo "Upgrading CstorVolume-Replica resource to $volume_upgrade_version"
-cvr_list=$(kubectl get cvr -n $ns -l openebs.io/persistent-volume=$pv\
-    -o jsonpath="{range .items[*]}{@.metadata.name}:{end}")
-
-rc=$?
-if [ $rc -ne 0 ]; then
-    echo "Failed to get cstorvolume-replica related to PV $pv"
-    exit 1
-fi
-
-for cvr in `echo $cvr_list | tr ":" " "`; do
-    version=$(verify_volume_version "cvr" $cvr)
-    rc=$?
-    if [ $rc -ne 0 ]; then
-        exit 1
-    fi
-
-    ## 2. Update the cstorvolume-replica patch file with the volume upgrade version
-    sed "s/@target_version@/$volume_upgrade_version/g" cstor-volume-replica-patch.tpl.json > cstor-volume-replica-patch.json
-
-    ## 3. Patching the cvr resource
-    kubectl patch cvr $cvr --namespace openebs -p "$(cat cstor-volume-replica-patch.json)" --type=merge
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch cstorvolume-replica $cvr | Exit code: $rc"; exit; fi
-    echo "Successfully updated cstorvolume-replica: $cvr to $volume_upgrade_version"
-
-    ## 4. Remove the temporary patch file
-    rm cstor-volume-replica-patch.json
-
-done
-
-### 1. Get the cstorvolume deployment related to the given PV ###
-echo "Upgrading CstorVolume Deployment with new image version $volume_upgrade_version"
-cv_deploy=$(kubectl get deploy -n $ns \
-    -l openebs.io/persistent-volume=$pv,openebs.io/target=cstor-target \
-    -o jsonpath="{.items[*].metadata.name}")
-
-cv_rs=$(kubectl get rs -n $ns -o name \
-    -l openebs.io/persistent-volume=$pv | cut -d '/' -f 2)
-
-version=$(verify_volume_version "deploy" $cv_deploy)
-    rc=$?
-    if [ $rc -ne 0 ]; then
-        exit 1
-    fi
-
-    ## 2. Update the cstorvolume target patch file with the volume upgrade version
-    sed "s/@target_version@/$volume_upgrade_version/g" cstor-target-patch.tpl.json > cstor-target-patch.json
-
-    ## 3. Update the cstorvolume deployment using the patch file with the upgraded image version
-    kubectl patch deployment --namespace $ns $cv_deploy -p "$(cat cstor-target-patch.json)"
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch cstor target deployment $cv_deploy | Exit code: $rc"; exit; fi
-
-    ## 4. Delete the old replica set corresponding to the deployment
-    kubectl delete rs $cv_rs --namespace $ns
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to delete cstor replica set $cv_rs | Exit code: $rc"; exit; fi
-
-    ## 5. Check the rollout status of the cstorvolume deployment
-    rollout_status=$(kubectl rollout status --namespace $ns deployment/$cv_deploy)
-    rc=$?; if [[ ($rc -ne 0) || !($rollout_status =~ "successfully rolled out") ]];
-    then echo "Failed to rollout for deployment $cv_deploy | Exit code: $rc"; exit; fi
-
-    ## 6. Remove the temporary patch file
-    rm cstor-target-patch.json
-
-## Patch cstor snapshotdata crs related to pv
-run_snapshotdata_upgrades $pv
-rc=$?
-if [ $rc -ne 0 ]; then
-    exit 1
-fi
-
-echo "Successfully upgraded $pv to $volume_upgrade_version. Please run your application checks."
-exit 0
diff --git a/k8s/upgrades/dev/jiva-replica-patch.tpl.json b/k8s/upgrades/dev/jiva-replica-patch.tpl.json
deleted file mode 100644
index 7260b47221..0000000000
--- a/k8s/upgrades/dev/jiva-replica-patch.tpl.json
+++ /dev/null
@@ -1,19 +0,0 @@
-{
-    "metadata": {
-        "labels": {
-            "openebs.io/version": "@target_version@"
-        }
-    },
-    "spec": {
-        "template": {
-            "spec": {
-                "containers": [
-                    {
-                        "name": "@r_name@",
-                        "image": "quay.io/openebs/jiva:@target_version@"
-                    }
-                ]
-            }
-        }
-    }
-}
diff --git a/k8s/upgrades/dev/jiva-target-patch.tpl.json b/k8s/upgrades/dev/jiva-target-patch.tpl.json
deleted file mode 100644
index cc92bb8210..0000000000
--- a/k8s/upgrades/dev/jiva-target-patch.tpl.json
+++ /dev/null
@@ -1,23 +0,0 @@
-{
-    "metadata": {
-        "labels": {
-            "openebs.io/version": "@target_version@"
-        }
-    },
-    "spec": {
-        "template": {
-            "spec": {
-                "containers": [
-                    {
-                        "name": "@c_name@",
-                        "image": "quay.io/openebs/jiva:@target_version@"
-                    },
-                    {
-                        "name": "maya-volume-exporter",
-                        "image": "quay.io/openebs/m-exporter:@target_version@"
-                    }
-                ]
-            }
-        }
-    }
-}
diff --git a/k8s/upgrades/dev/jiva-target-svc-patch.tpl.json b/k8s/upgrades/dev/jiva-target-svc-patch.tpl.json
deleted file mode 100644
index c39df1ba91..0000000000
--- a/k8s/upgrades/dev/jiva-target-svc-patch.tpl.json
+++ /dev/null
@@ -1,7 +0,0 @@
-{
-    "metadata": {
-        "labels": {
-            "openebs.io/version": "@target_version@"
-        }
-    }
-}
diff --git a/k8s/upgrades/dev/jiva_volume_upgrade.sh b/k8s/upgrades/dev/jiva_volume_upgrade.sh
deleted file mode 100755
index 38579e4895..0000000000
--- a/k8s/upgrades/dev/jiva_volume_upgrade.sh
+++ /dev/null
@@ -1,227 +0,0 @@
-#!/usr/bin/env bash
-################################################################
-# STEP: Get Persistent Volume (PV) name as argument            #
-#                                                              #
-# NOTES: Obtain the pv to upgrade via "kubectl get pv"         #
-################################################################
-
-target_upgrade_version="0.8.2"
-current_version="0.8.1"
-
-function usage() {
-    echo
-    echo "Usage:"
-    echo
-    echo "$0 <pv-name>"
-    echo
-    echo "  Get the PV name using: kubectl get pv"
-    exit 1
-}
-
-function setDeploymentRecreateStrategy() {
-    dns=$1 # deployment namespace
-    dn=$2  # deployment name
-    currStrategy=`kubectl get deploy -n $dns $dn -o jsonpath="{.spec.strategy.type}"`
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to get the deployment strategy for $dn | Exit code: $rc"; exit; fi
-
-    if [ $currStrategy != "Recreate" ]; then
-        kubectl patch deployment --namespace $dns --type json $dn -p "$(cat patch-strategy-recreate.json)"
-        rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch the deployment $dn | Exit code: $rc"; exit; fi
-        echo "Deployment upgrade strategy set as recreate"
-    else
-        echo "Deployment upgrade strategy was already set as recreate"
-    fi
-}
-
-if [ "$#" -ne 1 ]; then
-    usage
-fi
-
-pv=$1
-replica_node_label="openebs-jiva"
-
-source snapshotdata_upgrade.sh
-
-# Check if pv exists
-kubectl get pv $pv &>/dev/null;check_pv=$?
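-# Bail out immediately when the given PV is not found, rather than failing midway.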
-if [ $check_pv -ne 0 ]; then - echo "$pv not found";exit 1; -fi - -# Check if CASType is jiva -cas_type=`kubectl get pv $pv -o jsonpath="{.metadata.annotations.openebs\.io/cas-type}"` -if [ $cas_type != "jiva" ]; then - echo "Jiva volume not found";exit 1; -fi - -ns=`kubectl get pv $pv -o jsonpath="{.spec.claimRef.namespace}"` -sc_name=`kubectl get pv $pv -o jsonpath="{.spec.storageClassName}"` -sc_res_ver=`kubectl get sc $sc_name -n $ns -o jsonpath="{.metadata.resourceVersion}"` - -################################################################# -# STEP: Generate deploy, replicaset and container names from PV # -# # -# NOTES: Ex: If PV="pvc-cec8e86d-0bcc-11e8-be1c-000c298ff5fc" # -# # -# ctrl-dep: pvc-cec8e86d-0bcc-11e8-be1c-000c298ff5fc-ctrl # -################################################################# - -c_dep=$(kubectl get deploy -n $ns -l openebs.io/persistent-volume=$pv,openebs.io/controller=jiva-controller -o jsonpath="{.items[*].metadata.name}") -r_dep=$(kubectl get deploy -n $ns -l openebs.io/persistent-volume=$pv,openebs.io/replica=jiva-replica -o jsonpath="{.items[*].metadata.name}") -c_svc=$(kubectl get svc -n $ns -l openebs.io/persistent-volume=$pv -o jsonpath="{.items[*].metadata.name}") -c_name=$(kubectl get deploy -n $ns $c_dep -o jsonpath="{range .spec.template.spec.containers[*]}{@.name}{'\n'}{end}" | grep "con") -r_name=$(kubectl get deploy -n $ns $r_dep -o jsonpath="{range .spec.template.spec.containers[*]}{@.name}{'\n'}{end}" | grep "con") - -# Fetch the older target and replica - ReplicaSet objects which need to be -# deleted before upgrading. If not deleted, the new pods will be stuck in -# creating state - due to affinity rules. - -c_rs=$(kubectl get rs -o name --namespace $ns -l openebs.io/persistent-volume=$pv,openebs.io/controller=jiva-controller | cut -d '/' -f 2) -r_rs=$(kubectl get rs -o name --namespace $ns -l openebs.io/persistent-volume=$pv,openebs.io/replica=jiva-replica | cut -d '/' -f 2) - -################################################################ -# STEP: Update patch files with appropriate resource names # -# # -# NOTES: Placeholder for resourcename in the patch files are # -# replaced with respective values derived from the PV in the # -# previous step # -################################################################ - -# Check if openebs resources exist and provisioned version is 0.8 - -if [[ -z $c_rs ]]; then - echo "Target Replica set not found"; exit 1; -fi - -if [[ -z $r_rs ]]; then - echo "Replica Replica set not found"; exit 1; -fi - -if [[ -z $c_dep ]]; then - echo "Target deployment not found"; exit 1; -fi - -if [[ -z $r_dep ]]; then - echo "Replica deployment not found"; exit 1; -fi - -if [[ -z $c_svc ]]; then - echo "Target service not found"; exit 1; -fi - -if [[ -z $r_name ]]; then - echo "Replica container not found"; exit 1; -fi - -if [[ -z $c_name ]]; then - echo "Target container not found"; exit 1; -fi - -controller_version=`kubectl get deployment $c_dep -n $ns -o jsonpath='{.metadata.labels.openebs\.io/version}'` -if [[ "$controller_version" != "$current_version" ]] && [[ "$controller_version" != "$target_upgrade_version" ]]; then - echo "Current Target deployment $c_dep version is not $current_version or $target_upgrade_version";exit 1; -fi -replica_version=`kubectl get deployment $r_dep -n $ns -o jsonpath='{.metadata.labels.openebs\.io/version}'` -if [[ "$replica_version" != "$current_version" ]] && [[ "$replica_version" != "$target_upgrade_version" ]]; then - echo "Current Replica deployment $r_dep version is 
not $current_version or $target_upgrade_version";exit 1;
-fi
-
-controller_svc_version=`kubectl get svc $c_svc -n $ns -o jsonpath='{.metadata.labels.openebs\.io/version}'`
-if [[ "$controller_svc_version" != "$current_version" ]] && [[ "$controller_svc_version" != "$target_upgrade_version" ]] ; then
-    echo "Current Target service $c_svc version is not $current_version or $target_upgrade_version";exit 1;
-fi
-
-# Get the number of replicas configured.
-# This field is currently not used, but can add additional validations
-# based on the nodes and expected number of replicas
-rep_count=`kubectl get deploy $r_dep --namespace $ns -o jsonpath="{.spec.replicas}"`
-
-# Get the list of nodes where replica pods are running, delimited by ':'
-rep_nodenames=`kubectl get pods -n $ns \
-    -l "openebs.io/persistent-volume=$pv" -l "openebs.io/replica=jiva-replica" \
-    -o jsonpath="{range .items[*]}{@.spec.nodeName}:{end}"`
-
-echo "Checking if the node with the replica pod has been labeled with $replica_node_label"
-for rep_node in `echo $rep_nodenames | tr ":" " "`; do
-    nl="";nl=`kubectl get nodes $rep_node -o jsonpath="{.metadata.labels.openebs-pv-$pv}"`
-    if [ -z "$nl" ];
-    then
-        echo "Labeling $rep_node";
-        kubectl label node $rep_node "openebs-pv-${pv}=$replica_node_label"
-    fi
-done
-
-sed "s/@r_name@/$r_name/g" jiva-replica-patch.tpl.json | sed "s/@target_version@/$target_upgrade_version/g" > jiva-replica-patch.json
-sed "s/@c_name@/$c_name/g" jiva-target-patch.tpl.json | sed "s/@target_version@/$target_upgrade_version/g" > jiva-target-patch.json
-sed "s/@target_version@/$target_upgrade_version/g" jiva-target-svc-patch.tpl.json > jiva-target-svc-patch.json
-
-#################################################################################
-# STEP: Patch OpenEBS volume deployments (jiva-target, jiva-replica & jiva-svc) #
-#################################################################################
-
-#### PATCH JIVA REPLICA DEPLOYMENT ####
-if [[ "$replica_version" != "$target_upgrade_version" ]]; then
-    echo "Upgrading Replica Deployment to $target_upgrade_version"
-
-    # Setting the update strategy to recreate
-    setDeploymentRecreateStrategy $ns $r_dep
-
-    kubectl patch deployment --namespace $ns $r_dep -p "$(cat jiva-replica-patch.json)"
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch the deployment $r_dep | Exit code: $rc"; exit 1; fi
-
-    kubectl delete rs $r_rs --namespace $ns
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to delete ReplicaSet $r_rs | Exit code: $rc"; exit 1; fi
-
-    rollout_status=$(kubectl rollout status --namespace $ns deployment/$r_dep)
-    rc=$?; if [[ ($rc -ne 0) || !($rollout_status =~ "successfully rolled out") ]];
-    then echo "RollOut for $r_dep failed | Exit code: $rc"; exit 1; fi
-else
-    echo "Replica Deployment $r_dep is already at $target_upgrade_version"
-fi
-
-#### PATCH TARGET DEPLOYMENT ####
-if [[ "$controller_version" != "$target_upgrade_version" ]]; then
-    echo "Upgrading Target Deployment to $target_upgrade_version"
-
-    # Setting the update strategy to recreate
-    setDeploymentRecreateStrategy $ns $c_dep
-
-    kubectl patch deployment --namespace $ns $c_dep -p "$(cat jiva-target-patch.json)"
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch deployment $c_dep | Exit code: $rc"; exit 1; fi
-
-    kubectl delete rs $c_rs --namespace $ns
-    rc=$?; if [ $rc -ne 0 ]; then echo "Failed to delete ReplicaSet $c_rs | Exit code: $rc"; exit 1; fi
-
-    rollout_status=$(kubectl rollout status --namespace $ns deployment/$c_dep)
-    rc=$?; if [[ ($rc -ne 0) || !($rollout_status =~ "successfully rolled out") ]];
-    then echo "RollOut for $c_dep failed | Exit code: $rc"; exit 1; fi
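# For reference, with target_upgrade_version=0.8.2 the jiva-target-patch.json
# rendered from the template deleted above would read roughly as follows
# (<c_name> stands for the controller container name substituted by sed):
#
#   {
#     "metadata": { "labels": { "openebs.io/version": "0.8.2" } },
#     "spec": { "template": { "spec": { "containers": [
#       { "name": "<c_name>", "image": "quay.io/openebs/jiva:0.8.2" },
#       { "name": "maya-volume-exporter", "image": "quay.io/openebs/m-exporter:0.8.2" }
#     ] } } }
#   }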
-else - echo "Controller Deployment $c_dep is already at $target_upgrade_version" - -fi - -# #### PATCH TARGET SERVICE #### -if [[ "$controller_svc_version" != "$target_upgrade_version" ]]; then - echo "Upgrading Target Service to $target_upgrade_version" - kubectl patch service --namespace $ns $c_svc -p "$(cat jiva-target-svc-patch.json)" - rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch the service $svc | Exit code: $rc"; exit; fi -else - echo "Controller service $c_svc is already at $target_upgrade_version" -fi - -##Patch jiva snapshotdata crs related to pv -run_snapshotdata_upgrades $pv -rc=$? -if [ $rc -ne 0 ]; then - exit 1 -fi - -echo "Clearing temporary files" -rm jiva-replica-patch.json -rm jiva-target-patch.json -rm jiva-target-svc-patch.json - -echo "Successfully upgraded $pv to $target_upgrade_version Please run your application checks." -exit 0 - diff --git a/k8s/upgrades/dev/labeltagger.sh b/k8s/upgrades/dev/labeltagger.sh deleted file mode 100755 index 228654fd2b..0000000000 --- a/k8s/upgrades/dev/labeltagger.sh +++ /dev/null @@ -1,48 +0,0 @@ -#!/usr/bin/env bash -##################################################################### -# NOTES: This script finds unlabeled volume resources of openebs # -##################################################################### - -function usage() { - echo - echo "Usage: This script adds openebs.io/version label to unlabeled volume resources of openebs" - echo - echo "$0 " - echo - echo "Example: $0 0.8.1" - exit 1 -} - -if [ "$#" -ne 1 ]; then - usage -fi - -currentVersion=$1 -echo $currentVersion - -echo "#!/usr/bin/env bash" > label.sh -echo "set -e" >> label.sh - -echo "##### Creating the tag script #####" -# Adding cstor resources -kubectl get cstorvolume --all-namespaces -l 'openebs.io/version notin (0.8.2), openebs.io/version notin (0.8.1)' -o jsonpath="{range .items[*]}kubectl label {@.kind} {@.metadata.name} openebs.io/version=$currentVersion -n {@.metadata.namespace} --overwrite=true;{end}" | tr ";" "\n" >> label.sh -kubectl get cstorvolumereplicas --all-namespaces -l 'openebs.io/version notin (0.8.2), openebs.io/version notin (0.8.1)' -o jsonpath="{range .items[*]}kubectl label {@.kind} {@.metadata.name} openebs.io/version=$currentVersion -n {@.metadata.namespace} --overwrite=true;{end}" | tr ";" "\n" >> label.sh -kubectl get service --all-namespaces -l 'openebs.io/version notin (0.8.2), openebs.io/version notin (0.8.1), openebs.io/target-service in (cstor-target-svc)' -o jsonpath="{range .items[*]}kubectl label {@.kind} {@.metadata.name} openebs.io/version=$currentVersion -n {@.metadata.namespace} --overwrite=true;{end}" | tr ";" "\n" >> label.sh -kubectl get deployment --all-namespaces -l 'openebs.io/version notin (0.8.2), openebs.io/version notin (0.8.1), openebs.io/target in (cstor-target)' -o jsonpath="{range .items[*]}kubectl label {@.kind} {@.metadata.name} openebs.io/version=$currentVersion -n {@.metadata.namespace} --overwrite=true;{end}" | tr ";" "\n" >> label.sh - -# Adding jiva resources -kubectl get service --all-namespaces -l 'openebs.io/version notin (0.8.2), openebs.io/version notin (0.8.1), openebs.io/controller-service in (jiva-controller-svc)' -o jsonpath="{range .items[*]}kubectl label {@.kind} {@.metadata.name} openebs.io/version=$currentVersion -n {@.metadata.namespace} --overwrite=true;{end}" | tr ";" "\n" >> label.sh -kubectl get deployment --all-namespaces -l 'openebs.io/version notin (0.8.2), openebs.io/version notin (0.8.1), openebs.io/replica in (replica)' -o jsonpath="{range 
.items[*]}kubectl label {@.kind} {@.metadata.name} openebs.io/version=$currentVersion -n {@.metadata.namespace} --overwrite=true;{end}" | tr ";" "\n" >> label.sh -kubectl get deployment --all-namespaces -l 'openebs.io/version notin (0.8.2), openebs.io/version notin (0.8.1), openebs.io/controller in (jiva-controller)' -o jsonpath="{range .items[*]}kubectl label {@.kind} {@.metadata.name} openebs.io/version=$currentVersion -n {@.metadata.namespace} --overwrite=true;{end}" | tr ";" "\n" >> label.sh - -# Adding pool resources -kubectl get csp -l 'openebs.io/version notin (0.8.2), openebs.io/version notin (0.8.1)' -o jsonpath="{range .items[*]}kubectl label {@.kind} {@.metadata.name} openebs.io/version=$currentVersion --overwrite=true;{end}" | tr ";" "\n" >> label.sh -kubectl get deployment --all-namespaces -l 'openebs.io/version notin (0.8.2), openebs.io/version notin (0.8.1), app in (cstor-pool)' -o jsonpath="{range .items[*]}kubectl label {@.kind} {@.metadata.name} openebs.io/version=$currentVersion -n {@.metadata.namespace} --overwrite=true;{end}" | tr ";" "\n" >> label.sh -kubectl get sp -l 'openebs.io/version notin (0.8.2), openebs.io/version notin (0.8.1), openebs.io/cas-type in (cstor)' -o jsonpath="{range .items[*]}kubectl label {@.kind} {@.metadata.name} openebs.io/version=$currentVersion --overwrite=true;{end}" | tr ";" "\n" >> label.sh - -# Running the label.sh -chmod +x ./label.sh -./label.sh - -# Removing the generated script -rm label.sh diff --git a/k8s/upgrades/dev/pre-check.sh b/k8s/upgrades/dev/pre-check.sh deleted file mode 100755 index 99837d9d6a..0000000000 --- a/k8s/upgrades/dev/pre-check.sh +++ /dev/null @@ -1,71 +0,0 @@ -#!/usr/bin/env bash -##################################################################### -# NOTES: This script finds unlabeled volume resources of openebs # -##################################################################### - -# Search of CStor Resources -printf "############## Unlabeled CStor Volumes Resources ##############\n\n" - -printf "CStor Volumes:\n" -echo "--------------" -printf "\n" -# Search for CStor Volumes -kubectl get cstorvolume --all-namespaces -l 'openebs.io/version notin (0.8.2), openebs.io/version notin (0.8.1)' - -printf "\nCStor Volumes Replicas:\n" -echo "-----------------------" -printf "\n" -# Search for CStor Volume Replicas -kubectl get cstorvolumereplicas --all-namespaces -l 'openebs.io/version notin (0.8.2), openebs.io/version notin (0.8.1)' - -printf "\nCStor Target service:\n" -echo "---------------------" -printf "\n" -# Search for CStor Target Service -kubectl get service --all-namespaces -l 'openebs.io/version notin (0.8.2), openebs.io/version notin (0.8.1), openebs.io/target-service in (cstor-target-svc)' - -printf "\nCStor Target Deployment:\n" -echo "---------------------" -printf "\n" -# Search for CStor Target Service -kubectl get deployment --all-namespaces -l 'openebs.io/version notin (0.8.2), openebs.io/version notin (0.8.1), openebs.io/target in (cstor-target)' - - -printf "\n\n############## unlabeled Jiva Volumes Resources ##############\n\n" - -printf "\nJiva Controller service:\n" -echo "------------------------" -printf "\n" -# Search for Jiva Controller Services -kubectl get service --all-namespaces -l 'openebs.io/version notin (0.8.2), openebs.io/version notin (0.8.1), openebs.io/controller-service in (jiva-controller-svc)' - -printf "\nJiva Replica Deployment:\n" -echo "------------------------" -printf "\n" -# Search for Jiva Replica Deployment -kubectl get deployment --all-namespaces -l 
'openebs.io/version notin (0.8.2), openebs.io/version notin (0.8.1), openebs.io/replica in (replica)'
-
-printf "\nJiva Controller Deployment:\n"
-echo "------------------------"
-printf "\n"
-# Search for Jiva Controller Deployment
-kubectl get deployment --all-namespaces -l 'openebs.io/version notin (0.8.2), openebs.io/version notin (0.8.1), openebs.io/controller in (jiva-controller)'
-
-printf "\n\n############## Storage Pool Resources ##############\n\n"
-
-printf "\nCStor Pool:\n"
-echo "-----------"
-printf "\n"
-kubectl get csp -l 'openebs.io/version notin (0.8.2), openebs.io/version notin (0.8.1)'
-
-printf "\nCStor Pool Deployments:\n"
-echo "-----------------------"
-printf "\n"
-kubectl get deployment --all-namespaces -l 'openebs.io/version notin (0.8.2), openebs.io/version notin (0.8.1), app in (cstor-pool)'
-
-printf "\nStorage Pool:\n"
-echo "------------"
-printf "\n"
-kubectl get sp -l 'openebs.io/version notin (0.8.2), openebs.io/version notin (0.8.1), openebs.io/cas-type in (cstor)'
-
-printf "Note: The unlabeled resources can be tagged with the correct version of openebs using labeltagger.sh.\n Example: ./labeltagger.sh 0.8.1"
diff --git a/k8s/upgrades/dev/snapshotdata_upgrade.sh b/k8s/upgrades/dev/snapshotdata_upgrade.sh
deleted file mode 100755
index d4b9f3b5ca..0000000000
--- a/k8s/upgrades/dev/snapshotdata_upgrade.sh
+++ /dev/null
@@ -1,38 +0,0 @@
-run_snapshotdata_upgrades()
-{
-    if [ $# -eq 1 ]; then
-        pv=$1
-    else
-        echo "please pass the PersistentVolume name; got pv: $pv"
-        exit 1
-    fi
-    # Get the list of volumesnapshotdata related to the given PV
-    volumesnapshotdata_list=$(kubectl get volumesnapshotdata\
-        -o jsonpath="{range .items[?(@.spec.persistentVolumeRef.name=='$pv')]}{@.metadata.name}:{end}")
-    rc=$?
-    if [ $rc -ne 0 ]; then
-        echo "failed to get the snapshotdata name list"
-        exit 1
-    fi
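# The jsonpath above uses a filter expression so that only VolumeSnapshotData
# objects referencing the PV being upgraded are selected. For a hypothetical
# PV named pvc-abc123 it expands to:
#
#   kubectl get volumesnapshotdata \
#     -o jsonpath="{range .items[?(@.spec.persistentVolumeRef.name=='pvc-abc123')]}{@.metadata.name}:{end}"
#
# and prints a ':'-delimited list of names (e.g. "snapdata-1:snapdata-2:"),
# which the loop below splits with tr.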
-
-    if [ ! -z "$volumesnapshotdata_list" ]; then
-
-        pv_size=""
-        pv_size=$(kubectl get pv $pv -o jsonpath='{.spec.capacity.storage}')
-        rc=$?
-        if [ $rc -ne 0 ]; then
-            echo "failed to get the size of pv: $pv"
-            exit 1
-        fi
-
-        ## update volumesnapshotdata-patch.tpl.json with the pv size
-        sed "s|@size@|$pv_size|g" volumesnapshotdata-patch.tpl.json > volumesnapshotdata-patch.json
-        for snapdata_name in `echo $volumesnapshotdata_list | tr ":" " "`; do
-            ## patch the volumesnapshotdata cr ###
-            kubectl patch volumesnapshotdata $snapdata_name -p "$(cat volumesnapshotdata-patch.json)" --type=merge
-            rc=$?; if [ $rc -ne 0 ]; then echo "Error occurred while upgrading volumesnapshotdata name: $snapdata_name | Exit Code: $rc"; exit 1; fi
-        done
-        ## Remove the temporary file
-        rm volumesnapshotdata-patch.json
-    fi
-}
diff --git a/k8s/upgrades/dev/volumesnapshotdata-patch.tpl.json b/k8s/upgrades/dev/volumesnapshotdata-patch.tpl.json
deleted file mode 100644
index aa41f92321..0000000000
--- a/k8s/upgrades/dev/volumesnapshotdata-patch.tpl.json
+++ /dev/null
@@ -1,7 +0,0 @@
-{
-    "spec": {
-        "openebsVolume": {
-            "capacity": "@size@"
-        }
-    }
-}
diff --git a/k8s/upgrades/jiva-0.6.0-0.8.1/README.md b/k8s/upgrades/jiva-0.6.0-0.8.1/README.md
deleted file mode 100644
index 90c885a31f..0000000000
--- a/k8s/upgrades/jiva-0.6.0-0.8.1/README.md
+++ /dev/null
@@ -1,147 +0,0 @@
-# UPGRADE FROM OPENEBS 0.6.0 TO 0.8.1
-
-## Overview
-
-This document describes the steps for upgrading OpenEBS from 0.6.0 to 0.8.1.
-
-The upgrade of OpenEBS is a two-step process:
-- *Step 1* - Upgrade the OpenEBS Operator
-- *Step 2* - Upgrade the OpenEBS Volumes from the previous version (0.6.0)
-
-### Terminology
-- *OpenEBS Operator: Refers to maya-apiserver & openebs-provisioner along with their respective services, service accounts, roles, and rolebindings*
-- *OpenEBS Volume: The Jiva controller (aka target) & replica pods*
-
-## Prerequisites
-
-*All steps described in this document need to be performed on the Kubernetes master or from a machine that has access to the Kubernetes master*
-
-### Download the upgrade scripts
-
-You can either `git clone` or download the upgrade scripts.
-
-```
-mkdir upgrade-openebs
-cd upgrade-openebs
-git clone https://github.com/openebs/openebs.git
-cd openebs/k8s/upgrades/jiva-0.6.0-0.8.1/
-```
-
-### Breaking Changes in 0.7.x
-
-#### Default Jiva Storage Pool
-OpenEBS 0.7.0 auto-installs a default Jiva Storage Pool and a default Storage Class, named `default` and `openebs-jiva-default` respectively. If you have a storage pool named `default` created in an earlier version, you will have to re-apply your Storage Pool after the upgrade is completed.
-
-Before upgrading the OpenEBS Operator, check if you are using a storage pool named `default`, which will conflict with the default jiva pool installed with OpenEBS 0.8.1:
-```
-./pre_upgrade.sh
-```
-
-#### Storage Classes
-OpenEBS supports specifying Storage Policies in Storage Classes. The way storage policies are specified has changed in 0.7.x: the policies have to be specified under `metadata` (as annotations) instead of under `parameters`.
-
-For example, if your storage class looks like this in 0.6.0:
-```
-apiVersion: storage.k8s.io/v1
-kind: StorageClass
-metadata:
-  name: openebs-mongodb
-provisioner: openebs.io/provisioner-iscsi
-parameters:
-  openebs.io/storage-pool: "default"
-  openebs.io/jiva-replica-count: "3"
-  openebs.io/volume-monitor: "true"
-  openebs.io/capacity: 5G
-  openebs.io/fstype: "xfs"
-```
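To see which jiva StorageClasses still carry the old-style parameters, you can loop over them the same way `upgrade_sc.sh` (shown later in this change) does; converted classes carry the `openebs.io/cas-type` label, so the following is only a sketch of that check:

```
for sc in $(kubectl get sc -o jsonpath='{range .items[*]}{@.metadata.name} {end}'); do
  p=$(kubectl get sc "$sc" -o jsonpath="{.provisioner}")
  ct=$(kubectl get sc "$sc" -o jsonpath="{.metadata.labels.openebs\.io/cas-type}")
  # an openebs.io/provisioner-iscsi class without the cas-type label is not yet converted
  if [ "$p" = "openebs.io/provisioner-iscsi" ] && [ -z "$ct" ]; then echo "$sc"; fi
done
```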
-There is no need to mention the volume-monitor and capacity with 0.7.0. The remaining policies, such as the storage pool, replica count, and fstype, should be specified as follows:
-```
-apiVersion: storage.k8s.io/v1
-kind: StorageClass
-metadata:
-  name: openebs-mongodb
-  annotations:
-    cas.openebs.io/config: |
-      - name: ReplicaCount
-        value: "3"
-      - name: StoragePool
-        value: default
-      - name: FSType
-        value: "xfs"
-provisioner: openebs.io/provisioner-iscsi
-```
-
-Make the edits to your Storage Class YAMLs, then delete the old objects and add the edited ones back. A delete and re-apply is required because updates to Storage Class parameters are not possible.
-
-If you are using `ext4` for FSType, you could use the following script to upgrade your StorageClasses.
-```
-./upgrade_sc.sh
-```
-
-Alternatively, you can skip this step and re-apply your StorageClasses as per the 0.7.0 volume policy specification.
-
-**Important Note: StorageClasses have to be updated prior to provisioning any new volumes with 0.7.0.**
-
-## Step 1: Upgrade the OpenEBS Operator
-
-### Upgrading OpenEBS Operator CRDs and Deployments
-
-The upgrade steps vary depending on the way OpenEBS was installed; select one of the following:
-
-#### Install/Upgrade using kubectl (using openebs-operator.yaml)
-
-**The sample steps below will work if you have installed openebs without modifying the default values in openebs-operator.yaml. If you have customized it for your cluster, you will have to download the 0.8.1 openebs-operator.yaml and customize it again.**
-
-```
-# Upgrade to the 0.8.1 OpenEBS Operator
-kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.8.1.yaml
-```
-
-#### Install/Upgrade using helm chart (using stable/openebs, openebs-charts repo, etc.)
-
-**The sample steps below will work if you have installed openebs with the default values provided by the stable/openebs helm chart.**
-
-Before upgrading using helm, please review the default values available with the latest stable/openebs chart (https://raw.githubusercontent.com/helm/charts/master/stable/openebs/values.yaml).
-
-- If the default values seem appropriate, you can use `helm upgrade --reset-values stable/openebs`.
-- If not, copy the defaults into your own values file (say, custom-values.yaml) and edit the values to suit your environment. You can then upgrade using your custom values:
-`helm upgrade stable/openebs -f custom-values.yaml`
-
-#### Using a customized operator YAML or helm chart
-As a first step, you must update your custom helm chart or YAML with the 0.8.1 release tags and the changes made in the values/templates.
-
-You can use the following as a reference for the changes in 0.8.1:
-- openebs-charts [PR#2352](https://github.com/openebs/openebs/pull/2352)
-
-After updating the YAML or the helm chart/values, use the above procedures to upgrade the OpenEBS Operator.
-
-## Step 2: Upgrade the OpenEBS Volumes
-
-Even after the OpenEBS Operator has been upgraded to 0.8.1, the volumes will continue to work with the older version. Each of the volumes should be upgraded (one at a time) to 0.8.1 using the steps provided below.
-
-*Note: Upgrade functionality is still under active development. It is highly recommended to schedule downtime for the application using the OpenEBS PV while performing this upgrade. Also, make sure you have taken a backup of the data before starting the upgrade procedure below.*
-
-Limitations:
-- this is a preliminary script, intended only for use on volumes where the data has been backed up.
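-- before you begin, you can confirm the volume's target and replica pods are healthy by listing them via the volume's pre-upgrade `vsm` label, e.g. for the example PV used below (a sketch): `kubectl get pods --all-namespaces -l "vsm=pvc-48fb36a2-947f-11e8-b1f3-42010a800004"`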
-- please have the following link handy in case the volume gets into read-only during upgrade - https://docs.openebs.io/docs/next/readonlyvolumes.html -- automatic rollback option is not provided. To rollback, you need to update the controller, exporter and replica pod images to the previous version -- in the process of running the below steps, if you run into issues, you can always reach us on slack - - -``` -kubectl get pv -``` - -``` -NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE -pvc-48fb36a2-947f-11e8-b1f3-42010a800004 5G RWO Delete Bound percona-test/demo-vol1-claim openebs-percona 8m -``` - -### Upgrade the PV that needs to be upgraded. - -``` -./jiva_volume_upgrade.sh pvc-48fb36a2-947f-11e8-b1f3-42010a800004 -``` - diff --git a/k8s/upgrades/jiva-0.6.0-0.8.1/jiva-replica-patch.tpl.json b/k8s/upgrades/jiva-0.6.0-0.8.1/jiva-replica-patch.tpl.json deleted file mode 100644 index eaa5f4276d..0000000000 --- a/k8s/upgrades/jiva-0.6.0-0.8.1/jiva-replica-patch.tpl.json +++ /dev/null @@ -1,119 +0,0 @@ -{ - "metadata": { - "annotations": { - "openebs.io/capacity": "@capacity@", - "openebs.io/storage-class-ref": "name: @sc_name@\nresourceVersion: @sc_resource_version@\n", - "openebs.io/storage-pool": "TODO" - }, - "labels": { - "openebs.io/cas-type": "jiva", - "openebs.io/persistent-volume": "@pv-name@", - "openebs.io/persistent-volume-claim": "@pvc-name@", - "openebs.io/replica": "jiva-replica", - "openebs.io/storage-engine-type": "jiva", - "vsm": "deprecated", - "openebs.io/version": "@target_version@" - } - }, - "spec": { - "selector": { - "matchLabels": { - "openebs.io/persistent-volume": "@pv-name@", - "openebs.io/replica": "jiva-replica", - "vsm": "deprecated" - } - }, - "template": { - "metadata": { - "annotations": { - "openebs.io/storage-class-ref": "name: @sc_name@\nresourceVersion: @sc_resource_version@\n" - }, - "labels": { - "openebs.io/persistent-volume": "@pv-name@", - "openebs.io/persistent-volume-claim": "@pvc-name@", - "openebs.io/replica": "jiva-replica", - "vsm": "deprecated" - } - }, - "spec": { - "containers":[ - { - "name": "@r_name@", - "image": "quay.io/openebs/jiva:@target_version@" - } - ], - "nodeSelector": { - "openebs-pv-@pv-name@": "@replica_node_label@" - }, - "affinity": { - "podAntiAffinity": { - "requiredDuringSchedulingIgnoredDuringExecution" : [ - { - "labelSelector": { - "matchLabels": { - "openebs.io/replica": "jiva-replica", - "openebs.io/persistent-volume": "@pv-name@" - } - }, - "topologyKey": "kubernetes.io/hostname" - } - ] - } - }, - "tolerations": [ - { - "effect": "NoExecute", - "key": "node.alpha.kubernetes.io/notReady", - "operator": "Exists" - }, - { - "effect": "NoExecute", - "key": "node.alpha.kubernetes.io/unreachable", - "operator": "Exists" - }, - { - "effect": "NoExecute", - "key": "node.kubernetes.io/not-ready", - "operator": "Exists" - }, - { - "effect": "NoExecute", - "key": "node.kubernetes.io/unreachable", - "operator": "Exists" - }, - { - "effect": "NoExecute", - "key": "node.kubernetes.io/out-of-disk", - "operator": "Exists" - }, - { - "effect": "NoExecute", - "key": "node.kubernetes.io/memory-pressure", - "operator": "Exists" - }, - { - "effect": "NoExecute", - "key": "node.kubernetes.io/disk-pressure", - "operator": "Exists" - }, - { - "effect": "NoExecute", - "key": "node.kubernetes.io/network-unavailable", - "operator": "Exists" - }, - { - "effect": "NoExecute", - "key": "node.kubernetes.io/unschedulable", - "operator": "Exists" - }, - { - "effect": "NoExecute", - "key": 
"node.cloudprovider.kubernetes.io/uninitialized", - "operator": "Exists" - } - ] - } - } - } - } - \ No newline at end of file diff --git a/k8s/upgrades/jiva-0.6.0-0.8.1/jiva-target-patch.tpl.json b/k8s/upgrades/jiva-0.6.0-0.8.1/jiva-target-patch.tpl.json deleted file mode 100644 index bca561fb1a..0000000000 --- a/k8s/upgrades/jiva-0.6.0-0.8.1/jiva-target-patch.tpl.json +++ /dev/null @@ -1,92 +0,0 @@ -{ - "metadata": { - "annotations": { - "openebs.io/fs-type": "ext4", - "openebs.io/lun": "0", - "openebs.io/volume-monitor": "true", - "openebs.io/volume-type": "jiva", - "openebs.io/storage-class-ref": "name: @sc_name@\nresourceVersion: @sc_resource_version@\n" - }, - "labels": { - "openebs.io/cas-type": "jiva", - "openebs.io/storage-engine-type": "jiva", - "openebs.io/controller": "jiva-controller", - "openebs.io/persistent-volume": "@pv-name@", - "openebs.io/persistent-volume-claim": "@pvc-name@", - "vsm": "deprecated", - "openebs.io/version": "@target_version@" - } - }, - "spec": { - "selector": { - "matchLabels": { - "openebs.io/controller": "jiva-controller", - "openebs.io/persistent-volume": "@pv-name@", - "vsm": "deprecated" - } - }, - "template": { - "metadata": { - "annotations": { - "openebs.io/storage-class-ref": "name: @sc_name@\nresourceVersion: @sc_resource_version@\n", - "prometheus.io/path": "/metrics", - "prometheus.io/port": "9500", - "prometheus.io/scrape": "true" - }, - "labels": { - "openebs.io/controller": "jiva-controller", - "openebs.io/persistent-volume": "@pv-name@", - "openebs.io/persistent-volume-claim": "@pvc-name@", - "vsm": "deprecated" - } - }, - "spec": { - "containers": [ - { - "name": "@c_name@", - "image": "quay.io/openebs/jiva:@target_version@", - "env": [ - { - "name": "REPLICATION_FACTOR", - "value": "@rep_count@" - } - ] - }, - { - "name": "maya-volume-exporter", - "command": [ - "maya-exporter" - ], - "image": "quay.io/openebs/m-exporter:@target_version@" - } - ], - "tolerations": [ - { - "effect": "NoExecute", - "key": "node.alpha.kubernetes.io/notReady", - "operator": "Exists", - "tolerationSeconds": 0 - }, - { - "effect": "NoExecute", - "key": "node.alpha.kubernetes.io/unreachable", - "operator": "Exists", - "tolerationSeconds": 0 - }, - { - "effect": "NoExecute", - "key": "node.kubernetes.io/not-ready", - "operator": "Exists", - "tolerationSeconds": 0 - }, - { - "effect": "NoExecute", - "key": "node.kubernetes.io/unreachable", - "operator": "Exists", - "tolerationSeconds": 0 - } - ] - } - } - } -} \ No newline at end of file diff --git a/k8s/upgrades/jiva-0.6.0-0.8.1/jiva-target-svc-patch.tpl.json b/k8s/upgrades/jiva-0.6.0-0.8.1/jiva-target-svc-patch.tpl.json deleted file mode 100644 index 7d06ecaa7d..0000000000 --- a/k8s/upgrades/jiva-0.6.0-0.8.1/jiva-target-svc-patch.tpl.json +++ /dev/null @@ -1,43 +0,0 @@ -{ - "metadata": { - "annotations": { - "openebs.io/storage-class-ref": "name: @sc_name@\nresourceVersion: @sc_resource_version@\n" - }, - "labels": { - "openebs.io/cas-type": "jiva", - "openebs.io/storage-engine-type": "jiva", - "openebs.io/controller-service": "jiva-controller-svc", - "openebs.io/persistent-volume": "@pv-name@", - "openebs.io/persistent-volume-claim": "@pvc-name@", - "openebs.io/version": "@target_version@" - } - }, - "spec": { - "selector": { - "openebs.io/controller": "jiva-controller", - "openebs.io/persistent-volume": "@pv-name@", - "vsm": "deprecated" - }, - "ports": [ - { - "name": "iscsi", - "port": 3260, - "protocol": "TCP", - "targetPort": 3260 - }, - { - "name": "api", - "port": 9501, - "protocol": "TCP", - 
"targetPort": 9501 - }, - { - "name": "exporter", - "port": 9500, - "protocol": "TCP", - "targetPort": 9500 - } - ] - } - } - \ No newline at end of file diff --git a/k8s/upgrades/jiva-0.6.0-0.8.1/jiva_volume_upgrade.sh b/k8s/upgrades/jiva-0.6.0-0.8.1/jiva_volume_upgrade.sh deleted file mode 100755 index d1da5c1d41..0000000000 --- a/k8s/upgrades/jiva-0.6.0-0.8.1/jiva_volume_upgrade.sh +++ /dev/null @@ -1,259 +0,0 @@ -#!/usr/bin/env bash -set -x -################################################################ -# STEP: Get Persistent Volume (PV) name as argument # -# # -# NOTES: Obtain the pv to upgrade via "kubectl get pv" # -################################################################ - -target_upgrade_version="0.8.1" -current_version="0.6.0" - -function usage() { - echo - echo "Usage:" - echo - echo "$0 " - echo - echo " Get the PV name using: kubectl get pv" - exit 1 -} - -function setDeploymentRecreateStrategy() { - dns=$1 # deployment namespace - dn=$2 # deployment name - currStrategy=`kubectl get deploy -n $dns $dn -o jsonpath="{.spec.strategy.type}"` - rc=$?; if [ $rc -ne 0 ]; then echo "Failed to get the deployment stratergy for $dn | Exit code: $rc"; exit; fi - - if [ $currStrategy != "Recreate" ]; then - kubectl patch deployment --namespace $dns --type json $dn -p "$(cat patch-strategy-recreate.json)" - rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch the deployment $dn | Exit code: $rc"; exit; fi - echo "Deployment upgrade strategy set as recreate" - else - echo "Deployment upgrade strategy was already set as recreate" - fi -} - -if [ "$#" -ne 1 ]; then - usage -fi - -pv=$1 -replica_node_label="openebs-jiva" - -# Check if pv exists -kubectl get pv $pv &>/dev/null;check_pv=$? -if [ $check_pv -ne 0 ]; then - echo "$pv not found";exit 1; -fi - -ns=`kubectl get pv $pv -o jsonpath="{.spec.claimRef.namespace}"` -sc_name=`kubectl get pv $pv -o jsonpath="{.spec.storageClassName}"` -sc_res_ver=`kubectl get sc $sc_name -n $ns -o jsonpath="{.metadata.resourceVersion}"` -pv_capacity=`kubectl get pv $pv -o jsonpath="{.spec.capacity.storage}"` -pvc_name=`kubectl get pv $pv -o jsonpath="{.spec.claimRef.name}"` - -################################################################# -# STEP: Generate deploy, replicaset and container names from PV # -# # -# NOTES: Ex: If PV="pvc-cec8e86d-0bcc-11e8-be1c-000c298ff5fc" # -# # -# ctrl-dep: pvc-cec8e86d-0bcc-11e8-be1c-000c298ff5fc-ctrl # -################################################################# - -c_dep=$(kubectl get deploy -n $ns -l vsm=$pv,openebs/controller=jiva-controller -o jsonpath="{.items[*].metadata.name}") -r_dep=$(kubectl get deploy -n $ns -l vsm=$pv,openebs/replica=jiva-replica -o jsonpath="{.items[*].metadata.name}") -c_svc=$(kubectl get svc -n $ns -l vsm=$pv -o jsonpath="{.items[*].metadata.name}") -c_name=$(kubectl get deploy -n $ns $c_dep -o jsonpath="{range .spec.template.spec.containers[*]}{@.name}{'\n'}{end}" | grep "ctrl-con") -r_name=$(kubectl get deploy -n $ns $r_dep -o jsonpath="{range .spec.template.spec.containers[*]}{@.name}{'\n'}{end}" | grep "rep-con") - -# Fetch the older target and replica - ReplicaSet objects which need to be -# deleted before upgrading. If not deleted, the new pods will be stuck in -# creating state - due to affinity rules. 
- -c_rs=$(kubectl get rs -o name --namespace $ns -l vsm=$pv,openebs/controller=jiva-controller | cut -d '/' -f 2) -r_rs=$(kubectl get rs -o name --namespace $ns -l vsm=$pv,openebs/replica=jiva-replica | cut -d '/' -f 2) - -################################################################ -# STEP: Update patch files with appropriate resource names # -# # -# NOTES: Placeholder for resourcename in the patch files are # -# replaced with respective values derived from the PV in the # -# previous step # -################################################################ - -# Check if openebs resources exist and provisioned version is 0.8 - -if [[ -z $c_rs ]]; then - echo "Target Replica set not found"; exit 1; -fi - -if [[ -z $r_rs ]]; then - echo "Replica Replica set not found"; exit 1; -fi - -if [[ -z $c_dep ]]; then - echo "Target deployment not found"; exit 1; -fi - -if [[ -z $r_dep ]]; then - echo "Replica deployment not found"; exit 1; -fi - -if [[ -z $c_svc ]]; then - echo "Target service not found"; exit 1; -fi - -if [[ -z $r_name ]]; then - echo "Replica container not found"; exit 1; -fi - -if [[ -z $c_name ]]; then - echo "Target container not found"; exit 1; -fi - -controller_version=`kubectl get deployment $c_dep -n $ns -o jsonpath='{.metadata.labels.openebs\.io/version}'` -if [[ "$controller_version" != "" ]] && [[ "$controller_version" == "$target_upgrade_version" ]]; then - echo "Current Target deployment $c_dep version is not $current_version or $target_upgrade_version";exit 1; -fi -replica_version=`kubectl get deployment $r_dep -n $ns -o jsonpath='{.metadata.labels.openebs\.io/version}'` -if [[ "$replica_version" != "" ]] && [[ "$replica_version" == "$target_upgrade_version" ]]; then - echo "Current Replica deployment $r_dep version is not $current_version or $target_upgrade_version";exit 1; -fi -controller_svc_version=`kubectl get svc $c_svc -n $ns -o jsonpath='{.metadata.labels.openebs\.io/version}'` -if [[ "$controller_svc_version" != "" ]] && [[ "$controller_svc_version" == "$target_upgrade_version" ]] ; then - echo "Current Target service $c_svc version is not $current_version or $target_upgrade_version";exit 1; -fi - -# Get the number of replicas configured. 
-# This field is currently not used, but can add additional validations -# based on the nodes and expected number of replicas -rep_count=`kubectl get deploy $r_dep --namespace $ns -o jsonpath="{.spec.replicas}"` - -# Get the list of nodes where replica pods are running, delimited by ':' -rep_nodenames=`kubectl get pods -n $ns \ - -l "vsm=$pv" -l "openebs/replica=jiva-replica" \ - -o jsonpath="{range .items[*]}{@.spec.nodeName}:{end}"` - -echo "Checking if the node with replica pod has been labeled with $replica_node_label" -for rep_node in `echo $rep_nodenames | tr ":" " "`; do - nl="";nl=`kubectl get nodes $rep_node -o jsonpath="{.metadata.labels.openebs-pv-$pv}"` - echo "Labeling $rep_node"; - kubectl label node $rep_node "openebs-pv-${pv}=$replica_node_label" --overwrite -done - -sed "s/@sc_name@/$sc_name/g" jiva-replica-patch.tpl.json | sed "s/@sc_resource_version@/$sc_res_ver/g" | sed "s/@capacity@/$pv_capacity/g" | sed "s/@replica_node_label@/$replica_node_label/g" | sed "s/@r_name@/$r_name/g" | sed "s/@pv-name@/$pv/g" | sed "s/@target_version@/$target_upgrade_version/g" | sed "s/@pvc-name@/$pvc_name/g" > jiva-replica-patch.json -sed "s/@sc_name@/$sc_name/g" jiva-target-patch.tpl.json | sed "s/@sc_resource_version@/$sc_res_ver/g" | sed "s/@c_name@/$c_name/g" | sed "s/@target_version@/$target_upgrade_version/g" | sed "s/@pvc-name@/$pvc_name/g" | sed "s/@pv-name@/$pv/g" | sed "s/@rep_count@/$rep_count/g" > jiva-target-patch.json -sed "s/@sc_name@/$sc_name/g" jiva-target-svc-patch.tpl.json | sed "s/@sc_resource_version@/$sc_res_ver/g" | sed "s/@target_version@/$target_upgrade_version/g" | sed "s/@pvc-name@/$pvc_name/g" | sed "s/@pv-name@/$pv/g" > jiva-target-svc-patch.json - -################################################################################# -# STEP: Patch OpenEBS volume deployments (jiva-target, jiva-replica & jiva-svc) # -################################################################################# - -#### PATCH JIVA REPLICA DEPLOYMENT #### -if [[ $replica_version=="" ]] || [[ "$replica_version" != "$target_upgrade_version" ]]; then - echo "Upgrading Replica Deployment to $target_upgrade_version" - - # Setting the update stratergy to recreate - setDeploymentRecreateStrategy $ns $r_dep - - echo "Patching Replica deployment to $target_upgrade_version" - - kubectl patch deployment --namespace $ns $r_dep -p "$(cat jiva-replica-patch.json)" - rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch the deployment $r_dep | Exit code: $rc"; exit; fi - - kubectl delete rs $r_rs --namespace $ns - rc=$?; if [ $rc -ne 0 ]; then echo "Failed to delete ReplicaSet $r_rs | Exit code: $rc"; exit; fi - - rollout_status=$(kubectl rollout status --namespace $ns deployment/$r_dep) - rc=$?; if [[ ($rc -ne 0) || !($rollout_status =~ "successfully rolled out") ]]; - then echo " RollOut for $r_dep failed | Exit code: $rc"; exit; fi -else - echo "Replica Deployment $r_dep is already at $target_upgrade_version" -fi - -#### PATCH TARGET DEPLOYMENT #### -if [[ $controller_version=="" ]] || [[ "$controller_version" != "$target_upgrade_version" ]]; then - echo "Upgrading Target Deployment to $target_upgrade_version" - - # Setting the update stratergy to recreate - setDeploymentRecreateStrategy $ns $c_dep - - echo "Patching target deployment to 0.8.1" - kubectl patch deployment --namespace $ns $c_dep -p "$(cat jiva-target-patch.json)" - rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch deployment $c_dep | Exit code: $rc"; exit; fi - - kubectl delete rs $c_rs --namespace $ns - rc=$?; if [ $rc -ne 0 
]; then echo "Failed to patch deployment $c_rs | Exit code: $rc"; exit; fi - - rollout_status=$(kubectl rollout status --namespace $ns deployment/$c_dep) - rc=$?; if [[ ($rc -ne 0) || !($rollout_status =~ "successfully rolled out") ]]; - then echo " Failed to patch the deployment | Exit code: $rc"; exit; fi -else - echo "Controller Deployment $c_dep is already at $target_upgrade_version" - -fi - -#### PATCH TARGET SERVICE #### -if [[ $controller_svc_version=="" ]] || [[ "$controller_svc_version" != "$target_upgrade_version" ]]; then - echo "Upgrading Target Service to $target_upgrade_version" - # Patching target service to 0.8.1 - kubectl patch service --namespace $ns $c_svc -p "$(cat jiva-target-svc-patch.json)" - rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch the service $svc | Exit code: $rc"; exit; fi -else - echo "Controller service $c_svc is already at $target_upgrade_version" -fi - -# Annotating pv -kubectl annotate pv $pv openebs.io/cas-type=jiva - -controller_version=`kubectl get deployment $c_dep -n $ns -o jsonpath='{.metadata.labels.openebs\.io/version}'` -if [[ "$controller_version" == "$target_upgrade_version" ]]; then - echo "Remove deprecated labels from Controller Deployment" - c_rs=$(kubectl get rs -o name --namespace $ns -l openebs.io/persistent-volume=$pv,openebs.io/controller=jiva-controller | cut -d '/' -f 2) - - kubectl patch deployment --namespace $ns $c_dep --type json -p "$(cat target-patch-remove-labels.json)" - rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: $rc"; exit; fi - - kubectl delete rs $c_rs --namespace $ns - rc=$?; if [ $rc -ne 0 ]; then echo "Failed to patch deployment $c_rs | Exit code: $rc"; exit; fi - - rollout_status=$(kubectl rollout status --namespace $ns deployment/$c_dep) - rc=$?; if [[ ($rc -ne 0) || !($rollout_status =~ "successfully rolled out") ]]; - then echo " Failed to patch the deployment | Exit code: $rc"; exit; fi -fi - -replica_version=`kubectl get deployment $r_dep -n $ns -o jsonpath='{.metadata.labels.openebs\.io/version}'` -if [[ "$replica_version" == "$target_upgrade_version" ]]; then - echo "Remove deprecated labels from Replica Deployment" - r_rs=$(kubectl get rs -o name --namespace $ns -l openebs.io/persistent-volume=$pv,openebs.io/replica=jiva-replica | cut -d '/' -f 2) - - - kubectl patch deployment --namespace $ns $r_dep --type json -p "$(cat replica-patch-remove-labels.json)" - rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: $rc"; exit; fi - - kubectl delete rs $r_rs --namespace $ns - rc=$?; if [ $rc -ne 0 ]; then echo "Failed to delete ReplicaSet $r_rs | Exit code: $rc"; exit; fi - - rollout_status=$(kubectl rollout status --namespace $ns deployment/$r_dep) - rc=$?; if [[ ($rc -ne 0) || !($rollout_status =~ "successfully rolled out") ]]; - then echo " RollOut for $r_dep failed | Exit code: $rc"; exit; fi -fi - -controller_svc_version=`kubectl get svc $c_svc -n $ns -o jsonpath='{.metadata.labels.openebs\.io/version}'` -if [[ "$controller_svc_version" == "$target_upgrade_version" ]] ; then - echo "Remove deprecated labels from Replica Deployment" - kubectl patch service --namespace $ns $c_svc --type json -p "$(cat target-svc-patch-remove-labels.json)" - rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: $rc"; exit; fi - kubectl label svc --namespace $ns $c_svc "vsm-" - kubectl label svc --namespace $ns $c_svc "openebs/controller-service-" -fi - -echo "Clearing temporary files" -rm jiva-replica-patch.json -rm jiva-target-patch.json -rm jiva-target-svc-patch.json - -echo "Successfully upgraded $pv to $target_upgrade_version Please run 
your application checks." -exit 0 - diff --git a/k8s/upgrades/jiva-0.6.0-0.8.1/patch-strategy-recreate.json b/k8s/upgrades/jiva-0.6.0-0.8.1/patch-strategy-recreate.json deleted file mode 100644 index 8c6c5c60af..0000000000 --- a/k8s/upgrades/jiva-0.6.0-0.8.1/patch-strategy-recreate.json +++ /dev/null @@ -1,4 +0,0 @@ -[ - { "op": "remove", "path": "/spec/strategy/rollingUpdate" }, - { "op": "replace", "path": "/spec/strategy/type", "value": "Recreate" } -] diff --git a/k8s/upgrades/jiva-0.6.0-0.8.1/pre_upgrade.sh b/k8s/upgrades/jiva-0.6.0-0.8.1/pre_upgrade.sh deleted file mode 100755 index 37977c4ef5..0000000000 --- a/k8s/upgrades/jiva-0.6.0-0.8.1/pre_upgrade.sh +++ /dev/null @@ -1,75 +0,0 @@ -#!/usr/bin/env bash - -################################################################ -# STEP: Verify if upgrade needs to be performed # -# Check the version of OpenEBS installed # -# Check if default jiva storage pool or storage class can # -# conflict with the installed storage pool or class # -# Check if there are any PVs that need to be upgraded # -# # -################################################################ - -function print_usage() { - echo - echo "Usage:" - echo - echo "$0 " - echo - echo " Namespace where openebs control" - echo " plane pods like maya-apiserver are installed. " - exit 1 -} - -if [ "$#" -ne 1 ]; then - print_usage -fi - - -oens=$1 - - -echo -VERSION_INSTALLED=`kubectl get deploy -n $oens -o yaml \ - | grep m-apiserver | grep image: \ - | awk -F ':' '{print $3}'` - - -echo "Installed Version: $VERSION_INSTALLED" -if [ -z $VERSION_INSTALLED ] || [ $VERSION_INSTALLED = "0*" ]; then - echo "Unable to determine installed openebs version" - print_usage -elif test `echo $VERSION_INSTALLED | grep -c 0.6.` -eq 0; then - echo "Upgrade is supported only from 0.6.0" - exit 1 -fi - - -echo -kubectl get sp default 2>/dev/null -rc=$? -if [ $rc -eq 0 ]; then - POOL_PATH=`kubectl get sp default -o jsonpath='{.spec.path}'` - if [ $POOL_PATH = "/var/openebs" ]; then - echo "Found Jiva StoragePool named 'default' with path as /var/openebs" - else - echo "Found Jiva StoragePool named 'default' with cutomized path" - echo " After completing upgrade process, you will need to re-apply your StoragePool" - echo " or consider renaming the pool." 
- exit 1 - fi -else - echo "Jiva StoragePool named 'default' was not found" -fi - -echo -OLDER_PVS=`kubectl get pods --all-namespaces -l openebs/controller | wc -l` -if [ -z $OLDER_PVS ] || [ $OLDER_PVS -lt 2 ]; then - echo "There are no PVs that need to be upgraded to 0.8.1" -else - echo "Found PVs that need to be upgraded to 0.8.1" -fi - -echo -exit 0 - - diff --git a/k8s/upgrades/jiva-0.6.0-0.8.1/replica-patch-remove-labels.json b/k8s/upgrades/jiva-0.6.0-0.8.1/replica-patch-remove-labels.json deleted file mode 100644 index 77b367d9d2..0000000000 --- a/k8s/upgrades/jiva-0.6.0-0.8.1/replica-patch-remove-labels.json +++ /dev/null @@ -1,8 +0,0 @@ -[ - { "op": "remove", "path": "/metadata/labels/openebs~1replica" }, - { "op": "remove", "path": "/metadata/labels/vsm" }, - { "op": "remove", "path": "/spec/selector/matchLabels/openebs~1replica" }, - { "op": "remove", "path": "/spec/selector/matchLabels/vsm" }, - { "op": "remove", "path": "/spec/template/metadata/labels/openebs~1replica" }, - { "op": "remove", "path": "/spec/template/metadata/labels/vsm" } -] diff --git a/k8s/upgrades/jiva-0.6.0-0.8.1/sc.patch.tpl.yaml b/k8s/upgrades/jiva-0.6.0-0.8.1/sc.patch.tpl.yaml deleted file mode 100644 index 853767082b..0000000000 --- a/k8s/upgrades/jiva-0.6.0-0.8.1/sc.patch.tpl.yaml +++ /dev/null @@ -1,14 +0,0 @@ -metadata: - labels: - openebs.io/cas-type: jiva - annotations: - openebs.io/cas-type: jiva - cas.openebs.io/config: | - - name: VolumeMonitor - enabled: "@volume-monitoring" - - name: ReplicaCount - value: "@jiva-replica-count" - - name: StoragePool - value: "@storage-pool" - - name: FSType - value: "@fstype" diff --git a/k8s/upgrades/jiva-0.6.0-0.8.1/target-patch-remove-labels.json b/k8s/upgrades/jiva-0.6.0-0.8.1/target-patch-remove-labels.json deleted file mode 100644 index e563876853..0000000000 --- a/k8s/upgrades/jiva-0.6.0-0.8.1/target-patch-remove-labels.json +++ /dev/null @@ -1,8 +0,0 @@ -[ - { "op": "remove", "path": "/metadata/labels/openebs~1controller" }, - { "op": "remove", "path": "/metadata/labels/vsm" }, - { "op": "remove", "path": "/spec/selector/matchLabels/openebs~1controller" }, - { "op": "remove", "path": "/spec/selector/matchLabels/vsm" }, - { "op": "remove", "path": "/spec/template/metadata/labels/openebs~1controller" }, - { "op": "remove", "path": "/spec/template/metadata/labels/vsm" } -] diff --git a/k8s/upgrades/jiva-0.6.0-0.8.1/target-svc-patch-remove-labels.json b/k8s/upgrades/jiva-0.6.0-0.8.1/target-svc-patch-remove-labels.json deleted file mode 100644 index 04dfbf663f..0000000000 --- a/k8s/upgrades/jiva-0.6.0-0.8.1/target-svc-patch-remove-labels.json +++ /dev/null @@ -1,4 +0,0 @@ -[ - { "op": "remove", "path": "/spec/selector/openebs~1controller" }, - { "op": "remove", "path": "/spec/selector/vsm" } -] diff --git a/k8s/upgrades/jiva-0.6.0-0.8.1/upgrade_sc.sh b/k8s/upgrades/jiva-0.6.0-0.8.1/upgrade_sc.sh deleted file mode 100755 index f97fe49c94..0000000000 --- a/k8s/upgrades/jiva-0.6.0-0.8.1/upgrade_sc.sh +++ /dev/null @@ -1,69 +0,0 @@ -#!/usr/bin/env bash - -############################################################################### -# STEP: Get Storage Classes # -############################################################################### - -# Get the list of storageclasses, delimited by ':' -sc_list=`kubectl get sc \ - -o jsonpath="{range .items[*]}{@.metadata.name}:{end}"` -rc=$?; -if [ $rc -ne 0 ]; -then - echo "ERROR: $rc"; - echo "Please ensure `kubectl` is installed and can access your cluster."; - exit; -fi - -echo "Check if openebs storage class 
parameters are moved to config annotation" -for sc in `echo $sc_list | tr ":" " "`; do - pt="";pt=`kubectl get sc $sc -o jsonpath="{.provisioner}"` - if [ "openebs.io/provisioner-iscsi" == "$pt" ]; - then - uc="";uc=`kubectl get sc $sc -o jsonpath="{.metadata.labels.openebs\.io/cas-type}"` - if [ ! -z $uc ]; then - echo "SC $sc already upgraded"; - continue - fi - - echo "Upgrading SC $sc"; - - replicas=`kubectl get sc $sc -o jsonpath="{.parameters.openebs\.io/jiva-replica-count}"` - pool=`kubectl get sc $sc -o jsonpath="{.parameters.openebs\.io/storage-pool}"` - monitoring=`kubectl get sc $sc -o jsonpath="{.parameters.openebs\.io/volume-monitor}"` - fstype=`kubectl get sc $sc -o jsonpath="{.parameters.openebs\.io/fstype}"` - - if [ -z $replicas ]; then replicas="3"; fi - sed "s/@jiva-replica-count[^ \"]*/$replicas/g" sc.patch.tpl.yaml > sc.patch.tpl.yaml.0 - - if [ -z $pool ]; then pool="default"; fi - sed "s/@storage-pool[^ \"]*/$pool/g" sc.patch.tpl.yaml.0 > sc.patch.tpl.yaml.1 - - if [ -z $monitoring ]; then monitoring="true"; fi - sed "s/@volume-monitor[^ \"]*/$monitoring/g" sc.patch.tpl.yaml.1 > sc.patch.tpl.yaml.2 - - if [ -z $fstype ]; then fstype="ext4"; fi - sed "s/@fstype[^ \"]*/$fstype/g" sc.patch.tpl.yaml.2 > sc.patch.yaml - - echo " openebs.io/jiva-replica-count -> ReplicaCount : $replicas" - echo " openebs.io/storage-pool -> StoragePool : $pool" - echo " openebs.io/volume-monitor -> VolumeMonitor : $monitoring" - echo " openebs.io/fstype -> FSType : $fstype" - - kubectl patch sc $sc -p "$(cat sc.patch.yaml)" - rc=$?; if [ $rc -ne 0 ]; then echo "ERROR: $rc"; exit; fi - - rm -rf sc.patch.tpl.yaml.0 - rm -rf sc.patch.tpl.yaml.1 - rm -rf sc.patch.tpl.yaml.2 - rm -rf sc.patch.yaml - - #TODO - # Check if SC has other parameters and warn the user about patching them manually. - # or contact openebs dev. - - echo "Successfully upgraded $sc to 0.8.1" - fi -done - - diff --git a/k8s/vagrant/1.10.0/centos/Vagrantfile b/k8s/vagrant/1.10.0/centos/Vagrantfile deleted file mode 100644 index 13a8926296..0000000000 --- a/k8s/vagrant/1.10.0/centos/Vagrantfile +++ /dev/null @@ -1,295 +0,0 @@ -# -*- mode: ruby -*- -# vi: set ft=ruby : - -# This is parameterized Vagrantfile, that can used for any of the following: -# - Launch VMs auto-configured with kubernetes cluster with dedicated openebs -# - Launch VMs auto-configured with kubernetes cluster with hyperconverged openebs -# - Launch VMs for manual installation of kubernetes or maya clusters or both -# -# The configurable options include: -# - Specify the number of VMs / node types -# - Specify the CPU/RAM for each type of node -# - Specify the desired kubernetes cluster installation option - kubeadm, kargo, manual -# - Specify the base operating system - Ubuntu, CentOS, etc., -# - Specify the kubernetes pod network - flannel, weave, calico, etc,. -# - In case of dedicated, specify the storage network - host, etc., - -# Specify the OpenEBS Deployment Mode - dedicated=1 (default) or hyperconverged=2 -DEPLOY_MODE_NONE = 0 -DEPLOY_MODE_DEDICATED = 1 -DEPLOY_MODE_HC = 2 - -distro=ENV['DISTRIBUTION'] || "ubuntu" -# Changed DEPLOY_MODE from Constant to Local variable deploy_Mode to avoid rewrite warnings on constants. 
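# For example (a sketch; the two environment variables are read here, and the
# deploy modes are listed in the usage text further down):
#   OPENEBS_DEPLOY_MODE=1 DISTRIBUTION=centos vagrant up   # dedicated mode
#   vagrant up                                             # defaults to hyperconverged (2)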
-deploy_Mode=ENV['OPENEBS_DEPLOY_MODE'] || 2 - -# Specify the release versions to be installed -MAYA_RELEASE_TAG = ENV['MAYA_RELEASE_TAG'] || "0.2" - -# TODO - Verify -# LC_ALL is not set on OsX, it will inherit it from SSH on Linux though, -# so might want it to be a conditional -ENV['LC_ALL']="en_US.UTF-8" - -# Specify the number of Kubernetes Master nodes, CPU/RAM per node. -KM_NODES = ENV['KM_NODES'] || 1 -KM_MEM = ENV['KM_MEM'] || 2048 -KM_CPUS = ENV['KM_CPUS'] || 2 - -# Specify the number of Kubernetes Minion nodes, CPU/RAM per node. -KH_NODES = ENV['KH_NODES'] || 2 -KH_MEM = ENV['KH_MEM'] || 2048 -KH_CPUS = ENV['KH_CPUS'] || 2 - -# Specify the number of OpenEBS Master nodes, CPU/RAM per node. (Applicable only for dedicated mode) -MM_NODES = ENV['MM_NODES'] || 1 -MM_MEM = ENV['MM_MEM'] || 1024 -MM_CPUS = ENV['MM_CPUS'] || 1 - -# Specify the number of OpenEBS Storage Host nodes, CPU/RAM per node. (Applicable only for dedicated mode) -MH_NODES = ENV['MH_NODES'] || 2 -MH_MEM = ENV['MH_MEM'] || 1024 -MH_CPUS = ENV['MH_CPUS'] || 1 - -# Don't touch below this line, unless you know what you're doing! - -# Vagrantfile API/syntax version. -VAGRANTFILE_API_VERSION = "2" -Vagrant.require_version ">= 1.9.1" - -# Local Variables -machine_ip_address = %Q(ip addr show | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1) - -master_ip_address = "" -host_ip_address = "" -get_token_name = "" -token_name = "" -get_token = "" -token = "" - -required_plugins = %w(vagrant-cachier vagrant-triggers) - -required_plugins.each do |plugin| - need_restart = false - unless Vagrant.has_plugin? plugin - system "vagrant plugin install #{plugin}" - need_restart = true - end - exec "vagrant #{ARGV.join(' ')}" if need_restart -end - - -def configureVM(vmCfg, hostname, cpus, mem, distro) - - # Default timeout is 300 sec for boot. - # Uncomment the following line and set the desired timeout value. - # vmCfg.vm.boot_timeout = 300 - vmCfg.vm.hostname = hostname - vmCfg.vm.network "private_network", type: "dhcp" - - # Needed for ubuntu/xenial64 packaged boxes - # The default user is ubuntu for those boxes - vmCfg.ssh.username = "vagrant" - vmCfg.vm.provision "shell", inline: <<-SHELL - echo "vagrant:vagrant" | sudo chpasswd - SHELL - - # Adding Vagrant-cachier - if Vagrant.has_plugin?("vagrant-cachier") - vmCfg.cache.scope = :machine - vmCfg.cache.enable :apt - vmCfg.cache.enable :gem - end - - # Set resources w.r.t Virtualbox provider - vmCfg.vm.provider "virtualbox" do |vb| - # Uncomment the following line, to launch the Virtual Box console. - # Useful for debugging cases, where the VM doesn't allow login into console - # vb.gui = true - vb.memory = mem - vb.cpus = cpus - vb.customize ["modifyvm", :id, "--cableconnected1", "on"] - end - - return vmCfg -end - -# Entry point of this Vagrantfile -Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| - - # Don't look for updates - if Vagrant.has_plugin?("vagrant-vbguest") then - config.vbguest.auto_update = false - end - - # Check for invalid deployment modes and default to DEDICATED MODE. - if ((deploy_Mode.to_i < DEPLOY_MODE_NONE.to_i) || \ - (deploy_Mode.to_i > DEPLOY_MODE_HC.to_i)) - puts "Invalid value set for OPENEBS_DEPLOY_MODE" - puts "Usage: OPENEBS_DEPLOY_MODE=0 for NONE" - puts "Usage: OPENEBS_DEPLOY_MODE=1 for DEDICATED" - puts "Usage: OPENEBS_DEPLOY_MODE=2 for HYPERCONVERGED" - puts "Defaulting to DEDICATED MODE..." 
- puts "Do you want to continue?(y/n):" - input = STDIN.gets.chomp - while 1 do - if(input == "n") - Kernel.exit!(0) - elsif(input == "y") - break - else - puts "Invalid input: type 'y' or 'n'" - input = STDIN.gets.chomp - end - end - deploy_Mode = 1 - end - - # K8s Master related only !! - 1.upto(KM_NODES.to_i) do |i| - hostname = "kubemaster-%02d" % [i] - cpus = KM_CPUS - mem = KM_MEM - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/k8s-1.10.0-centos" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "kubemaster-%02d-console.log" % [i] )] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem, distro) - - # Run in dedicated deployment mode or hyperconverged mode - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_HC.to_i)) - - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/prepare_network.sh", - run: "always", - privileged: true - - # Install Kubernetes Master. - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_master.sh", - privileged: true - - # Setup K8s Credentials - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_cred.sh", - privileged: false - - # Setup CNI - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_cni.sh", - privileged: false - - # Setup Dashboard - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_dashboard.sh", - privileged: false - - # Fix for Issue : https://github.com/kubernetes/kubernetes/issues/57870 - vmCfg.trigger.before :halt do - @machine.communicate.sudo('grep -q "listen-peer-urls" /etc/kubernetes/manifests/etcd.yaml; if [ $? -ne 0 ]; then sed -i "s/ - --data-dir=\/var\/lib\/etcd/ - --data-dir=\/var\/lib\/etcd\n - --listen-peer-urls=http:\/\/127.0.0.1:2380/g" /etc/kubernetes/manifests/etcd.yaml; fi') - end - - vmCfg.trigger.before :suspend do - @machine.communicate.sudo('grep -q "listen-peer-urls" /etc/kubernetes/manifests/etcd.yaml; if [ $? -ne 0 ]; then sed -i "s/ - --data-dir=\/var\/lib\/etcd/ - --data-dir=\/var\/lib\/etcd\n - --listen-peer-urls=http:\/\/127.0.0.1:2380/g" /etc/kubernetes/manifests/etcd.yaml; fi') - end - end - end - end - - # K8s Minions related only !! - 1.upto(KH_NODES.to_i) do |i| - hostname = "kubeminion-%02d" % [i] - cpus = KH_CPUS - mem = KH_MEM - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/k8s-1.10.0-centos" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "kubeminion-%02d-console.log" % [i] )] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem, distro) - - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/prepare_network.sh", - run: "always", - privileged: true - - # Run in dedicated deployment mode or hyperconverged mode - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_HC.to_i)) - - # This runs only when the VM is first provisioned. - # Get the Master IP to join the cluster. - vmCfg.vm.provision :trigger, - :force => true, - :stdout => true, - :stderr => true do |trigger| - trigger.fire do - info"Getting the Master IP and Token to join the cluster..." 
- master_hostname = "kubemaster-01" - - get_ip_address = %Q(vagrant ssh \ - #{master_hostname} \ - -c 'ip addr show | grep -oP \ - "inet \\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP \"\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1') - - master_ip_address = `#{get_ip_address}` - if master_ip_address == "" - info"The Kubernetes Master is down, \ - bring it up and manually run: \ - configure_k8s_host.sh script on Kubernetes Minion." - else - get_token_name = %Q(vagrant ssh \ - #{master_hostname} -c \ - 'kubectl -n kube-system get secrets \ - | grep bootstrap-token | cut -d " " -f1\ - | cut -d "-" -f3\ - | sed "s|\r||g" \ - ') - - token_name = `#{get_token_name}` - - get_token = %Q(vagrant ssh \ - #{master_hostname} -c \ - 'kubectl -n kube-system get secrets bootstrap-token-#{token_name.strip} \ - -o yaml | grep token-secret | cut -d ":" -f2 \ - | cut -d " " -f2 | base64 -d \ - | sed "s|{||g;s|}||g" \ - | sed "s|:|.|g" | xargs echo') - - token = `#{get_token}` - - if token != "" - get_token_sha = %Q(vagrant ssh \ - #{master_hostname} \ - -c 'openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \ - | openssl rsa -pubin -outform der 2>/dev/null \ - | openssl dgst -sha256 -hex \ - | sed "s/^.* //"') - - token_sha = `#{get_token_sha}` - info"Using Master IP - #{master_ip_address.strip}" - info"Using Token - #{token_name.strip}.#{token.strip}" - info"Using Discovery Token SHA - #{token_sha.strip}" - - @machine.communicate.sudo("bash \ - /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_host.sh \ - --masterip=#{master_ip_address.strip} \ - --token=#{token_name.strip}.#{token.strip} \ - --token-sha=#{token_sha.strip}") - else - info"Invalid Token. Check your Kubernetes setup." - end - end - end - end - end - end - end -end diff --git a/k8s/vagrant/1.10.0/ubuntu/Vagrantfile b/k8s/vagrant/1.10.0/ubuntu/Vagrantfile deleted file mode 100644 index 0aeb3d8dcc..0000000000 --- a/k8s/vagrant/1.10.0/ubuntu/Vagrantfile +++ /dev/null @@ -1,286 +0,0 @@ -# -*- mode: ruby -*- -# vi: set ft=ruby : - -# This is parameterized Vagrantfile, that can used for any of the following: -# - Launch VMs auto-configured with kubernetes cluster with dedicated openebs -# - Launch VMs auto-configured with kubernetes cluster with hyperconverged openebs -# - Launch VMs for manual installation of kubernetes or maya clusters or both -# -# The configurable options include: -# - Specify the number of VMs / node types -# - Specify the CPU/RAM for each type of node -# - Specify the desired kubernetes cluster installation option - kubeadm, kargo, manual -# - Specify the base operating system - Ubuntu, CentOS, etc., -# - Specify the kubernetes pod network - flannel, weave, calico, etc,. -# - In case of dedicated, specify the storage network - host, etc., - -# Specify the OpenEBS Deployment Mode - dedicated=1 (default) or hyperconverged=2 -DEPLOY_MODE_NONE = 0 -DEPLOY_MODE_DEDICATED = 1 -DEPLOY_MODE_HC = 2 - -distro=ENV['DISTRIBUTION'] || "ubuntu" -# Changed DEPLOY_MODE from Constant to Local variable deploy_Mode to avoid rewrite warnings on constants. -deploy_Mode=ENV['OPENEBS_DEPLOY_MODE'] || 2 - -# Specify the release versions to be installed -MAYA_RELEASE_TAG = ENV['MAYA_RELEASE_TAG'] || "0.2" - -# TODO - Verify -# LC_ALL is not set on OsX, it will inherit it from SSH on Linux though, -# so might want it to be a conditional -ENV['LC_ALL']="en_US.UTF-8" - -# Specify the number of Kubernetes Master nodes, CPU/RAM per node. 
-KM_NODES = ENV['KM_NODES'] || 1 -KM_MEM = ENV['KM_MEM'] || 2048 -KM_CPUS = ENV['KM_CPUS'] || 2 - -# Specify the number of Kubernetes Minion nodes, CPU/RAM per node. -KH_NODES = ENV['KH_NODES'] || 3 -KH_MEM = ENV['KH_MEM'] || 2048 -KH_CPUS = ENV['KH_CPUS'] || 2 - -# Specify the number of OpenEBS Master nodes, CPU/RAM per node. (Applicable only for dedicated mode) -MM_NODES = ENV['MM_NODES'] || 1 -MM_MEM = ENV['MM_MEM'] || 1024 -MM_CPUS = ENV['MM_CPUS'] || 1 - -# Specify the number of OpenEBS Storage Host nodes, CPU/RAM per node. (Applicable only for dedicated mode) -MH_NODES = ENV['MH_NODES'] || 2 -MH_MEM = ENV['MH_MEM'] || 1024 -MH_CPUS = ENV['MH_CPUS'] || 1 - -# Don't touch below this line, unless you know what you're doing! - -# Vagrantfile API/syntax version. -VAGRANTFILE_API_VERSION = "2" -Vagrant.require_version ">= 1.9.1" - -# Local Variables -machine_ip_address = %Q(ip addr show | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1) - -master_ip_address = "" -host_ip_address = "" -get_token_name = "" -token_name = "" -get_token = "" -token = "" - -required_plugins = %w(vagrant-cachier vagrant-triggers) - -required_plugins.each do |plugin| - need_restart = false - unless Vagrant.has_plugin? plugin - system "vagrant plugin install #{plugin}" - need_restart = true - end - exec "vagrant #{ARGV.join(' ')}" if need_restart -end - - -def configureVM(vmCfg, hostname, cpus, mem, distro) - - # Default timeout is 300 sec for boot. - # Uncomment the following line and set the desired timeout value. - # vmCfg.vm.boot_timeout = 300 - vmCfg.vm.hostname = hostname - vmCfg.vm.network "private_network", type: "dhcp" - - # Needed for ubuntu/xenial64 packaged boxes - # The default user is ubuntu for those boxes - vmCfg.ssh.username = "ubuntu" - vmCfg.vm.provision "shell", inline: <<-SHELL - echo "ubuntu:ubuntu" | sudo chpasswd - SHELL - - # Adding Vagrant-cachier - if Vagrant.has_plugin?("vagrant-cachier") - vmCfg.cache.scope = :machine - vmCfg.cache.enable :apt - vmCfg.cache.enable :gem - end - - # Set resources w.r.t Virtualbox provider - vmCfg.vm.provider "virtualbox" do |vb| - # Uncomment the following line, to launch the Virtual Box console. - # Useful for debugging cases, where the VM doesn't allow login into console - # vb.gui = true - vb.memory = mem - vb.cpus = cpus - vb.customize ["modifyvm", :id, "--cableconnected1", "on"] - end - - return vmCfg -end - -# Entry point of this Vagrantfile -Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| - - # Don't look for updates - if Vagrant.has_plugin?("vagrant-vbguest") then - config.vbguest.auto_update = false - end - - # Check for invalid deployment modes and default to DEDICATED MODE. - if ((deploy_Mode.to_i < DEPLOY_MODE_NONE.to_i) || \ - (deploy_Mode.to_i > DEPLOY_MODE_HC.to_i)) - puts "Invalid value set for OPENEBS_DEPLOY_MODE" - puts "Usage: OPENEBS_DEPLOY_MODE=0 for NONE" - puts "Usage: OPENEBS_DEPLOY_MODE=1 for DEDICATED" - puts "Usage: OPENEBS_DEPLOY_MODE=2 for HYPERCONVERGED" - puts "Defaulting to DEDICATED MODE..." - puts "Do you want to continue?(y/n):" - input = STDIN.gets.chomp - while 1 do - if (input == "n") - Kernel.exit!(0) - elsif (input == "y") - break - else - puts "Invalid input: type 'y' or 'n'" - input = STDIN.gets.chomp - end - end - deploy_Mode = 1 - end - - # K8s Master related only !! 
- 1.upto(KM_NODES.to_i) do |i| - hostname = "kubemaster-%02d" % [i] - cpus = KM_CPUS - mem = KM_MEM - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/k8s-1.10.0-ubuntu" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "kubemaster-%02d-console.log" % [i] )] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem, distro) - - # Run in dedicated deployment mode or hyperconverged mode - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_HC.to_i)) - - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/prepare_network.sh", - run: "always", - privileged: true - - # Install Kubernetes Master. - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_master.sh", - privileged: true - - # Setup K8s Credentials - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_cred.sh", - privileged: false - - # Setup Weave - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_cni.sh", - privileged: false - - # Setup Dashboard - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_dashboard.sh", - privileged: false - end - end - end - - # K8s Minions related only !! - 1.upto(KH_NODES.to_i) do |i| - hostname = "kubeminion-%02d" % [i] - cpus = KH_CPUS - mem = KH_MEM - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/k8s-1.10.0-ubuntu" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "kubeminion-%02d-console.log" % [i] )] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem, distro) - - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/prepare_network.sh", - run: "always", - privileged: true - - # Run in dedicated deployment mode or hyperconverged mode - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_HC.to_i)) - - # This runs only when the VM is first provisioned. - # Get the Master IP to join the cluster. - vmCfg.vm.provision :trigger, - :force => true, - :stdout => true, - :stderr => true do |trigger| - trigger.fire do - info"Getting the Master IP and Token to join the cluster..." - master_hostname = "kubemaster-01" - - get_ip_address = %Q(vagrant ssh \ - #{master_hostname} \ - -c 'ip addr show | grep -oP \ - "inet \\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP \"\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1') - - master_ip_address = `#{get_ip_address}` - if master_ip_address == "" - info"The Kubernetes Master is down, \ - bring it up and manually run: \ - configure_k8s_host.sh script on Kubernetes Minion." 
- else - get_token_name = %Q(vagrant ssh \ - #{master_hostname} -c \ - 'kubectl -n kube-system get secrets \ - | grep bootstrap-token | cut -d " " -f1\ - | cut -d "-" -f3\ - | sed "s|\r||g" \ - ') - - token_name = `#{get_token_name}` - - get_token = %Q(vagrant ssh \ - #{master_hostname} -c \ - 'kubectl -n kube-system get secrets bootstrap-token-#{token_name.strip} \ - -o yaml | grep token-secret | cut -d ":" -f2 \ - | cut -d " " -f2 | base64 -d \ - | sed "s|{||g;s|}||g" \ - | sed "s|:|.|g" | xargs echo') - - token = `#{get_token}` - - if token != "" - get_token_sha = %Q(vagrant ssh \ - #{master_hostname} \ - -c 'openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \ - | openssl rsa -pubin -outform der 2>/dev/null \ - | openssl dgst -sha256 -hex \ - | sed "s/^.* //"') - - token_sha = `#{get_token_sha}` - info"Using Master IP - #{master_ip_address.strip}" - info"Using Token - #{token_name.strip}.#{token.strip}" - info"Using Discovery Token SHA - #{token_sha.strip}" - - @machine.communicate.sudo("bash \ - /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_host.sh \ - --masterip=#{master_ip_address.strip} \ - --token=#{token_name.strip}.#{token.strip} \ - --token-sha=#{token_sha.strip}") - else - info"Invalid Token. Check your Kubernetes setup." - end - end - end - end - end - end - end -end diff --git a/k8s/vagrant/1.6/Vagrantfile b/k8s/vagrant/1.6/Vagrantfile deleted file mode 100644 index f3962df681..0000000000 --- a/k8s/vagrant/1.6/Vagrantfile +++ /dev/null @@ -1,405 +0,0 @@ -# -*- mode: ruby -*- -# vi: set ft=ruby : - -# This is parameterized Vagrantfile, that can used for any of the following: -# - Launch VMs auto-configured with kubernetes cluster with dedicated openebs -# - Launch VMs auto-configured with kubernetes cluster with hyperconverged openebs -# - Launch VMs for manual installation of kubernetes or maya clusters or both -# -# The configurable options include: -# - Specify the number of VMs / node types -# - Specify the CPU/RAM for each type of node -# - Specify the desired kubernetes cluster installation option - kubeadm, kargo, manual -# - Specify the base operating system - Ubuntu, CentOS, etc., -# - Specify the kubernetes pod network - flannel, weave, calico, etc,. -# - In case of dedicated, specify the storage network - host, etc., - -# Specify the OpenEBS Deployment Mode - dedicated=1 (default) or hyperconverged=2 -DEPLOY_MODE_NONE = 0 -DEPLOY_MODE_DEDICATED = 1 -DEPLOY_MODE_HC = 2 - -# Changed DEPLOY_MODE from Constant to Local variable deploy_Mode to avoid rewrite warnings on constants. -deploy_Mode=ENV['OPENEBS_DEPLOY_MODE'] || 2 - -# Specify the release versions to be installed -MAYA_RELEASE_TAG = ENV['MAYA_RELEASE_TAG'] || "0.2" - -# TODO - Verify -# LC_ALL is not set on OsX, it will inherit it from SSH on Linux though, -# so might want it to be a conditional -ENV['LC_ALL']="en_US.UTF-8" - -# Specify the number of Kubernetes Master nodes, CPU/RAM per node. -KM_NODES = ENV['KM_NODES'] || 1 -KM_MEM = ENV['KM_MEM'] || 2048 -KM_CPUS = ENV['KM_CPUS'] || 2 - -# Specify the number of Kubernetes Minion nodes, CPU/RAM per node. -KH_NODES = ENV['KH_NODES'] || 2 -KH_MEM = ENV['KH_MEM'] || 1024 -KH_CPUS = ENV['KH_CPUS'] || 2 - -# Specify the number of OpenEBS Master nodes, CPU/RAM per node. (Applicable only for dedicated mode) -MM_NODES = ENV['MM_NODES'] || 1 -MM_MEM = ENV['MM_MEM'] || 1024 -MM_CPUS = ENV['MM_CPUS'] || 1 - -# Specify the number of OpenEBS Storage Host nodes, CPU/RAM per node. 
(Applicable only for dedicated mode) -MH_NODES = ENV['MH_NODES'] || 2 -MH_MEM = ENV['MH_MEM'] || 1024 -MH_CPUS = ENV['MH_CPUS'] || 1 - -# Don't touch below this line, unless you know what you're doing! - -# Vagrantfile API/syntax version. -VAGRANTFILE_API_VERSION = "2" -Vagrant.require_version ">= 1.9.1" - -# Local Variables -machine_ip_address = %Q(ifconfig | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1) -master_ip_address = "" -host_ip_address = "" -get_token_name = "" -token_name = "" -get_token = "" -token = "" - -required_plugins = %w(vagrant-cachier vagrant-triggers) - -required_plugins.each do |plugin| - need_restart = false - unless Vagrant.has_plugin? plugin - system "vagrant plugin install #{plugin}" - need_restart = true - end - exec "vagrant #{ARGV.join(' ')}" if need_restart -end - - -def configureVM(vmCfg, hostname, cpus, mem) - - # Default timeout is 300 sec for boot. - # Uncomment the following line and set the desired timeout value. - # vmCfg.vm.boot_timeout = 300 - - vmCfg.vm.hostname = hostname - vmCfg.vm.network "private_network", type: "dhcp" - - # Needed for ubuntu/xenial64 packaged boxes - # The default user is ubuntu for those boxes - vmCfg.ssh.username = "ubuntu" - vmCfg.vm.provision "shell", inline: <<-SHELL - echo "ubuntu:ubuntu" | sudo chpasswd - SHELL - - # Adding Vagrant-cachier - if Vagrant.has_plugin?("vagrant-cachier") - vmCfg.cache.scope = :machine - vmCfg.cache.enable :apt - vmCfg.cache.enable :gem - end - - # Set resources w.r.t Virtualbox provider - vmCfg.vm.provider "virtualbox" do |vb| - # Uncomment the following line, to launch the Virtual Box console. - # Useful for debugging cases, where the VM doesn't allow login into console - # vb.gui = true - vb.memory = mem - vb.cpus = cpus - vb.customize ["modifyvm", :id, "--cableconnected1", "on"] - end - - return vmCfg -end - -# Entry point of this Vagrantfile -Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| - - # Don't look for updates - if Vagrant.has_plugin?("vagrant-vbguest") then - config.vbguest.auto_update = false - end - - # Check for invalid deployment modes and default to DEDICATED MODE. - if ((deploy_Mode.to_i < DEPLOY_MODE_NONE.to_i) || \ - (deploy_Mode.to_i > DEPLOY_MODE_HC.to_i)) - puts "Invalid value set for OPENEBS_DEPLOY_MODE" - puts "Usage: OPENEBS_DEPLOY_MODE=0 for NONE" - puts "Usage: OPENEBS_DEPLOY_MODE=1 for DEDICATED" - puts "Usage: OPENEBS_DEPLOY_MODE=2 for HYPERCONVERGED" - puts "Defaulting to DEDICATED MODE..." - puts "Do you want to continue?(y/n):" - input = STDIN.gets.chomp - while 1 do - if(input == "n") - Kernel.exit!(0) - elsif(input == "y") - break - else - puts "Invalid input: type 'y' or 'n'" - input = STDIN.gets.chomp - end - end - deploy_Mode = 1 - end - - # K8s Master related only !! - 1.upto(KM_NODES.to_i) do |i| - hostname = "kubemaster-%02d" % [i] - cpus = KM_CPUS - mem = KM_MEM - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/k8s-1.6" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "kubemaster-%02d-console.log" % [i] )] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem) - - # Run in dedicated deployment mode or hyperconverged mode - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_HC.to_i)) - - # Install Kubernetes Master. 
- vmCfg.vm.provision :shell, - inline: "/bin/bash /home/ubuntu/setup/k8s/configure_k8s_master.sh", - privileged: true - - # Setup K8s Credentials - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/ubuntu/setup/k8s/configure_k8s_cred.sh", - privileged: false - - # Setup Weave - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/ubuntu/setup/k8s/configure_k8s_weave.sh", - privileged: false - end - end - end - - # K8s Minions related only !! - 1.upto(KH_NODES.to_i) do |i| - hostname = "kubeminion-%02d" % [i] - cpus = KH_CPUS - mem = KH_MEM - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/k8s-1.6" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "kubeminion-%02d-console.log" % [i] )] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem) - - # Run in dedicated deployment mode or hyperconverged mode - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_HC.to_i)) - - # This runs only when the VM is first provisioned. - # Get the Master IP to join the cluster. - vmCfg.vm.provision :trigger, - :force => true, - :stdout => true, - :stderr => true do |trigger| - trigger.fire do - info"Getting the Master IP and Token to join the cluster..." - master_hostname = "kubemaster-01" - - get_ip_address = %Q(vagrant ssh \ - #{master_hostname} \ - -c 'ifconfig | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP \"\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1') - - master_ip_address = `#{get_ip_address}` - if master_ip_address == "" - info"The Kubernetes Master is down, \ - bring it up and manually run: \ - configure_k8s_host.sh script on Kubernetes Minion." - else - get_token_name = %Q(vagrant ssh \ - #{master_hostname} -c \ - 'kubectl -n kube-system get secrets \ - | grep bootstrap-token | cut -d " " -f1\ - | cut -d "-" -f3\ - | sed "s|\r||g" \ - ') - - token_name = `#{get_token_name}` - - get_token = %Q(vagrant ssh \ - #{master_hostname} -c \ - 'kubectl -n kube-system get secrets bootstrap-token-#{token_name.strip} \ - -o yaml | grep token-secret | cut -d ":" -f2 \ - | cut -d " " -f2 | base64 -d \ - | sed "s|{||g;s|}||g" \ - | sed "s|:|.|g" | xargs echo') - - token = `#{get_token}` - - if token != "" - get_cluster_ip = %Q(vagrant ssh \ - #{master_hostname} \ - -c 'kubectl get svc -o yaml | grep clusterIP \ - | cut -d ":" -f2 | cut -d " " -f2') - - cluster_ip = `#{get_cluster_ip}` - info"Using Master IP - #{master_ip_address.strip}" - info"Using Token - #{token_name.strip}.#{token.strip}" - - @machine.communicate.sudo("bash \ - /home/ubuntu/setup/k8s/configure_k8s_host.sh \ - --masterip=#{master_ip_address.strip} \ - --token=#{token_name.strip}.#{token.strip} \ - --clusterip=#{cluster_ip.strip}") - - @machine.communicate.sudo("sudo systemctl daemon-reload") - @machine.communicate.sudo("sudo systemctl restart kubelet") - else - info"Invalid Token. Check your Kubernetes setup." - end - end - end - end - end - end - end - - # Maya Master related only !! 
- 1.upto(MM_NODES.to_i) do |i| - hostname = "omm-%02d" % [i] - cpus = MM_CPUS - mem = MM_MEM - - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_NONE.to_i)) - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/openebs-0.2" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "openebs-openebs-0.2-cloudimg-console.log")] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem) - - # Run in dedicated deployment mode - if deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i - - # Install OpenEBS Maya Master - if MAYA_RELEASE_TAG == "" - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/ubuntu/demo/maya/scripts/configure_omm.sh", - privileged: true - else - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/ubuntu/demo/maya/scripts/configure_omm.sh", - :args => "--releasetag=#{MAYA_RELEASE_TAG}", - privileged: true - end - - vmCfg.vm.provision :trigger, - :force => true, - :stdout => true, - :stderr => true do |trigger| - trigger.fire do - - get_ip_address = %Q(vagrant ssh \ - #{hostname} -c 'ifconfig | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1') - - host_ip_address = `#{get_ip_address}` - - @machine.communicate.sudo("echo \ - 'export NOMAD_ADDR=http://#{host_ip_address.strip}:4646' >> \ - /home/ubuntu/.profile") - @machine.communicate.sudo("echo \ - 'export MAPI_ADDR=http://#{host_ip_address.strip}:5656' >> \ - /home/ubuntu/.profile") - end - end - end - end - end - end - - # Maya Host related only !! - 1.upto(MH_NODES.to_i) do |i| - hostname = "osh-%02d" % [i] - cpus = MH_CPUS - mem = MH_MEM - - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_NONE.to_i)) - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/openebs-0.2" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "openebs-openebs-0.2-cloudimg-console.log")] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem) - - # Run in dedicated deployment mode - if deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i - - # Get the Master IP to join the cluster. - vmCfg.vm.provision :trigger, - :force => true, - :stdout => true, - :stderr => true do |trigger| - trigger.fire do - info"Getting the Master IP to join the cluster..." - master_hostname = "omm-01" - - get_ip_address = %Q(vagrant ssh \ - #{master_hostname} -c 'ifconfig | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1') - - master_ip_address = `#{get_ip_address}` - - if master_ip_address == "" - info"The OpenEBS Maya Master is down, \ - bring it up and manually run: \ - configure_osh.sh script on OpenEBS Storage Host." 
- else - get_ip_address = %Q(vagrant ssh \ - #{hostname} -c 'ifconfig | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1') - - host_ip_address = `#{get_ip_address}` - - if MAYA_RELEASE_TAG == "" - @machine.communicate.sudo("bash \ - /home/ubuntu/demo/maya/scripts/configure_osh.sh \ - --masterip=#{master_ip_address.strip}") - else - @machine.communicate.sudo("bash \ - /home/ubuntu/demo/maya/scripts/configure_osh.sh \ - --masterip=#{master_ip_address.strip} \ - --releasetag=#{MAYA_RELEASE_TAG}") - end - - @machine.communicate.sudo("echo \ - 'export NOMAD_ADDR=http://#{host_ip_address.strip}:4646' >> \ - /home/ubuntu/.profile") - @machine.communicate.sudo("echo \ - 'export MAPI_ADDR=http://#{host_ip_address.strip}:5656' >> \ - /home/ubuntu/.profile") - - info"Fetching the latest jiva image" - - @machine.communicate.sudo("docker pull \ - openebs/jiva") - end - end - end - end - end - end - end -end diff --git a/k8s/vagrant/1.7.5/Vagrantfile b/k8s/vagrant/1.7.5/Vagrantfile deleted file mode 100644 index 42f3679b5b..0000000000 --- a/k8s/vagrant/1.7.5/Vagrantfile +++ /dev/null @@ -1,412 +0,0 @@ -# -*- mode: ruby -*- -# vi: set ft=ruby : - -# This is parameterized Vagrantfile, that can used for any of the following: -#- Launch VMs auto-configured with kubernetes cluster with dedicated openebs -#- Launch VMs auto-configured with kubernetes cluster with hyperconverged openebs -#- Launch VMs for manual installation of kubernetes or maya clusters or both -# -# The configurable options include: -#- Specify the number of VMs / node types -#- Specify the CPU/RAM for each type of node -#- Specify the desired kubernetes cluster installation option - kubeadm, kargo, manual -#- Specify the base operating system - Ubuntu, CentOS, etc., -#- Specify the kubernetes pod network - flannel, weave, calico, etc,. -#- In case of dedicated, specify the storage network - host, etc., - -# Specify the OpenEBS Deployment Mode - dedicated=1 (default) or hyperconverged=2 -DEPLOY_MODE_NONE = 0 -DEPLOY_MODE_DEDICATED = 1 -DEPLOY_MODE_HC = 2 - -# Changed DEPLOY_MODE from Constant to Local variable deploy_Mode to avoid rewrite warnings on constants. -deploy_Mode=ENV['OPENEBS_DEPLOY_MODE'] || 2 - -# Specify the release versions to be installed -MAYA_RELEASE_TAG = ENV['MAYA_RELEASE_TAG'] || "0.2" - -# TODO - Verify -# LC_ALL is not set on OsX, it will inherit it from SSH on Linux though, -# so might want it to be a conditional -ENV['LC_ALL']="en_US.UTF-8" - -# Specify the number of Kubernetes Master nodes, CPU/RAM per node. -KM_NODES = ENV['KM_NODES'] || 1 -KM_MEM = ENV['KM_MEM'] || 2048 -KM_CPUS = ENV['KM_CPUS'] || 2 - -# Specify the number of Kubernetes Minion nodes, CPU/RAM per node. -KH_NODES = ENV['KH_NODES'] || 2 -KH_MEM = ENV['KH_MEM'] || 1024 -KH_CPUS = ENV['KH_CPUS'] || 2 - -# Specify the number of OpenEBS Master nodes, CPU/RAM per node. (Applicable only for dedicated mode) -MM_NODES = ENV['MM_NODES'] || 1 -MM_MEM = ENV['MM_MEM'] || 1024 -MM_CPUS = ENV['MM_CPUS'] || 1 - -# Specify the number of OpenEBS Storage Host nodes, CPU/RAM per node. (Applicable only for dedicated mode) -MH_NODES = ENV['MH_NODES'] || 2 -MH_MEM = ENV['MH_MEM'] || 1024 -MH_CPUS = ENV['MH_CPUS'] || 1 - -# Don't touch below this line, unless you know what you're doing! - -# Vagrantfile API/syntax version. 
-VAGRANTFILE_API_VERSION = "2" -Vagrant.require_version ">= 1.9.1" - -# Local Variables -machine_ip_address = %Q(ifconfig | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1) -master_ip_address = "" -host_ip_address = "" -get_token_name = "" -token_name = "" -get_token = "" -token = "" - -required_plugins = %w(vagrant-cachier vagrant-triggers) - -required_plugins.each do |plugin| - need_restart = false - unless Vagrant.has_plugin? plugin - system "vagrant plugin install #{plugin}" - need_restart = true - end - exec "vagrant #{ARGV.join(' ')}" if need_restart -end - - -def configureVM(vmCfg, hostname, cpus, mem) - - # Default timeout is 300 sec for boot. - # Uncomment the following line and set the desired timeout value. - # vmCfg.vm.boot_timeout = 300 - - vmCfg.vm.hostname = hostname - vmCfg.vm.network "private_network", type: "dhcp" - - # Needed for ubuntu/xenial64 packaged boxes - # The default user is ubuntu for those boxes - vmCfg.ssh.username = "ubuntu" - vmCfg.vm.provision "shell", inline: <<-SHELL - echo "ubuntu:ubuntu" | sudo chpasswd - SHELL - - # Adding Vagrant-cachier - if Vagrant.has_plugin?("vagrant-cachier") - vmCfg.cache.scope = :machine - vmCfg.cache.enable :apt - vmCfg.cache.enable :gem - end - - # Set resources w.r.t Virtualbox provider - vmCfg.vm.provider "virtualbox" do |vb| - # Uncomment the following line, to launch the Virtual Box console. - # Useful for debugging cases, where the VM doesn't allow login into console - # vb.gui = true - vb.memory = mem - vb.cpus = cpus - vb.customize ["modifyvm", :id, "--cableconnected1", "on"] - end - - return vmCfg -end - -# Entry point of this Vagrantfile -Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| - - # Don't look for updates - if Vagrant.has_plugin?("vagrant-vbguest") then - config.vbguest.auto_update = false - end - - # Check for invalid deployment modes and default to DEDICATED MODE. - if ((deploy_Mode.to_i < DEPLOY_MODE_NONE.to_i) || \ - (deploy_Mode.to_i > DEPLOY_MODE_HC.to_i)) - puts "Invalid value set for OPENEBS_DEPLOY_MODE" - puts "Usage: OPENEBS_DEPLOY_MODE=0 for NONE" - puts "Usage: OPENEBS_DEPLOY_MODE=1 for DEDICATED" - puts "Usage: OPENEBS_DEPLOY_MODE=2 for HYPERCONVERGED" - puts "Defaulting to DEDICATED MODE..." - puts "Do you want to continue?(y/n):" - input = STDIN.gets.chomp - while 1 do - if(input == "n") - Kernel.exit!(0) - elsif(input == "y") - break - else - puts "Invalid input: type 'y' or 'n'" - input = STDIN.gets.chomp - end - end - deploy_Mode = 1 - end - - # K8s Master related only !! - 1.upto(KM_NODES.to_i) do |i| - hostname = "kubemaster-%02d" % [i] - cpus = KM_CPUS - mem = KM_MEM - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/k8s-1.7.5" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "kubemaster-%02d-console.log" % [i] )] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem) - - # Run in dedicated deployment mode or hyperconverged mode - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_HC.to_i)) - - # Install Kubernetes Master. 
- vmCfg.vm.provision :shell, - inline: "/bin/bash /home/ubuntu/setup/k8s/configure_k8s_master.sh", - privileged: true - - # Setup K8s Credentials - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/ubuntu/setup/k8s/configure_k8s_cred.sh", - privileged: false - - # Setup Weave - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/ubuntu/setup/k8s/configure_k8s_weave.sh", - privileged: false - - # Setup Dashboard - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/ubuntu/setup/k8s/configure_k8s_dashboard.sh", - privileged: false - end - end - end - - # K8s Minions related only !! - 1.upto(KH_NODES.to_i) do |i| - hostname = "kubeminion-%02d" % [i] - cpus = KH_CPUS - mem = KH_MEM - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/k8s-1.7.5" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "kubeminion-%02d-console.log" % [i] )] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem) - - # Run in dedicated deployment mode or hyperconverged mode - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_HC.to_i)) - - # This runs only when the VM is first provisioned. - # Get the Master IP to join the cluster. - vmCfg.vm.provision :trigger, - :force => true, - :stdout => true, - :stderr => true do |trigger| - trigger.fire do - info"Getting the Master IP and Token to join the cluster..." - master_hostname = "kubemaster-01" - - get_ip_address = %Q(vagrant ssh \ - #{master_hostname} \ - -c 'ifconfig | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP \"\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1') - - master_ip_address = `#{get_ip_address}` - - if master_ip_address == "" - info"The Kubernetes Master is down, \ - bring it up and manually run: \ - configure_k8s_host.sh script on Kubernetes Minion." - else - get_token_name = %Q(vagrant ssh \ - #{master_hostname} -c \ - 'kubectl -n kube-system get secrets \ - | grep bootstrap-token | cut -d " " -f1\ - | cut -d "-" -f3\ - | sed "s|\r||g" \ - ') - - token_name = `#{get_token_name}` - - get_token = %Q(vagrant ssh \ - #{master_hostname} -c \ - 'kubectl -n kube-system get secrets bootstrap-token-#{token_name.strip} \ - -o yaml | grep token-secret | cut -d ":" -f2 \ - | cut -d " " -f2 | base64 -d \ - | sed "s|{||g;s|}||g" \ - | sed "s|:|.|g" | xargs echo') - - token = `#{get_token}` - - if token != "" - get_cluster_ip = %Q(vagrant ssh \ - #{master_hostname} \ - -c 'kubectl get svc -o yaml | grep clusterIP \ - | cut -d ":" -f2 | cut -d " " -f2') - - cluster_ip = `#{get_cluster_ip}` - info"Using Master IP - #{master_ip_address.strip}" - info"Using Token - #{token_name.strip}.#{token.strip}" - - @machine.communicate.sudo("bash \ - /home/ubuntu/setup/k8s/configure_k8s_host.sh \ - --masterip=#{master_ip_address.strip} \ - --token=#{token_name.strip}.#{token.strip} \ - --clusterip=#{cluster_ip.strip}") - - info"Fetching the latest jiva image" - @machine.communicate.sudo("docker pull openebs/jiva:0.3-RC2") - else - info"Invalid Token. Check your Kubernetes setup." - end - end - end - end - end - end - end - - # Maya Master related only !! 
- 1.upto(MM_NODES.to_i) do |i| - hostname = "omm-%02d" % [i] - cpus = MM_CPUS - mem = MM_MEM - - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_NONE.to_i)) - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/openebs-0.2" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "openebs-openebs-0.2-cloudimg-console.log")] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem) - - # Run in dedicated deployment mode - if deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i - - # Install OpenEBS Maya Master - if MAYA_RELEASE_TAG == "" - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/ubuntu/demo/maya/scripts/configure_omm.sh", - privileged: true - else - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/ubuntu/demo/maya/scripts/configure_omm.sh", - :args => "--releasetag=#{MAYA_RELEASE_TAG}", - privileged: true - end - - vmCfg.vm.provision :trigger, - :force => true, - :stdout => true, - :stderr => true do |trigger| - trigger.fire do - - get_ip_address = %Q(vagrant ssh \ - #{hostname} -c 'ifconfig | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1') - - host_ip_address = `#{get_ip_address}` - - @machine.communicate.sudo("echo \ - 'export NOMAD_ADDR=http://#{host_ip_address.strip}:4646' >> \ - /home/ubuntu/.profile") - - @machine.communicate.sudo("echo \ - 'export MAPI_ADDR=http://#{host_ip_address.strip}:5656' >> \ - /home/ubuntu/.profile") - end - end - end - end - end - end - - # Maya Host related only !! - 1.upto(MH_NODES.to_i) do |i| - hostname = "osh-%02d" % [i] - cpus = MH_CPUS - mem = MH_MEM - - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_NONE.to_i)) - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/openebs-0.2" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "openebs-openebs-0.2-cloudimg-console.log")] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem) - - # Run in dedicated deployment mode - if deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i - - # Get the Master IP to join the cluster. - vmCfg.vm.provision :trigger, - :force => true, - :stdout => true, - :stderr => true do |trigger| - trigger.fire do - info"Getting the Master IP to join the cluster..." - master_hostname = "omm-01" - - get_ip_address = %Q(vagrant ssh \ - #{master_hostname} -c 'ifconfig | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1') - - master_ip_address = `#{get_ip_address}` - - if master_ip_address == "" - info"The OpenEBS Maya Master is down, \ - bring it up and manually run: \ - configure_osh.sh script on OpenEBS Storage Host." 
- else - get_ip_address = %Q(vagrant ssh \ - #{hostname} -c 'ifconfig | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1') - - host_ip_address = `#{get_ip_address}` - - if MAYA_RELEASE_TAG == "" - @machine.communicate.sudo("bash \ - /home/ubuntu/demo/maya/scripts/configure_osh.sh \ - --masterip=#{master_ip_address.strip}") - else - @machine.communicate.sudo("bash \ - /home/ubuntu/demo/maya/scripts/configure_osh.sh \ - --masterip=#{master_ip_address.strip} \ - --releasetag=#{MAYA_RELEASE_TAG}") - end - - @machine.communicate.sudo("echo \ - 'export NOMAD_ADDR=http://#{host_ip_address.strip}:4646' >> \ - /home/ubuntu/.profile") - - @machine.communicate.sudo("echo \ - 'export MAPI_ADDR=http://#{host_ip_address.strip}:5656' >> \ - /home/ubuntu/.profile") - - info"Fetching the latest jiva image" - - @machine.communicate.sudo("docker pull openebs/jiva") - end - end - end - end - end - end - end - end diff --git a/k8s/vagrant/1.7.5/openebs-monitoring/openebs-exporter.yaml b/k8s/vagrant/1.7.5/openebs-monitoring/openebs-exporter.yaml deleted file mode 100644 index a18732753e..0000000000 --- a/k8s/vagrant/1.7.5/openebs-monitoring/openebs-exporter.yaml +++ /dev/null @@ -1,35 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: openebs-exporter -spec: - replicas: 1 - template: - metadata: - labels: - name: openebs-exporter - spec: - serviceAccountName: prometheus - containers: - - name: openebs-exporter - image: utkarshmani1997/openebs-exporter:test-11 - args: - # This is the flag provided to exporter at run time - # replace it with your controller's service IP - # For exp : Do kubectl get svc | grep volname-ctrl-svc - # to get the IP. - - --controller.addr=http://10.101.141.121:9501 - ports: - - containerPort: 9500 ---- -# openebs-exporter-service -apiVersion: v1 -kind: Service -metadata: - name: openebs-exporter-service -spec: - selector: # exposes any pods with the following labels as a service - name: openebs-exporter - ports: - - port: 80 # this Service's port (cluster-internal IP clusterIP) - targetPort: 9500 # pods expose this port diff --git a/k8s/vagrant/1.7.5/openebs-monitoring/prometheus.yaml b/k8s/vagrant/1.7.5/openebs-monitoring/prometheus.yaml deleted file mode 100644 index f26861139d..0000000000 --- a/k8s/vagrant/1.7.5/openebs-monitoring/prometheus.yaml +++ /dev/null @@ -1,232 +0,0 @@ -# ConfigMap is used to run prometheus with given configuration. It helps -# if we want to change the configuration from time to time because our -# requirement changes so configurations also need to change accordingly. -# You need to restart the pods to apply these changes. -# -# A scrape (Collect metrics) configuration for running Prometheus on a -# Kubernetes cluster is given below. This uses separate scrape configs -# for cluster components (i.e. API server, node) and services to allow -# each to use different authentication configs. -# -# Kubernetes labels will be added as Prometheus labels on metrics via the -# `labelmap` relabeling action. -# labelmap: Match regex against all label names. Then copy the values of -# the matching labels to label names given by replacement with match group -# references (${1}, ${2}, ...) in replacement substituted by their value. -# -# If you are using Kubernetes 1.7.2 or earlier, please take note of the comments -# for the kubernetes-cadvisor job; you will need to edit or remove this job. 
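The header comments above explain `labelmap` relabeling in the abstract; a concrete job entry makes it easier to see. The following is an illustrative sketch only, not part of the deleted file: a minimal pod scrape job for the `scrape_configs` list, using the `prometheus.io/scrape` opt-in annotation that this configuration's own comments describe. The job name is an assumption for the example.

    # Sketch: a minimal annotated-pod scrape job (job name is illustrative,
    # not part of the deleted file).
    - job_name: 'example-annotated-pods'
      kubernetes_sd_configs:
        - role: pod
      relabel_configs:
        # Keep only pods that opt in with the prometheus.io/scrape=true annotation.
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        # Copy every Kubernetes pod label onto the scraped metrics, as the
        # labelmap notes above describe.
        - action: labelmap
          regex: __meta_kubernetes_pod_label_(.+)

The `keep` rule filters targets before `labelmap` runs, so only opted-in pods have their Kubernetes labels copied onto their metrics.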
- -kind: ConfigMap -metadata: - name: prometheus-config -apiVersion: v1 -data: - prometheus.yml: |- - global: - scrape_interval: 5s - evaluation_interval: 5s - - # scrape config for maya-apiserver pods - # - # The relabeling allows the actual pod scrape endpoint to be configured via the - # following annotations: - # - # * `prometheus.io/scrape`: Only scrape pods that have a value of `true` - # * `prometheus.io/path`: If the metrics path is not `/metrics` override this. - # * `prometheus.io/port`: Scrape the pod on the indicated port instead of the - # pod's declared ports (default is a port-free target if none are declared). - scrape_configs: - - job_name: 'prometheus' - static_configs: - # Please change this IP to the IP of your node (VM) - # Because prometheus-service is running as Type NodePort - # So it's accessible outside the cluster. - - targets: ['172.28.128.12:32514'] - - job_name: 'maya-apiserver' - scheme: http - tls_config: - ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token - kubernetes_sd_configs: - - role: pod - relabel_configs: - - source_labels: [__meta_kubernetes_pod_label_name] - regex: maya-apiserver - action: keep - - source_labels: [__meta_kubernetes_pod_name] - action: replace - target_label: kubernetes_pod_name - - job_name: 'openebs-jiva-controller' - scheme: http - tls_config: - ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token - kubernetes_sd_configs: - - role: pod - relabel_configs: - - source_labels: [__meta_kubernetes_pod_label_openebs_controller] - regex: jiva-controller - action: keep - - source_labels: [__meta_kubernetes_pod_container_port_number] - action: drop - regex: '(.*)3260' - - source_labels: [__meta_kubernetes_pod_name] - action: replace - target_label: kubernetes_pod_name - - job_name: 'openebs-jiva-replica' - scheme: http - tls_config: - ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token - kubernetes_sd_configs: - - role: pod - relabel_configs: - - source_labels: [__meta_kubernetes_pod_label_openebs_replica] - regex: jiva-replica - action: keep - - source_labels: [__meta_kubernetes_pod_name] - action: replace - target_label: kubernetes_pod_name - - source_labels: [__meta_kubernetes_pod_container_port_number] - action: drop - regex: '(.*)9503' - - source_labels: [__meta_kubernetes_pod_container_port_number] - action: drop - regex: '(.*)9504' - - job_name : 'kubelets' - scheme: http - tls_config: - ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token - kubernetes_sd_configs: - - role: node - relabel_configs: - - source_labels: [__meta_kubernetes_node_name] - regex: (.+) - target_label: __metrics_path__ - replacement: /metrics - - source_labels: [__address__] - regex: '(.*):10250' - replacement: '${1}:10255' - target_label: __address__ - - # Scrape config for API servers. - # - # Kubernetes exposes API servers as endpoints to the default/kubernetes - # service so this uses `endpoints` role and uses relabelling to only keep - # the endpoints associated with the default/kubernetes service using the - # default named port `https`. This works for single API server deployments as - # well as HA API server deployments. 
- - job_name: 'kubernetes-apiservers' - kubernetes_sd_configs: - - role: endpoints - - # Default to scraping over https. If required, just disable this or change to - # `http`. - scheme: https - - # This TLS & bearer token file config is used to connect to the actual scrape - # endpoints for cluster components. This is separate to discovery auth - # configuration because discovery & scraping are two separate concerns in - # Prometheus. The discovery auth config is automatic if Prometheus runs inside - # the cluster. Otherwise, more config options have to be provided within the - # . - tls_config: - ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt - - # If your node certificates are self-signed or use a different CA to the - # master CA, then disable certificate verification below. Note that - # certificate verification is an integral part of a secure infrastructure - # so this should only be disabled in a controlled environment. You can - # disable certificate verification by uncommenting the line below. - # - # insecure_skip_verify: true - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token - - # Keep only the default/kubernetes service endpoints for the https port. This - # will add targets for each API server which Kubernetes adds an endpoint to - # the default/kubernetes service. - relabel_configs: - - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name] - action: keep - regex: default;kubernetes;https - - # Scrape config for Kubelet cAdvisor. - # - # This is required for Kubernetes 1.7.3 and later, where cAdvisor metrics - # (those whose names begin with 'container_') have been removed from the - # Kubelet metrics endpoint. This job scrapes the cAdvisor endpoint to - # retrieve those metrics. - # - # In Kubernetes 1.7.0-1.7.2, these metrics are only exposed on the cAdvisor - # HTTP endpoint; use "replacement: /api/v1/nodes/${1}:4194/proxy/metrics" - # in that case (and ensure cAdvisor's HTTP server hasn't been disabled with - # the --cadvisor-port=0 Kubelet flag). - # - # This job is not necessary and should be removed in Kubernetes 1.6 and - # earlier versions, or it will cause the metrics to be scraped twice. - - job_name: 'kubernetes-cadvisor' - - # Default to scraping over https. If required, just disable this or change to - # `http`. - scheme: https - tls_config: - ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token - kubernetes_sd_configs: - - role: node - relabel_configs: - - action: labelmap - regex: __meta_kubernetes_node_label_(.+) - - target_label: __address__ - replacement: kubernetes.default.svc:443 - - source_labels: [__meta_kubernetes_node_name] - regex: (.+) - target_label: __metrics_path__ - replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor - - # Scrap config for node-exporter. - # - # This is required to scrap the node metrics such as IO's, CPU and other metrics - # related to node. By default it scrap only from worker nodes if run as deployment - # but you can monitor master node also. To monitor node you need to run it as - # daemonset and add 'toleration' field in node-exporter.yaml file.For more details - # see 'taints and toleration' in Kubernetes documentation. 
- - job_name: 'kubernetes-node-exporter' - tls_config: - ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token - kubernetes_sd_configs: - - role: node - relabel_configs: - - action: labelmap - regex: __meta_kubernetes_node_label_(.+) - - source_labels: [__meta_kubernetes_role] - action: replace - target_label: kubernetes_role - - source_labels: [__address__] - regex: '(.*):10250' - replacement: '${1}:9100' - target_label: __address__ - - source_labels: [__meta_kubernetes_node_label_kubernetes_io_hostname] - target_label: __instance__ - - source_labels: [job] - regex: 'kubernetes-(.*)' - replacement: '${1}' - target_label: name - - job_name: 'openebs-exporter' - scheme: http - tls_config: - ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token - kubernetes_sd_configs: - - role: pod - relabel_configs: - - source_labels: [__meta_kubernetes_pod_label_name] - regex: openebs-exporter - action: keep - - source_labels: [__meta_kubernetes_pod_name] - action: replace - target_label: kubernetes_pod_name - diff --git a/k8s/vagrant/1.7/Vagrantfile b/k8s/vagrant/1.7/Vagrantfile deleted file mode 100644 index f41671e55e..0000000000 --- a/k8s/vagrant/1.7/Vagrantfile +++ /dev/null @@ -1,406 +0,0 @@ -# -*- mode: ruby -*- -# vi: set ft=ruby : - -# This is parameterized Vagrantfile, that can used for any of the following: -# - Launch VMs auto-configured with kubernetes cluster with dedicated openebs -# - Launch VMs auto-configured with kubernetes cluster with hyperconverged openebs -# - Launch VMs for manual installation of kubernetes or maya clusters or both -# -# The configurable options include: -# - Specify the number of VMs / node types -# - Specify the CPU/RAM for each type of node -# - Specify the desired kubernetes cluster installation option - kubeadm, kargo, manual -# - Specify the base operating system - Ubuntu, CentOS, etc., -# - Specify the kubernetes pod network - flannel, weave, calico, etc,. -# - In case of dedicated, specify the storage network - host, etc., - -# Specify the OpenEBS Deployment Mode - dedicated=1 (default) or hyperconverged=2 -DEPLOY_MODE_NONE = 0 -DEPLOY_MODE_DEDICATED = 1 -DEPLOY_MODE_HC = 2 - -# Changed DEPLOY_MODE from Constant to Local variable deploy_Mode to avoid rewrite warnings on constants. -deploy_Mode=ENV['OPENEBS_DEPLOY_MODE'] || 2 - -# Specify the release versions to be installed -MAYA_RELEASE_TAG = ENV['MAYA_RELEASE_TAG'] || "0.2" - -# TODO - Verify -# LC_ALL is not set on OsX, it will inherit it from SSH on Linux though, -# so might want it to be a conditional -ENV['LC_ALL']="en_US.UTF-8" - -# Specify the number of Kubernetes Master nodes, CPU/RAM per node. -KM_NODES = ENV['KM_NODES'] || 1 -KM_MEM = ENV['KM_MEM'] || 2048 -KM_CPUS = ENV['KM_CPUS'] || 2 - -# Specify the number of Kubernetes Minion nodes, CPU/RAM per node. -KH_NODES = ENV['KH_NODES'] || 2 -KH_MEM = ENV['KH_MEM'] || 1024 -KH_CPUS = ENV['KH_CPUS'] || 2 - -# Specify the number of OpenEBS Master nodes, CPU/RAM per node. (Applicable only for dedicated mode) -MM_NODES = ENV['MM_NODES'] || 1 -MM_MEM = ENV['MM_MEM'] || 1024 -MM_CPUS = ENV['MM_CPUS'] || 1 - -# Specify the number of OpenEBS Storage Host nodes, CPU/RAM per node. (Applicable only for dedicated mode) -MH_NODES = ENV['MH_NODES'] || 2 -MH_MEM = ENV['MH_MEM'] || 1024 -MH_CPUS = ENV['MH_CPUS'] || 1 - -# Don't touch below this line, unless you know what you're doing! 
- -# Vagrantfile API/syntax version. -VAGRANTFILE_API_VERSION = "2" -Vagrant.require_version ">= 1.9.1" - -# Local Variables -machine_ip_address = %Q(ifconfig | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1) -master_ip_address = "" -host_ip_address = "" -get_token_name = "" -token_name = "" -get_token = "" -token = "" - -required_plugins = %w(vagrant-cachier vagrant-triggers) - -required_plugins.each do |plugin| - need_restart = false - unless Vagrant.has_plugin? plugin - system "vagrant plugin install #{plugin}" - need_restart = true - end - exec "vagrant #{ARGV.join(' ')}" if need_restart -end - - -def configureVM(vmCfg, hostname, cpus, mem) - - # Default timeout is 300 sec for boot. - # Uncomment the following line and set the desired timeout value. - # vmCfg.vm.boot_timeout = 300 - - vmCfg.vm.hostname = hostname - vmCfg.vm.network "private_network", type: "dhcp" - - # Needed for ubuntu/xenial64 packaged boxes - # The default user is ubuntu for those boxes - vmCfg.ssh.username = "ubuntu" - vmCfg.vm.provision "shell", inline: <<-SHELL - echo "ubuntu:ubuntu" | sudo chpasswd - SHELL - - # Adding Vagrant-cachier - if Vagrant.has_plugin?("vagrant-cachier") - vmCfg.cache.scope = :machine - vmCfg.cache.enable :apt - vmCfg.cache.enable :gem - end - - # Set resources w.r.t Virtualbox provider - vmCfg.vm.provider "virtualbox" do |vb| - # Uncomment the following line, to launch the Virtual Box console. - # Useful for debugging cases, where the VM doesn't allow login into console - # vb.gui = true - vb.memory = mem - vb.cpus = cpus - vb.customize ["modifyvm", :id, "--cableconnected1", "on"] - end - - return vmCfg -end - -# Entry point of this Vagrantfile -Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| - - # Don't look for updates - if Vagrant.has_plugin?("vagrant-vbguest") then - config.vbguest.auto_update = false - end - - # Check for invalid deployment modes and default to DEDICATED MODE. - if ((deploy_Mode.to_i < DEPLOY_MODE_NONE.to_i) || \ - (deploy_Mode.to_i > DEPLOY_MODE_HC.to_i)) - puts "Invalid value set for OPENEBS_DEPLOY_MODE" - puts "Usage: OPENEBS_DEPLOY_MODE=0 for NONE" - puts "Usage: OPENEBS_DEPLOY_MODE=1 for DEDICATED" - puts "Usage: OPENEBS_DEPLOY_MODE=2 for HYPERCONVERGED" - puts "Defaulting to DEDICATED MODE..." - puts "Do you want to continue?(y/n):" - input = STDIN.gets.chomp - while 1 do - if(input == "n") - Kernel.exit!(0) - elsif(input == "y") - break - else - puts "Invalid input: type 'y' or 'n'" - input = STDIN.gets.chomp - end - end - deploy_Mode = 1 - end - - # K8s Master related only !! - 1.upto(KM_NODES.to_i) do |i| - hostname = "kubemaster-%02d" % [i] - cpus = KM_CPUS - mem = KM_MEM - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/k8s-1.7" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "kubemaster-%02d-console.log" % [i] )] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem) - - # Run in dedicated deployment mode or hyperconverged mode - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_HC.to_i)) - - # Install Kubernetes Master. 
- vmCfg.vm.provision :shell, - inline: "/bin/bash /home/ubuntu/setup/k8s/configure_k8s_master.sh", - privileged: true - - # Setup K8s Credentials - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/ubuntu/setup/k8s/configure_k8s_cred.sh", - privileged: false - - # Setup Weave - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/ubuntu/setup/k8s/configure_k8s_weave.sh", - privileged: false - end - end - end - - # K8s Minions related only !! - 1.upto(KH_NODES.to_i) do |i| - hostname = "kubeminion-%02d" % [i] - cpus = KH_CPUS - mem = KH_MEM - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/k8s-1.7" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "kubeminion-%02d-console.log" % [i] )] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem) - - # Run in dedicated deployment mode or hyperconverged mode - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_HC.to_i)) - - # This runs only when the VM is first provisioned. - # Get the Master IP to join the cluster. - vmCfg.vm.provision :trigger, - :force => true, - :stdout => true, - :stderr => true do |trigger| - trigger.fire do - info"Getting the Master IP and Token to join the cluster..." - master_hostname = "kubemaster-01" - - get_ip_address = %Q(vagrant ssh \ - #{master_hostname} \ - -c 'ifconfig | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP \"\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1') - - master_ip_address = `#{get_ip_address}` - if master_ip_address == "" - info"The Kubernetes Master is down, \ - bring it up and manually run: \ - configure_k8s_host.sh script on Kubernetes Minion." - else - get_token_name = %Q(vagrant ssh \ - #{master_hostname} -c \ - 'kubectl -n kube-system get secrets \ - | grep bootstrap-token | cut -d " " -f1\ - | cut -d "-" -f3\ - | sed "s|\r||g" \ - ') - - token_name = `#{get_token_name}` - - get_token = %Q(vagrant ssh \ - #{master_hostname} -c \ - 'kubectl -n kube-system get secrets bootstrap-token-#{token_name.strip} \ - -o yaml | grep token-secret | cut -d ":" -f2 \ - | cut -d " " -f2 | base64 -d \ - | sed "s|{||g;s|}||g" \ - | sed "s|:|.|g" | xargs echo') - - token = `#{get_token}` - - if token != "" - get_cluster_ip = %Q(vagrant ssh \ - #{master_hostname} \ - -c 'kubectl get svc -o yaml | grep clusterIP \ - | cut -d ":" -f2 | cut -d " " -f2') - - cluster_ip = `#{get_cluster_ip}` - info"Using Master IP - #{master_ip_address.strip}" - info"Using Token - #{token_name.strip}.#{token.strip}" - - @machine.communicate.sudo("bash \ - /home/ubuntu/setup/k8s/configure_k8s_host.sh \ - --masterip=#{master_ip_address.strip} \ - --token=#{token_name.strip}.#{token.strip} \ - --clusterip=#{cluster_ip.strip}") - - @machine.communicate.sudo("sudo systemctl daemon-reload") - @machine.communicate.sudo("sudo systemctl restart kubelet") - - info"Fetching the latest jiva image" - @machine.communicate.sudo("docker pull openebs/jiva:0.3-RC2") - else - info"Invalid Token. Check your Kubernetes setup." - end - end - end - end - end - end - end - - # Maya Master related only !! 
- 1.upto(MM_NODES.to_i) do |i| - hostname = "omm-%02d" % [i] - cpus = MM_CPUS - mem = MM_MEM - - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_NONE.to_i)) - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/openebs-0.2" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "openebs-openebs-0.2-cloudimg-console.log")] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem) - - # Run in dedicated deployment mode - if deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i - - # Install OpenEBS Maya Master - if MAYA_RELEASE_TAG == "" - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/ubuntu/demo/maya/scripts/configure_omm.sh", - privileged: true - else - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/ubuntu/demo/maya/scripts/configure_omm.sh", - :args => "--releasetag=#{MAYA_RELEASE_TAG}", - privileged: true - end - - vmCfg.vm.provision :trigger, - :force => true, - :stdout => true, - :stderr => true do |trigger| - trigger.fire do - - get_ip_address = %Q(vagrant ssh \ - #{hostname} -c 'ifconfig | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1') - - host_ip_address = `#{get_ip_address}` - - @machine.communicate.sudo("echo \ - 'export NOMAD_ADDR=http://#{host_ip_address.strip}:4646' >> \ - /home/ubuntu/.profile") - @machine.communicate.sudo("echo \ - 'export MAPI_ADDR=http://#{host_ip_address.strip}:5656' >> \ - /home/ubuntu/.profile") - end - end - end - end - end - end - - # Maya Host related only !! - 1.upto(MH_NODES.to_i) do |i| - hostname = "osh-%02d" % [i] - cpus = MH_CPUS - mem = MH_MEM - - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_NONE.to_i)) - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/openebs-0.2" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "openebs-openebs-0.2-cloudimg-console.log")] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem) - - # Run in dedicated deployment mode - if deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i - - # Get the Master IP to join the cluster. - vmCfg.vm.provision :trigger, - :force => true, - :stdout => true, - :stderr => true do |trigger| - trigger.fire do - info"Getting the Master IP to join the cluster..." - master_hostname = "omm-01" - - get_ip_address = %Q(vagrant ssh \ - #{master_hostname} -c 'ifconfig | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1') - - master_ip_address = `#{get_ip_address}` - - if master_ip_address == "" - info"The OpenEBS Maya Master is down, \ - bring it up and manually run: \ - configure_osh.sh script on OpenEBS Storage Host." 
- else - get_ip_address = %Q(vagrant ssh \ - #{hostname} -c 'ifconfig | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1') - - host_ip_address = `#{get_ip_address}` - - if MAYA_RELEASE_TAG == "" - @machine.communicate.sudo("bash \ - /home/ubuntu/demo/maya/scripts/configure_osh.sh \ - --masterip=#{master_ip_address.strip}") - else - @machine.communicate.sudo("bash \ - /home/ubuntu/demo/maya/scripts/configure_osh.sh \ - --masterip=#{master_ip_address.strip} \ - --releasetag=#{MAYA_RELEASE_TAG}") - end - - @machine.communicate.sudo("echo \ - 'export NOMAD_ADDR=http://#{host_ip_address.strip}:4646' >> \ - /home/ubuntu/.profile") - @machine.communicate.sudo("echo \ - 'export MAPI_ADDR=http://#{host_ip_address.strip}:5656' >> \ - /home/ubuntu/.profile") - - info"Fetching the latest jiva image" - @machine.communicate.sudo("docker pull openebs/jiva") - end - end - end - end - end - end - end -end diff --git a/k8s/vagrant/1.8.2/Vagrantfile b/k8s/vagrant/1.8.2/Vagrantfile deleted file mode 100644 index 5f3c827c03..0000000000 --- a/k8s/vagrant/1.8.2/Vagrantfile +++ /dev/null @@ -1,412 +0,0 @@ -# -*- mode: ruby -*- -# vi: set ft=ruby : - -# This is parameterized Vagrantfile, that can used for any of the following: -#- Launch VMs auto-configured with kubernetes cluster with dedicated openebs -#- Launch VMs auto-configured with kubernetes cluster with hyperconverged openebs -#- Launch VMs for manual installation of kubernetes or maya clusters or both -# -# The configurable options include: -#- Specify the number of VMs / node types -#- Specify the CPU/RAM for each type of node -#- Specify the desired kubernetes cluster installation option - kubeadm, kargo, manual -#- Specify the base operating system - Ubuntu, CentOS, etc., -#- Specify the kubernetes pod network - flannel, weave, calico, etc,. -#- In case of dedicated, specify the storage network - host, etc., - -# Specify the OpenEBS Deployment Mode - dedicated=1 (default) or hyperconverged=2 -DEPLOY_MODE_NONE = 0 -DEPLOY_MODE_DEDICATED = 1 -DEPLOY_MODE_HC = 2 - -# Changed DEPLOY_MODE from Constant to Local variable deploy_Mode to avoid rewrite warnings on constants. -deploy_Mode=ENV['OPENEBS_DEPLOY_MODE'] || 2 - -# Specify the release versions to be installed -MAYA_RELEASE_TAG = ENV['MAYA_RELEASE_TAG'] || "0.2" - -# TODO - Verify -# LC_ALL is not set on OsX, it will inherit it from SSH on Linux though, -# so might want it to be a conditional -ENV['LC_ALL']="en_US.UTF-8" - -# Specify the number of Kubernetes Master nodes, CPU/RAM per node. -KM_NODES = ENV['KM_NODES'] || 1 -KM_MEM = ENV['KM_MEM'] || 2048 -KM_CPUS = ENV['KM_CPUS'] || 2 - -# Specify the number of Kubernetes Minion nodes, CPU/RAM per node. -KH_NODES = ENV['KH_NODES'] || 2 -KH_MEM = ENV['KH_MEM'] || 1024 -KH_CPUS = ENV['KH_CPUS'] || 2 - -# Specify the number of OpenEBS Master nodes, CPU/RAM per node. (Applicable only for dedicated mode) -MM_NODES = ENV['MM_NODES'] || 1 -MM_MEM = ENV['MM_MEM'] || 1024 -MM_CPUS = ENV['MM_CPUS'] || 1 - -# Specify the number of OpenEBS Storage Host nodes, CPU/RAM per node. (Applicable only for dedicated mode) -MH_NODES = ENV['MH_NODES'] || 2 -MH_MEM = ENV['MH_MEM'] || 1024 -MH_CPUS = ENV['MH_CPUS'] || 1 - -# Don't touch below this line, unless you know what you're doing! - -# Vagrantfile API/syntax version. 
-VAGRANTFILE_API_VERSION = "2" -Vagrant.require_version ">= 1.9.1" - -# Local Variables -machine_ip_address = %Q(ifconfig | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1) -master_ip_address = "" -host_ip_address = "" -get_token_name = "" -token_name = "" -get_token = "" -token = "" - -required_plugins = %w(vagrant-cachier vagrant-triggers) - -required_plugins.each do |plugin| - need_restart = false - unless Vagrant.has_plugin? plugin - system "vagrant plugin install #{plugin}" - need_restart = true - end - exec "vagrant #{ARGV.join(' ')}" if need_restart -end - - -def configureVM(vmCfg, hostname, cpus, mem) - - # Default timeout is 300 sec for boot. - # Uncomment the following line and set the desired timeout value. - # vmCfg.vm.boot_timeout = 300 - - vmCfg.vm.hostname = hostname - vmCfg.vm.network "private_network", type: "dhcp" - - # Needed for ubuntu/xenial64 packaged boxes - # The default user is ubuntu for those boxes - vmCfg.ssh.username = "ubuntu" - vmCfg.vm.provision "shell", inline: <<-SHELL - echo "ubuntu:ubuntu" | sudo chpasswd - SHELL - - # Adding Vagrant-cachier - if Vagrant.has_plugin?("vagrant-cachier") - vmCfg.cache.scope = :machine - vmCfg.cache.enable :apt - vmCfg.cache.enable :gem - end - - # Set resources w.r.t Virtualbox provider - vmCfg.vm.provider "virtualbox" do |vb| - # Uncomment the following line, to launch the Virtual Box console. - # Useful for debugging cases, where the VM doesn't allow login into console - # vb.gui = true - vb.memory = mem - vb.cpus = cpus - vb.customize ["modifyvm", :id, "--cableconnected1", "on"] - end - - return vmCfg -end - -# Entry point of this Vagrantfile -Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| - - # Don't look for updates - if Vagrant.has_plugin?("vagrant-vbguest") then - config.vbguest.auto_update = false - end - - # Check for invalid deployment modes and default to DEDICATED MODE. - if ((deploy_Mode.to_i < DEPLOY_MODE_NONE.to_i) || \ - (deploy_Mode.to_i > DEPLOY_MODE_HC.to_i)) - puts "Invalid value set for OPENEBS_DEPLOY_MODE" - puts "Usage: OPENEBS_DEPLOY_MODE=0 for NONE" - puts "Usage: OPENEBS_DEPLOY_MODE=1 for DEDICATED" - puts "Usage: OPENEBS_DEPLOY_MODE=2 for HYPERCONVERGED" - puts "Defaulting to DEDICATED MODE..." - puts "Do you want to continue?(y/n):" - input = STDIN.gets.chomp - while 1 do - if(input == "n") - Kernel.exit!(0) - elsif(input == "y") - break - else - puts "Invalid input: type 'y' or 'n'" - input = STDIN.gets.chomp - end - end - deploy_Mode = 1 - end - - # K8s Master related only !! - 1.upto(KM_NODES.to_i) do |i| - hostname = "kubemaster-%02d" % [i] - cpus = KM_CPUS - mem = KM_MEM - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/k8s-1.8.2" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "kubemaster-%02d-console.log" % [i] )] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem) - - # Run in dedicated deployment mode or hyperconverged mode - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_HC.to_i)) - - # Install Kubernetes Master. 
- vmCfg.vm.provision :shell, - inline: "/bin/bash /home/ubuntu/setup/k8s/configure_k8s_master.sh", - privileged: true - - # Setup K8s Credentials - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/ubuntu/setup/k8s/configure_k8s_cred.sh", - privileged: false - - # Setup Weave - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/ubuntu/setup/k8s/configure_k8s_weave.sh", - privileged: false - - # Setup Dashboard - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/ubuntu/setup/k8s/configure_k8s_dashboard.sh", - privileged: false - end - end - end - - # K8s Minions related only !! - 1.upto(KH_NODES.to_i) do |i| - hostname = "kubeminion-%02d" % [i] - cpus = KH_CPUS - mem = KH_MEM - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/k8s-1.8.2" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "kubeminion-%02d-console.log" % [i] )] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem) - - # Run in dedicated deployment mode or hyperconverged mode - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_HC.to_i)) - - # This runs only when the VM is first provisioned. - # Get the Master IP to join the cluster. - vmCfg.vm.provision :trigger, - :force => true, - :stdout => true, - :stderr => true do |trigger| - trigger.fire do - info"Getting the Master IP and Token to join the cluster..." - master_hostname = "kubemaster-01" - - get_ip_address = %Q(vagrant ssh \ - #{master_hostname} \ - -c 'ifconfig | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP \"\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1') - - master_ip_address = `#{get_ip_address}` - - if master_ip_address == "" - info"The Kubernetes Master is down, \ - bring it up and manually run: \ - configure_k8s_host.sh script on Kubernetes Minion." - else - get_token_name = %Q(vagrant ssh \ - #{master_hostname} -c \ - 'kubectl -n kube-system get secrets \ - | grep bootstrap-token | cut -d " " -f1\ - | cut -d "-" -f3\ - | sed "s|\r||g" \ - ') - - token_name = `#{get_token_name}` - - get_token = %Q(vagrant ssh \ - #{master_hostname} -c \ - 'kubectl -n kube-system get secrets bootstrap-token-#{token_name.strip} \ - -o yaml | grep token-secret | cut -d ":" -f2 \ - | cut -d " " -f2 | base64 -d \ - | sed "s|{||g;s|}||g" \ - | sed "s|:|.|g" | xargs echo') - - token = `#{get_token}` - - if token != "" - get_cluster_ip = %Q(vagrant ssh \ - #{master_hostname} \ - -c 'kubectl get svc -o yaml | grep clusterIP \ - | cut -d ":" -f2 | cut -d " " -f2') - - cluster_ip = `#{get_cluster_ip}` - info"Using Master IP - #{master_ip_address.strip}" - info"Using Token - #{token_name.strip}.#{token.strip}" - - @machine.communicate.sudo("bash \ - /home/ubuntu/setup/k8s/configure_k8s_host.sh \ - --masterip=#{master_ip_address.strip} \ - --token=#{token_name.strip}.#{token.strip} \ - --clusterip=#{cluster_ip.strip}") - - info"Fetching the latest jiva image" - @machine.communicate.sudo("docker pull openebs/jiva:0.5.3") - else - info"Invalid Token. Check your Kubernetes setup." - end - end - end - end - end - end - end - - # Maya Master related only !! 
- 1.upto(MM_NODES.to_i) do |i| - hostname = "omm-%02d" % [i] - cpus = MM_CPUS - mem = MM_MEM - - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_NONE.to_i)) - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/openebs-0.2" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "openebs-openebs-0.2-cloudimg-console.log")] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem) - - # Run in dedicated deployment mode - if deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i - - # Install OpenEBS Maya Master - if MAYA_RELEASE_TAG == "" - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/ubuntu/demo/maya/scripts/configure_omm.sh", - privileged: true - else - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/ubuntu/demo/maya/scripts/configure_omm.sh", - :args => "--releasetag=#{MAYA_RELEASE_TAG}", - privileged: true - end - - vmCfg.vm.provision :trigger, - :force => true, - :stdout => true, - :stderr => true do |trigger| - trigger.fire do - - get_ip_address = %Q(vagrant ssh \ - #{hostname} -c 'ifconfig | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1') - - host_ip_address = `#{get_ip_address}` - - @machine.communicate.sudo("echo \ - 'export NOMAD_ADDR=http://#{host_ip_address.strip}:4646' >> \ - /home/ubuntu/.profile") - - @machine.communicate.sudo("echo \ - 'export MAPI_ADDR=http://#{host_ip_address.strip}:5656' >> \ - /home/ubuntu/.profile") - end - end - end - end - end - end - - # Maya Host related only !! - 1.upto(MH_NODES.to_i) do |i| - hostname = "osh-%02d" % [i] - cpus = MH_CPUS - mem = MH_MEM - - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_NONE.to_i)) - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/openebs-0.2" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "openebs-openebs-0.2-cloudimg-console.log")] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem) - - # Run in dedicated deployment mode - if deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i - - # Get the Master IP to join the cluster. - vmCfg.vm.provision :trigger, - :force => true, - :stdout => true, - :stderr => true do |trigger| - trigger.fire do - info"Getting the Master IP to join the cluster..." - master_hostname = "omm-01" - - get_ip_address = %Q(vagrant ssh \ - #{master_hostname} -c 'ifconfig | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1') - - master_ip_address = `#{get_ip_address}` - - if master_ip_address == "" - info"The OpenEBS Maya Master is down, \ - bring it up and manually run: \ - configure_osh.sh script on OpenEBS Storage Host." 
- else - get_ip_address = %Q(vagrant ssh \ - #{hostname} -c 'ifconfig | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1') - - host_ip_address = `#{get_ip_address}` - - if MAYA_RELEASE_TAG == "" - @machine.communicate.sudo("bash \ - /home/ubuntu/demo/maya/scripts/configure_osh.sh \ - --masterip=#{master_ip_address.strip}") - else - @machine.communicate.sudo("bash \ - /home/ubuntu/demo/maya/scripts/configure_osh.sh \ - --masterip=#{master_ip_address.strip} \ - --releasetag=#{MAYA_RELEASE_TAG}") - end - - @machine.communicate.sudo("echo \ - 'export NOMAD_ADDR=http://#{host_ip_address.strip}:4646' >> \ - /home/ubuntu/.profile") - - @machine.communicate.sudo("echo \ - 'export MAPI_ADDR=http://#{host_ip_address.strip}:5656' >> \ - /home/ubuntu/.profile") - - info"Fetching the latest jiva image" - - @machine.communicate.sudo("docker pull openebs/jiva") - end - end - end - end - end - end - end - end diff --git a/k8s/vagrant/1.8.8/centos/Vagrantfile b/k8s/vagrant/1.8.8/centos/Vagrantfile deleted file mode 100644 index 77a44e0c19..0000000000 --- a/k8s/vagrant/1.8.8/centos/Vagrantfile +++ /dev/null @@ -1,286 +0,0 @@ -# -*- mode: ruby -*- -# vi: set ft=ruby : - -#This is parameterized Vagrantfile, that can used for any of the following: -#- Launch VMs auto-configured with kubernetes cluster with dedicated openebs -#- Launch VMs auto-configured with kubernetes cluster with hyperconverged openebs -#- Launch VMs for manual installation of kubernetes or maya clusters or both -# -#The configurable options include: -#- Specify the number of VMs / node types -#- Specify the CPU/RAM for each type of node -#- Specify the desired kubernetes cluster installation option - kubeadm, kargo, manual -#- Specify the base operating system - Ubuntu, CentOS, etc., -#- Specify the kubernetes pod network - flannel, weave, calico, etc,. -#- In case of dedicated, specify the storage network - host, etc., - -#Specify the OpenEBS Deployment Mode - dedicated=1 (default) or hyperconverged=2 -DEPLOY_MODE_NONE = 0 -DEPLOY_MODE_DEDICATED = 1 -DEPLOY_MODE_HC = 2 - -distro=ENV['DISTRIBUTION'] || "ubuntu" -#Changed DEPLOY_MODE from Constant to Local variable deploy_Mode to avoid rewrite warnings on constants. -deploy_Mode=ENV['OPENEBS_DEPLOY_MODE'] || 2 - -#Specify the release versions to be installed -MAYA_RELEASE_TAG = ENV['MAYA_RELEASE_TAG'] || "0.2" - -#TODO - Verify -#LC_ALL is not set on OsX, it will inherit it from SSH on Linux though, -#so might want it to be a conditional -ENV['LC_ALL']="en_US.UTF-8" - -#Specify the number of Kubernetes Master nodes, CPU/RAM per node. -KM_NODES = ENV['KM_NODES'] || 1 -KM_MEM = ENV['KM_MEM'] || 2048 -KM_CPUS = ENV['KM_CPUS'] || 2 - -#Specify the number of Kubernetes Minion nodes, CPU/RAM per node. -KH_NODES = ENV['KH_NODES'] || 2 -KH_MEM = ENV['KH_MEM'] || 2048 -KH_CPUS = ENV['KH_CPUS'] || 2 - -#Specify the number of OpenEBS Master nodes, CPU/RAM per node. (Applicable only for dedicated mode) -MM_NODES = ENV['MM_NODES'] || 1 -MM_MEM = ENV['MM_MEM'] || 1024 -MM_CPUS = ENV['MM_CPUS'] || 1 - -#Specify the number of OpenEBS Storage Host nodes, CPU/RAM per node. (Applicable only for dedicated mode) -MH_NODES = ENV['MH_NODES'] || 2 -MH_MEM = ENV['MH_MEM'] || 1024 -MH_CPUS = ENV['MH_CPUS'] || 1 - -# Don't touch below this line, unless you know what you're doing! - -# Vagrantfile API/syntax version. 
-VAGRANTFILE_API_VERSION = "2" -Vagrant.require_version ">= 1.9.1" - -#Local Variables -machine_ip_address = %Q(ip addr show | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1) - -master_ip_address = "" -host_ip_address = "" -get_token_name = "" -token_name = "" -get_token = "" -token = "" - -required_plugins = %w(vagrant-triggers) - -required_plugins.each do |plugin| - need_restart = false - unless Vagrant.has_plugin? plugin - system "vagrant plugin install #{plugin}" - need_restart = true - end - exec "vagrant #{ARGV.join(' ')}" if need_restart -end - - -def configureVM(vmCfg, hostname, cpus, mem, distro) - - # Default timeout is 300 sec for boot. - # Uncomment the following line and set the desired timeout value. - # vmCfg.vm.boot_timeout = 300 - vmCfg.vm.hostname = hostname - vmCfg.vm.network "private_network", type: "dhcp" - - # Needed for ubuntu/xenial64 packaged boxes - # The default user is ubuntu for those boxes - vmCfg.ssh.username = "vagrant" - vmCfg.vm.provision "shell", inline: <<-SHELL - echo "vagrant:vagrant" | sudo chpasswd - SHELL - - #Adding Vagrant-cachier - if Vagrant.has_plugin?("vagrant-cachier") - vmCfg.cache.scope = :machine - vmCfg.cache.enable :apt - vmCfg.cache.enable :gem - end - - # Set resources w.r.t Virtualbox provider - vmCfg.vm.provider "virtualbox" do |vb| - #Uncomment the following line, to launch the Virtual Box console. - #Useful for debugging cases, where the VM doesn't allow login into console - #vb.gui = true - vb.memory = mem - vb.cpus = cpus - vb.customize ["modifyvm", :id, "--cableconnected1", "on"] - end - - return vmCfg -end - -# Entry point of this Vagrantfile -Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| - - #Don't look for updates - if Vagrant.has_plugin?("vagrant-vbguest") then - config.vbguest.auto_update = false - end - - #Check for invalid deployment modes and default to DEDICATED MODE. - if ((deploy_Mode.to_i < DEPLOY_MODE_NONE.to_i) || \ - (deploy_Mode.to_i > DEPLOY_MODE_HC.to_i)) - puts "Invalid value set for OPENEBS_DEPLOY_MODE" - puts "Usage: OPENEBS_DEPLOY_MODE=0 for NONE" - puts "Usage: OPENEBS_DEPLOY_MODE=1 for DEDICATED" - puts "Usage: OPENEBS_DEPLOY_MODE=2 for HYPERCONVERGED" - puts "Defaulting to DEDICATED MODE..." - puts "Do you want to continue?(y/n):" - input = STDIN.gets.chomp - while 1 do - if(input == "n") - Kernel.exit!(0) - elsif(input == "y") - break - else - puts "Invalid input: type 'y' or 'n'" - input = STDIN.gets.chomp - end - end - deploy_Mode = 1 - end - - # K8s Master related only !! - 1.upto(KM_NODES.to_i) do |i| - hostname = "kubemaster-%02d" % [i] - cpus = KM_CPUS - mem = KM_MEM - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/k8s-1.8.8-centos" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "kubemaster-%02d-console.log" % [i] )] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem, distro) - - # Run in dedicated deployment mode or hyperconverged mode - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_HC.to_i)) - - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/prepare_network.sh", - run: "always", - privileged: true - - # Install Kubernetes Master. 
- vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_master.sh", - privileged: true - - # Setup K8s Credentials - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_cred.sh", - privileged: false - - # Setup Weave - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_cni.sh", - privileged: false - - # Setup Dashboard - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_dashboard.sh", - privileged: false - end - end - end - - # K8s Minions related only !! - 1.upto(KH_NODES.to_i) do |i| - hostname = "kubeminion-%02d" % [i] - cpus = KH_CPUS - mem = KH_MEM - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/k8s-1.8.8-centos" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "kubeminion-%02d-console.log" % [i] )] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem, distro) - - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/prepare_network.sh", - run: "always", - privileged: true - - # Run in dedicated deployment mode or hyperconverged mode - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_HC.to_i)) - - # This runs only when the VM is first provisioned. - # Get the Master IP to join the cluster. - vmCfg.vm.provision :trigger, - :force => true, - :stdout => true, - :stderr => true do |trigger| - trigger.fire do - info"Getting the Master IP and Token to join the cluster..." - master_hostname = "kubemaster-01" - - get_ip_address = %Q(vagrant ssh \ - #{master_hostname} \ - -c 'ip addr show | grep -oP \ - "inet \\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP \"\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1') - - master_ip_address = `#{get_ip_address}` - if master_ip_address == "" - info"The Kubernetes Master is down, \ - bring it up and manually run: \ - configure_k8s_host.sh script on Kubernetes Minion." - else - get_token_name = %Q(vagrant ssh \ - #{master_hostname} -c \ - 'kubectl -n kube-system get secrets \ - | grep bootstrap-token | cut -d " " -f1\ - | cut -d "-" -f3\ - | sed "s|\r||g" \ - ') - - token_name = `#{get_token_name}` - - get_token = %Q(vagrant ssh \ - #{master_hostname} -c \ - 'kubectl -n kube-system get secrets bootstrap-token-#{token_name.strip} \ - -o yaml | grep token-secret | cut -d ":" -f2 \ - | cut -d " " -f2 | base64 -d \ - | sed "s|{||g;s|}||g" \ - | sed "s|:|.|g" | xargs echo') - - token = `#{get_token}` - - if token != "" - get_token_sha = %Q(vagrant ssh \ - #{master_hostname} \ - -c 'openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \ - | openssl rsa -pubin -outform der 2>/dev/null \ - | openssl dgst -sha256 -hex \ - | sed "s/^.* //"') - - token_sha = `#{get_token_sha}` - info"Using Master IP - #{master_ip_address.strip}" - info"Using Token - #{token_name.strip}.#{token.strip}" - info"Using Discovery Token SHA - #{token_sha.strip}" - - @machine.communicate.sudo("bash \ - /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_host.sh \ - --masterip=#{master_ip_address.strip} \ - --token=#{token_name.strip}.#{token.strip} \ - --token-sha=#{token_sha.strip}") - else - info"Invalid Token. Check your Kubernetes setup." 
- end - end - end - end - end - end - end - end diff --git a/k8s/vagrant/1.8.8/ubuntu/Vagrantfile b/k8s/vagrant/1.8.8/ubuntu/Vagrantfile deleted file mode 100644 index 9e94d49983..0000000000 --- a/k8s/vagrant/1.8.8/ubuntu/Vagrantfile +++ /dev/null @@ -1,286 +0,0 @@ -# -*- mode: ruby -*- -# vi: set ft=ruby : - -#This is parameterized Vagrantfile, that can used for any of the following: -#- Launch VMs auto-configured with kubernetes cluster with dedicated openebs -#- Launch VMs auto-configured with kubernetes cluster with hyperconverged openebs -#- Launch VMs for manual installation of kubernetes or maya clusters or both -# -#The configurable options include: -#- Specify the number of VMs / node types -#- Specify the CPU/RAM for each type of node -#- Specify the desired kubernetes cluster installation option - kubeadm, kargo, manual -#- Specify the base operating system - Ubuntu, CentOS, etc., -#- Specify the kubernetes pod network - flannel, weave, calico, etc,. -#- In case of dedicated, specify the storage network - host, etc., - -#Specify the OpenEBS Deployment Mode - dedicated=1 (default) or hyperconverged=2 -DEPLOY_MODE_NONE = 0 -DEPLOY_MODE_DEDICATED = 1 -DEPLOY_MODE_HC = 2 - -distro=ENV['DISTRIBUTION'] || "ubuntu" -#Changed DEPLOY_MODE from Constant to Local variable deploy_Mode to avoid rewrite warnings on constants. -deploy_Mode=ENV['OPENEBS_DEPLOY_MODE'] || 2 - -#Specify the release versions to be installed -MAYA_RELEASE_TAG = ENV['MAYA_RELEASE_TAG'] || "0.2" - -#TODO - Verify -#LC_ALL is not set on OsX, it will inherit it from SSH on Linux though, -#so might want it to be a conditional -ENV['LC_ALL']="en_US.UTF-8" - -#Specify the number of Kubernetes Master nodes, CPU/RAM per node. -KM_NODES = ENV['KM_NODES'] || 1 -KM_MEM = ENV['KM_MEM'] || 2048 -KM_CPUS = ENV['KM_CPUS'] || 2 - -#Specify the number of Kubernetes Minion nodes, CPU/RAM per node. -KH_NODES = ENV['KH_NODES'] || 3 -KH_MEM = ENV['KH_MEM'] || 2048 -KH_CPUS = ENV['KH_CPUS'] || 2 - -#Specify the number of OpenEBS Master nodes, CPU/RAM per node. (Applicable only for dedicated mode) -MM_NODES = ENV['MM_NODES'] || 1 -MM_MEM = ENV['MM_MEM'] || 1024 -MM_CPUS = ENV['MM_CPUS'] || 1 - -#Specify the number of OpenEBS Storage Host nodes, CPU/RAM per node. (Applicable only for dedicated mode) -MH_NODES = ENV['MH_NODES'] || 2 -MH_MEM = ENV['MH_MEM'] || 1024 -MH_CPUS = ENV['MH_CPUS'] || 1 - -# Don't touch below this line, unless you know what you're doing! - -# Vagrantfile API/syntax version. -VAGRANTFILE_API_VERSION = "2" -Vagrant.require_version ">= 1.9.1" - -#Local Variables -machine_ip_address = %Q(ip addr show | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1) - -master_ip_address = "" -host_ip_address = "" -get_token_name = "" -token_name = "" -get_token = "" -token = "" - -required_plugins = %w(vagrant-cachier vagrant-triggers) - -required_plugins.each do |plugin| - need_restart = false - unless Vagrant.has_plugin? plugin - system "vagrant plugin install #{plugin}" - need_restart = true - end - exec "vagrant #{ARGV.join(' ')}" if need_restart -end - - -def configureVM(vmCfg, hostname, cpus, mem, distro) - - # Default timeout is 300 sec for boot. - # Uncomment the following line and set the desired timeout value. 
- # vmCfg.vm.boot_timeout = 300 - vmCfg.vm.hostname = hostname - vmCfg.vm.network "private_network", type: "dhcp" - - # Needed for ubuntu/xenial64 packaged boxes - # The default user is ubuntu for those boxes - vmCfg.ssh.username = "ubuntu" - vmCfg.vm.provision "shell", inline: <<-SHELL - echo "ubuntu:ubuntu" | sudo chpasswd - SHELL - - #Adding Vagrant-cachier - if Vagrant.has_plugin?("vagrant-cachier") - vmCfg.cache.scope = :machine - vmCfg.cache.enable :apt - vmCfg.cache.enable :gem - end - - # Set resources w.r.t Virtualbox provider - vmCfg.vm.provider "virtualbox" do |vb| - #Uncomment the following line, to launch the Virtual Box console. - #Useful for debugging cases, where the VM doesn't allow login into console - #vb.gui = true - vb.memory = mem - vb.cpus = cpus - vb.customize ["modifyvm", :id, "--cableconnected1", "on"] - end - - return vmCfg -end - -# Entry point of this Vagrantfile -Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| - - #Don't look for updates - if Vagrant.has_plugin?("vagrant-vbguest") then - config.vbguest.auto_update = false - end - - #Check for invalid deployment modes and default to DEDICATED MODE. - if ((deploy_Mode.to_i < DEPLOY_MODE_NONE.to_i) || \ - (deploy_Mode.to_i > DEPLOY_MODE_HC.to_i)) - puts "Invalid value set for OPENEBS_DEPLOY_MODE" - puts "Usage: OPENEBS_DEPLOY_MODE=0 for NONE" - puts "Usage: OPENEBS_DEPLOY_MODE=1 for DEDICATED" - puts "Usage: OPENEBS_DEPLOY_MODE=2 for HYPERCONVERGED" - puts "Defaulting to DEDICATED MODE..." - puts "Do you want to continue?(y/n):" - input = STDIN.gets.chomp - while 1 do - if(input == "n") - Kernel.exit!(0) - elsif(input == "y") - break - else - puts "Invalid input: type 'y' or 'n'" - input = STDIN.gets.chomp - end - end - deploy_Mode = 1 - end - - # K8s Master related only !! - 1.upto(KM_NODES.to_i) do |i| - hostname = "kubemaster-%02d" % [i] - cpus = KM_CPUS - mem = KM_MEM - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/k8s-1.8.8-ubuntu" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "kubemaster-%02d-console.log" % [i] )] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem, distro) - - # Run in dedicated deployment mode or hyperconverged mode - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_HC.to_i)) - - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/prepare_network.sh", - run: "always", - privileged: true - - # Install Kubernetes Master. - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_master.sh", - privileged: true - - # Setup K8s Credentials - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_cred.sh", - privileged: false - - # Setup Weave - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_cni.sh", - privileged: false - - # Setup Dashboard - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_dashboard.sh", - privileged: false - end - end - end - - # K8s Minions related only !! 
- 1.upto(KH_NODES.to_i) do |i| - hostname = "kubeminion-%02d" % [i] - cpus = KH_CPUS - mem = KH_MEM - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/k8s-1.8.8-ubuntu" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "kubeminion-%02d-console.log" % [i] )] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem, distro) - - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/prepare_network.sh", - run: "always", - privileged: true - - # Run in dedicated deployment mode or hyperconverged mode - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_HC.to_i)) - - # This runs only when the VM is first provisioned. - # Get the Master IP to join the cluster. - vmCfg.vm.provision :trigger, - :force => true, - :stdout => true, - :stderr => true do |trigger| - trigger.fire do - info"Getting the Master IP and Token to join the cluster..." - master_hostname = "kubemaster-01" - - get_ip_address = %Q(vagrant ssh \ - #{master_hostname} \ - -c 'ip addr show | grep -oP \ - "inet \\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP \"\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1') - - master_ip_address = `#{get_ip_address}` - if master_ip_address == "" - info"The Kubernetes Master is down, \ - bring it up and manually run: \ - configure_k8s_host.sh script on Kubernetes Minion." - else - get_token_name = %Q(vagrant ssh \ - #{master_hostname} -c \ - 'kubectl -n kube-system get secrets \ - | grep bootstrap-token | cut -d " " -f1\ - | cut -d "-" -f3\ - | sed "s|\r||g" \ - ') - - token_name = `#{get_token_name}` - - get_token = %Q(vagrant ssh \ - #{master_hostname} -c \ - 'kubectl -n kube-system get secrets bootstrap-token-#{token_name.strip} \ - -o yaml | grep token-secret | cut -d ":" -f2 \ - | cut -d " " -f2 | base64 -d \ - | sed "s|{||g;s|}||g" \ - | sed "s|:|.|g" | xargs echo') - - token = `#{get_token}` - - if token != "" - get_token_sha = %Q(vagrant ssh \ - #{master_hostname} \ - -c 'openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \ - | openssl rsa -pubin -outform der 2>/dev/null \ - | openssl dgst -sha256 -hex \ - | sed "s/^.* //"') - - token_sha = `#{get_token_sha}` - info"Using Master IP - #{master_ip_address.strip}" - info"Using Token - #{token_name.strip}.#{token.strip}" - info"Using Discovery Token SHA - #{token_sha.strip}" - - @machine.communicate.sudo("bash \ - /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_host.sh \ - --masterip=#{master_ip_address.strip} \ - --token=#{token_name.strip}.#{token.strip} \ - --token-sha=#{token_sha.strip}") - else - info"Invalid Token. Check your Kubernetes setup." 
- end - end - end - end - end - end - end - end diff --git a/k8s/vagrant/1.9.4/centos/Vagrantfile b/k8s/vagrant/1.9.4/centos/Vagrantfile deleted file mode 100644 index bf8c4e6a0a..0000000000 --- a/k8s/vagrant/1.9.4/centos/Vagrantfile +++ /dev/null @@ -1,297 +0,0 @@ -# -*- mode: ruby -*- -# vi: set ft=ruby : - -# This is parameterized Vagrantfile, that can used for any of the following: -# - Launch VMs auto-configured with kubernetes cluster with dedicated openebs -# - Launch VMs auto-configured with kubernetes cluster with hyperconverged openebs -# - Launch VMs for manual installation of kubernetes or maya clusters or both - -# The configurable options include: -# - Specify the number of VMs / node types -# - Specify the CPU/RAM for each type of node -# - Specify the desired kubernetes cluster installation option - kubeadm, kargo, manual -# - Specify the base operating system - Ubuntu, CentOS, etc., -# - Specify the kubernetes pod network - flannel, weave, calico, etc,. -# - In case of dedicated, specify the storage network - host, etc., - -# Specify the OpenEBS Deployment Mode - dedicated=1 (default) or hyperconverged=2 -DEPLOY_MODE_NONE = 0 -DEPLOY_MODE_DEDICATED = 1 -DEPLOY_MODE_HC = 2 - -distro=ENV['DISTRIBUTION'] || "ubuntu" -# Changed DEPLOY_MODE from Constant to Local variable deploy_Mode to avoid rewrite warnings on constants. -deploy_Mode=ENV['OPENEBS_DEPLOY_MODE'] || 2 - -# Specify the release versions to be installed -MAYA_RELEASE_TAG = ENV['MAYA_RELEASE_TAG'] || "0.2" - -# TODO - Verify -# LC_ALL is not set on OsX, it will inherit it from SSH on Linux though, -# so might want it to be a conditional -ENV['LC_ALL']="en_US.UTF-8" - -# Specify the number of Kubernetes Master nodes, CPU/RAM per node. -KM_NODES = ENV['KM_NODES'] || 1 -KM_MEM = ENV['KM_MEM'] || 2048 -KM_CPUS = ENV['KM_CPUS'] || 2 - -# Specify the number of Kubernetes Minion nodes, CPU/RAM per node. -KH_NODES = ENV['KH_NODES'] || 2 -KH_MEM = ENV['KH_MEM'] || 2048 -KH_CPUS = ENV['KH_CPUS'] || 2 - -# Specify the number of OpenEBS Master nodes, CPU/RAM per node. (Applicable only for dedicated mode) -MM_NODES = ENV['MM_NODES'] || 1 -MM_MEM = ENV['MM_MEM'] || 1024 -MM_CPUS = ENV['MM_CPUS'] || 1 - -# Specify the number of OpenEBS Storage Host nodes, CPU/RAM per node. (Applicable only for dedicated mode) -MH_NODES = ENV['MH_NODES'] || 2 -MH_MEM = ENV['MH_MEM'] || 1024 -MH_CPUS = ENV['MH_CPUS'] || 1 - -# Don't touch below this line, unless you know what you're doing! - -# Vagrantfile API/syntax version. -VAGRANTFILE_API_VERSION = "2" -Vagrant.require_version ">= 1.9.1" - -# Local Variables -machine_ip_address = %Q(ip addr show | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1) - -master_ip_address = "" -host_ip_address = "" -get_token_name = "" -token_name = "" -get_token = "" -token = "" - -required_plugins = %w(vagrant-cachier vagrant-triggers) - -required_plugins.each do |plugin| - need_restart = false - unless Vagrant.has_plugin? plugin - system "vagrant plugin install #{plugin}" - need_restart = true - end - exec "vagrant #{ARGV.join(' ')}" if need_restart -end - - -def configureVM(vmCfg, hostname, cpus, mem, distro) - - # Default timeout is 300 sec for boot. - # Uncomment the following line and set the desired timeout value. 
- # vmCfg.vm.boot_timeout = 300 - vmCfg.vm.hostname = hostname - vmCfg.vm.network "private_network", type: "dhcp" - - # Needed for ubuntu/xenial64 packaged boxes - # The default user is ubuntu for those boxes - vmCfg.ssh.username = "vagrant" - vmCfg.vm.provision "shell", inline: <<-SHELL - echo "vagrant:vagrant" | sudo chpasswd - SHELL - - # Adding Vagrant-cachier - if Vagrant.has_plugin?("vagrant-cachier") - vmCfg.cache.scope = :machine - vmCfg.cache.enable :apt - vmCfg.cache.enable :gem - end - - # Set resources w.r.t Virtualbox provider - vmCfg.vm.provider "virtualbox" do |vb| - # Uncomment the following line, to launch the Virtual Box console. - # Useful for debugging cases, where the VM doesn't allow login into console - # vb.gui = true - vb.memory = mem - vb.cpus = cpus - vb.customize ["modifyvm", :id, "--cableconnected1", "on"] - end - - return vmCfg -end - -# Entry point of this Vagrantfile -Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| - - # Don't look for updates - if Vagrant.has_plugin?("vagrant-vbguest") then - config.vbguest.auto_update = false - end - - # Check for invalid deployment modes and default to DEDICATED MODE. - if ((deploy_Mode.to_i < DEPLOY_MODE_NONE.to_i) || \ - (deploy_Mode.to_i > DEPLOY_MODE_HC.to_i)) - - puts "Invalid value set for OPENEBS_DEPLOY_MODE" - puts "Usage: OPENEBS_DEPLOY_MODE=0 for NONE" - puts "Usage: OPENEBS_DEPLOY_MODE=1 for DEDICATED" - puts "Usage: OPENEBS_DEPLOY_MODE=2 for HYPERCONVERGED" - puts "Defaulting to DEDICATED MODE..." - puts "Do you want to continue?(y/n):" - input = STDIN.gets.chomp - while 1 do - if (input == "n") - Kernel.exit!(0) - elsif (input == "y") - break - else - puts "Invalid input: type 'y' or 'n'" - input = STDIN.gets.chomp - end - end - deploy_Mode = 1 - end - - # K8s Master related only !! - 1.upto(KM_NODES.to_i) do |i| - hostname = "kubemaster-%02d" % [i] - cpus = KM_CPUS - mem = KM_MEM - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/k8s-1.9.4-centos" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "kubemaster-%02d-console.log" % [i] )] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem, distro) - - # Run in dedicated deployment mode or hyperconverged mode - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_HC.to_i)) - - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/prepare_network.sh", - run: "always", - privileged: true - - # Install Kubernetes Master. - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_master.sh", - privileged: true - - # Setup K8s Credentials - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_cred.sh", - privileged: false - - # Setup CNI - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_cni.sh", - privileged: false - - # Setup Dashboard - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_dashboard.sh", - privileged: false - - # Fix for Issue : https://github.com/kubernetes/kubernetes/issues/57870 - vmCfg.trigger.before :halt do - @machine.communicate.sudo('grep -q "listen-peer-urls" /etc/kubernetes/manifests/etcd.yaml; if [ $? 
-ne 0 ]; then sed -i "s/ - --data-dir=\/var\/lib\/etcd/ - --data-dir=\/var\/lib\/etcd\n - --listen-peer-urls=http:\/\/127.0.0.1:2380/g" /etc/kubernetes/manifests/etcd.yaml; fi') - end - - # Fix for Issue : https://github.com/kubernetes/kubernetes/issues/57870 - vmCfg.trigger.before :suspend do - @machine.communicate.sudo('grep -q "listen-peer-urls" /etc/kubernetes/manifests/etcd.yaml; if [ $? -ne 0 ]; then sed -i "s/ - --data-dir=\/var\/lib\/etcd/ - --data-dir=\/var\/lib\/etcd\n - --listen-peer-urls=http:\/\/127.0.0.1:2380/g" /etc/kubernetes/manifests/etcd.yaml; fi') - end - end - end - end - - # K8s Minions related only !! - 1.upto(KH_NODES.to_i) do |i| - hostname = "kubeminion-%02d" % [i] - cpus = KH_CPUS - mem = KH_MEM - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/k8s-1.9.4-centos" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "kubeminion-%02d-console.log" % [i] )] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem, distro) - - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/prepare_network.sh", - run: "always", - privileged: true - - # Run in dedicated deployment mode or hyperconverged mode - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_HC.to_i)) - - # This runs only when the VM is first provisioned. - # Get the Master IP to join the cluster. - vmCfg.vm.provision :trigger, - :force => true, - :stdout => true, - :stderr => true do |trigger| - trigger.fire do - info"Getting the Master IP and Token to join the cluster..." - master_hostname = "kubemaster-01" - - get_ip_address = %Q(vagrant ssh \ - #{master_hostname} \ - -c 'ip addr show | grep -oP \ - "inet \\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP \"\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1') - - master_ip_address = `#{get_ip_address}` - if master_ip_address == "" - info"The Kubernetes Master is down, \ - bring it up and manually run: \ - configure_k8s_host.sh script on Kubernetes Minion." - else - get_token_name = %Q(vagrant ssh \ - #{master_hostname} -c \ - 'kubectl -n kube-system get secrets \ - | grep bootstrap-token | cut -d " " -f1\ - | cut -d "-" -f3\ - | sed "s|\r||g" \ - ') - - token_name = `#{get_token_name}` - - get_token = %Q(vagrant ssh \ - #{master_hostname} -c \ - 'kubectl -n kube-system get secrets bootstrap-token-#{token_name.strip} \ - -o yaml | grep token-secret | cut -d ":" -f2 \ - | cut -d " " -f2 | base64 -d \ - | sed "s|{||g;s|}||g" \ - | sed "s|:|.|g" | xargs echo') - - token = `#{get_token}` - - if token != "" - get_token_sha = %Q(vagrant ssh \ - #{master_hostname} \ - -c 'openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \ - | openssl rsa -pubin -outform der 2>/dev/null \ - | openssl dgst -sha256 -hex \ - | sed "s/^.* //"') - - token_sha = `#{get_token_sha}` - info"Using Master IP - #{master_ip_address.strip}" - info"Using Token - #{token_name.strip}.#{token.strip}" - info"Using Discovery Token SHA - #{token_sha.strip}" - - @machine.communicate.sudo("bash \ - /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_host.sh \ - --masterip=#{master_ip_address.strip} \ - --token=#{token_name.strip}.#{token.strip} \ - --token-sha=#{token_sha.strip}") - else - info"Invalid Token. Check your Kubernetes setup." 
- end - end - end - end - end - end - end -end diff --git a/k8s/vagrant/1.9.4/ubuntu/Vagrantfile b/k8s/vagrant/1.9.4/ubuntu/Vagrantfile deleted file mode 100644 index 48fa67eaca..0000000000 --- a/k8s/vagrant/1.9.4/ubuntu/Vagrantfile +++ /dev/null @@ -1,286 +0,0 @@ -# -*- mode: ruby -*- -# vi: set ft=ruby : - -# This is parameterized Vagrantfile, that can used for any of the following: -#- Launch VMs auto-configured with kubernetes cluster with dedicated openebs -#- Launch VMs auto-configured with kubernetes cluster with hyperconverged openebs -#- Launch VMs for manual installation of kubernetes or maya clusters or both -# -# The configurable options include: -#- Specify the number of VMs / node types -#- Specify the CPU/RAM for each type of node -#- Specify the desired kubernetes cluster installation option - kubeadm, kargo, manual -#- Specify the base operating system - Ubuntu, CentOS, etc., -#- Specify the kubernetes pod network - flannel, weave, calico, etc,. -#- In case of dedicated, specify the storage network - host, etc., - -# Specify the OpenEBS Deployment Mode - dedicated=1 (default) or hyperconverged=2 -DEPLOY_MODE_NONE = 0 -DEPLOY_MODE_DEDICATED = 1 -DEPLOY_MODE_HC = 2 - -distro=ENV['DISTRIBUTION'] || "ubuntu" -# Changed DEPLOY_MODE from Constant to Local variable deploy_Mode to avoid rewrite warnings on constants. -deploy_Mode=ENV['OPENEBS_DEPLOY_MODE'] || 2 - -# Specify the release versions to be installed -MAYA_RELEASE_TAG = ENV['MAYA_RELEASE_TAG'] || "0.2" - -# TODO - Verify -# LC_ALL is not set on OsX, it will inherit it from SSH on Linux though, -# so might want it to be a conditional -ENV['LC_ALL']="en_US.UTF-8" - -# Specify the number of Kubernetes Master nodes, CPU/RAM per node. -KM_NODES = ENV['KM_NODES'] || 1 -KM_MEM = ENV['KM_MEM'] || 2048 -KM_CPUS = ENV['KM_CPUS'] || 2 - -# Specify the number of Kubernetes Minion nodes, CPU/RAM per node. -KH_NODES = ENV['KH_NODES'] || 2 -KH_MEM = ENV['KH_MEM'] || 2048 -KH_CPUS = ENV['KH_CPUS'] || 2 - -# Specify the number of OpenEBS Master nodes, CPU/RAM per node. (Applicable only for dedicated mode) -MM_NODES = ENV['MM_NODES'] || 1 -MM_MEM = ENV['MM_MEM'] || 1024 -MM_CPUS = ENV['MM_CPUS'] || 1 - -# Specify the number of OpenEBS Storage Host nodes, CPU/RAM per node. (Applicable only for dedicated mode) -MH_NODES = ENV['MH_NODES'] || 2 -MH_MEM = ENV['MH_MEM'] || 1024 -MH_CPUS = ENV['MH_CPUS'] || 1 - -# Don't touch below this line, unless you know what you're doing! - -# Vagrantfile API/syntax version. -VAGRANTFILE_API_VERSION = "2" -Vagrant.require_version ">= 1.9.1" - -# Local Variables -machine_ip_address = %Q(ip addr show | grep -oP \ - "inet addr:\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP "\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1) - -master_ip_address = "" -host_ip_address = "" -get_token_name = "" -token_name = "" -get_token = "" -token = "" - -required_plugins = %w(vagrant-cachier vagrant-triggers) - -required_plugins.each do |plugin| - need_restart = false - unless Vagrant.has_plugin? plugin - system "vagrant plugin install #{plugin}" - need_restart = true - end - exec "vagrant #{ARGV.join(' ')}" if need_restart -end - - -def configureVM(vmCfg, hostname, cpus, mem, distro) - - # Default timeout is 300 sec for boot. - # Uncomment the following line and set the desired timeout value. 
- # vmCfg.vm.boot_timeout = 300 - vmCfg.vm.hostname = hostname - vmCfg.vm.network "private_network", type: "dhcp" - - # Needed for ubuntu/xenial64 packaged boxes - # The default user is ubuntu for those boxes - vmCfg.ssh.username = "ubuntu" - vmCfg.vm.provision "shell", inline: <<-SHELL - echo "ubuntu:ubuntu" | sudo chpasswd - SHELL - - # Adding Vagrant-cachier - if Vagrant.has_plugin?("vagrant-cachier") - vmCfg.cache.scope = :machine - vmCfg.cache.enable :apt - vmCfg.cache.enable :gem - end - - # Set resources w.r.t Virtualbox provider - vmCfg.vm.provider "virtualbox" do |vb| - # Uncomment the following line, to launch the Virtual Box console. - # Useful for debugging cases, where the VM doesn't allow login into console - # vb.gui = true - vb.memory = mem - vb.cpus = cpus - vb.customize ["modifyvm", :id, "--cableconnected1", "on"] - end - - return vmCfg -end - -# Entry point of this Vagrantfile -Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| - - # Don't look for updates - if Vagrant.has_plugin?("vagrant-vbguest") then - config.vbguest.auto_update = false - end - - # Check for invalid deployment modes and default to DEDICATED MODE. - if ((deploy_Mode.to_i < DEPLOY_MODE_NONE.to_i) || \ - (deploy_Mode.to_i > DEPLOY_MODE_HC.to_i)) - puts "Invalid value set for OPENEBS_DEPLOY_MODE" - puts "Usage: OPENEBS_DEPLOY_MODE=0 for NONE" - puts "Usage: OPENEBS_DEPLOY_MODE=1 for DEDICATED" - puts "Usage: OPENEBS_DEPLOY_MODE=2 for HYPERCONVERGED" - puts "Defaulting to DEDICATED MODE..." - puts "Do you want to continue?(y/n):" - input = STDIN.gets.chomp - while 1 do - if(input == "n") - Kernel.exit!(0) - elsif(input == "y") - break - else - puts "Invalid input: type 'y' or 'n'" - input = STDIN.gets.chomp - end - end - deploy_Mode = 1 - end - - # K8s Master related only !! - 1.upto(KM_NODES.to_i) do |i| - hostname = "kubemaster-%02d" % [i] - cpus = KM_CPUS - mem = KM_MEM - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/k8s-1.9.4-ubuntu" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "kubemaster-%02d-console.log" % [i] )] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem, distro) - - # Run in dedicated deployment mode or hyperconverged mode - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_HC.to_i)) - - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/prepare_network.sh", - run: "always", - privileged: true - - # Install Kubernetes Master. - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_master.sh", - privileged: true - - # Setup K8s Credentials - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_cred.sh", - privileged: false - - # Setup CNI - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_cni.sh", - privileged: false - - # Setup Dashboard - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_dashboard.sh", - privileged: false - end - end - end - - # K8s Minions related only !! 
- 1.upto(KH_NODES.to_i) do |i| - hostname = "kubeminion-%02d" % [i] - cpus = KH_CPUS - mem = KH_MEM - config.vm.define hostname do |vmCfg| - vmCfg.vm.box = "openebs/k8s-1.9.4-ubuntu" - vmCfg.vm.provider "virtualbox" do |vb| - vb.customize ["modifyvm", :id, "--uartmode1", "file", File.join(Dir.pwd, "kubeminion-%02d-console.log" % [i] )] - end - vmCfg = configureVM(vmCfg, hostname, cpus, mem, distro) - - vmCfg.vm.provision :shell, - inline: "/bin/bash /home/#{vmCfg.ssh.username}/setup/k8s/prepare_network.sh", - run: "always", - privileged: true - - # Run in dedicated deployment mode or hyperconverged mode - if ((deploy_Mode.to_i == DEPLOY_MODE_DEDICATED.to_i) || \ - (deploy_Mode.to_i == DEPLOY_MODE_HC.to_i)) - - # This runs only when the VM is first provisioned. - # Get the Master IP to join the cluster. - vmCfg.vm.provision :trigger, - :force => true, - :stdout => true, - :stderr => true do |trigger| - trigger.fire do - info"Getting the Master IP and Token to join the cluster..." - master_hostname = "kubemaster-01" - - get_ip_address = %Q(vagrant ssh \ - #{master_hostname} \ - -c 'ip addr show | grep -oP \ - "inet \\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | grep -oP \"\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}" \ - | sort | tail -n 1 | head -n 1') - - master_ip_address = `#{get_ip_address}` - if master_ip_address == "" - info"The Kubernetes Master is down, \ - bring it up and manually run: \ - configure_k8s_host.sh script on Kubernetes Minion." - else - get_token_name = %Q(vagrant ssh \ - #{master_hostname} -c \ - 'kubectl -n kube-system get secrets \ - | grep bootstrap-token | cut -d " " -f1\ - | cut -d "-" -f3\ - | sed "s|\r||g" \ - ') - - token_name = `#{get_token_name}` - - get_token = %Q(vagrant ssh \ - #{master_hostname} -c \ - 'kubectl -n kube-system get secrets bootstrap-token-#{token_name.strip} \ - -o yaml | grep token-secret | cut -d ":" -f2 \ - | cut -d " " -f2 | base64 -d \ - | sed "s|{||g;s|}||g" \ - | sed "s|:|.|g" | xargs echo') - - token = `#{get_token}` - - if token != "" - get_token_sha = %Q(vagrant ssh \ - #{master_hostname} \ - -c 'openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \ - | openssl rsa -pubin -outform der 2>/dev/null \ - | openssl dgst -sha256 -hex \ - | sed "s/^.* //"') - - token_sha = `#{get_token_sha}` - info"Using Master IP - #{master_ip_address.strip}" - info"Using Token - #{token_name.strip}.#{token.strip}" - info"Using Discovery Token SHA - #{token_sha.strip}" - - @machine.communicate.sudo("bash \ - /home/#{vmCfg.ssh.username}/setup/k8s/configure_k8s_host.sh \ - --masterip=#{master_ip_address.strip} \ - --token=#{token_name.strip}.#{token.strip} \ - --token-sha=#{token_sha.strip}") - else - info"Invalid Token. Check your Kubernetes setup." - end - end - end - end - end - end - end - end diff --git a/k8s/vagrant/CentOS_Vagrant.md b/k8s/vagrant/CentOS_Vagrant.md deleted file mode 100644 index cd03b5457b..0000000000 --- a/k8s/vagrant/CentOS_Vagrant.md +++ /dev/null @@ -1,349 +0,0 @@ -# Installing Kubernetes Cluster on CentOS 7.4 in Vagrant VMs - -We will be setting up a 3 node cluster comprising of 1 Master and 2 Worker Nodes running Kubernetes 1.8.5. - -## Prerequisites - -Verify that you have the following software installed on your machine: - -```bash -1. Vagrant (>=1.9.1) -2. VirtualBox 5.1 -``` - -## Create and Edit Vagrantfile for CentOS - -Run the following commands to create a Vagrantfile for CentOS. 
-
-```bash
-host-machine:~$ mkdir k8s-demo
-host-machine:~$ cd k8s-demo
-host-machine:~/k8s-demo$ vagrant init centos/7
-A `Vagrantfile` has been placed in this directory. You are now
-ready to `vagrant up` your first virtual environment! Please read
-the comments in the Vagrantfile as well as documentation on
-`vagrantup.com` for more information on using Vagrant.
-
-```
-
-Edit the generated Vagrantfile to look similar to the following:
-
-```bash
-# -*- mode: ruby -*-
-# vi: set ft=ruby :
-
-# All Vagrant configuration is done below. The "2" in Vagrant.configure
-# configures the configuration version (we support older styles for
-# backwards compatibility). Please don't change it unless you know what
-# you're doing.
-Vagrant.configure("2") do |config|
-  # The most common configuration options are documented and commented below.
-  # For a complete reference, please see the online documentation at
-  # https://docs.vagrantup.com.
-
-  config.vm.box = "centos/7"
-
-  config.vm.provider "virtualbox" do |vb|
-    vb.cpus = 2
-    vb.memory = "2048"
-  end
-
-  config.vm.define "master" do |vmCfg|
-    vmCfg.vm.hostname = "master"
-    vmCfg.vm.network "private_network", ip: "172.28.128.31"
-  end
-
-  config.vm.define "worker-01" do |vmCfg|
-    vmCfg.vm.hostname = "worker-01"
-    vmCfg.vm.network "private_network", ip: "172.28.128.32"
-  end
-
-  config.vm.define "worker-02" do |vmCfg|
-    vmCfg.vm.hostname = "worker-02"
-    vmCfg.vm.network "private_network", ip: "172.28.128.33"
-  end
-end
-
-```
-
-## Verify
-
-Verify the state of the Vagrant VMs. You should see output similar to this:
-
-```bash
-host-machine:~/k8s-demo$ vagrant status
-Current machine states:
-
-master                    not created (VirtualBox)
-worker-01                 not created (VirtualBox)
-worker-02                 not created (VirtualBox)
-
-This environment represents multiple VMs. The VMs are all listed
-above with their current state. For more information about a specific
-VM, run `vagrant status NAME`.
-```
-
-## Bringing up Vagrant VMs
-
-Just use *vagrant up* to bring up the VMs.
-
-```bash
-host-machine:~/k8s-demo$ vagrant up
-```
-
-Verify the state of the Vagrant VMs again. You should see output similar to this:
-
-```bash
-host-machine:~/k8s-demo$ vagrant status
-Current machine states:
-
-master                    running (VirtualBox)
-worker-01                 running (VirtualBox)
-worker-02                 running (VirtualBox)
-
-This environment represents multiple VMs. The VMs are all listed
-above with their current state. For more information about a specific
-VM, run `vagrant status NAME`.
-```
-
-## Before you begin
-
-- SSH into each of the Vagrant VMs and perform the following steps:
-
-  - Update the /etc/hosts file. Your hosts file must look similar to the following:
-
-    ```bash
-    For Master /etc/hosts:
-    ---------------------
-    172.28.128.31 master master
-    127.0.0.1 localhost
-
-    For Worker-01 /etc/hosts:
-    ------------------------
-    172.28.128.32 worker-01 worker-01
-    127.0.0.1 localhost
-
-    For Worker-02 /etc/hosts:
-    ------------------------
-    172.28.128.33 worker-02 worker-02
-    127.0.0.1 localhost
-    ```
-
-  - Update the /etc/resolv.conf file. Your resolv.conf file must look similar to the following:
-
-    ```bash
-    # Generated by NetworkManager
-    search domain.name
-    nameserver 8.8.8.8
-    ```
-
-  - Disable swap. You MUST disable swap for the kubelet to work properly. (For Kubernetes 1.8 and above.)
-
-    ```bash
-    [vagrant@master ~]$ sudo swapoff -a
-    ```
-
-  - With swap disabled, comment out the lines containing "swap" in /etc/fstab:
-
-    ```bash
-    [vagrant@master ~]$ sudo vi /etc/fstab
-    #
-    # /etc/fstab
-    # Created by anaconda on Sat Oct 28 11:03:00 2017
-    #
-    # Accessible filesystems, by reference, are maintained under '/dev/disk'
-    # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
-    #
-    /dev/mapper/VolGroup00-LogVol00 / xfs defaults 0 0
-    UUID=8ffa0ee9-e1a8-4c03-acce-b65b342c6935 /boot xfs defaults 0 0
-
-    # The line below was commented out because it contained swap.
-    #/dev/mapper/VolGroup00-LogVol01 swap swap defaults 0 0
-    ```
-
-  - On each of your vagrant machines, install Docker. Refer to the [official Docker installation guides](https://docs.docker.com/engine/installation/linux/docker-ce/centos/).
-
-  - Once the Docker installation is complete, execute the following to enable and start the docker service.
-
-    ```bash
-    sudo systemctl enable docker && sudo systemctl start docker
-    ```
-
-  - Set up the Kubernetes repo details for installing the Kubernetes binaries.
-
-    ```bash
-    sudo tee -a /etc/yum.repos.d/kubernetes.repo <<EOF >/dev/null
-    [kubernetes]
-    name=Kubernetes
-    baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
-    enabled=1
-    gpgcheck=1
-    repo_gpgcheck=1
-    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
-    EOF
-    ```
-
-  - Disable SELinux. You have to do this until SELinux support is improved in the kubelet.
-
-    ```bash
-    # Disable SELinux by running setenforce 0
-    # This is required to allow containers to access the host filesystem required by the pod networks.
-    sudo setenforce 0
-    ```
-
-  - Ensure the iptables flags in the _sysctl_ configuration are set to 1.
-
-    ```bash
-    sudo tee -a /etc/sysctl.d/k8s.conf <<EOF >/dev/null
-    net.bridge.bridge-nf-call-ip6tables = 1
-    net.bridge.bridge-nf-call-iptables = 1
-    EOF
-    ```
-
-  - Reload the system configuration.
-
-    ```bash
-    sudo sysctl --system
-    ```
-
-  - Install kubeadm, kubelet, and kubectl.
-
-    ```bash
-    sudo yum install -y kubelet-1.8.5-0 kubeadm-1.8.5-0 kubectl-1.8.5-0
-    ```
-
-  - Ensure the _--cgroup-driver_ kubelet flag is set to the same value as Docker's.
-
-    ```bash
-    sudo sed -i -E 's/--cgroup-driver=systemd/--cgroup-driver=cgroupfs/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-    ```
-
-  - Execute the following to enable and start the kubelet service.
-
-    ```bash
-    sudo systemctl enable kubelet && sudo systemctl start kubelet
-    ```
-
-## Create Cluster using kubeadm
-
-- Perform the following operations on the __Master Node__.
-
-   1. Install wget.
-
-      ```bash
-      sudo yum install -y wget
-      ```
-   2. Download and configure the JSON parser _jq_.
-
-      ```bash
-      wget -O jq https://github.com/stedolan/jq/releases/download/jq-1.5/jq-linux64
-      chmod +x ./jq
-      sudo mv jq /usr/bin
-      ```
-   3. Initialize your master. (172.28.128.31 is the master IP assigned in the Vagrantfile above.)
-
-      ```bash
-      sudo kubeadm init --apiserver-advertise-address=172.28.128.31
-      ```
-   4. Configure the Kubernetes config.
-
-      ```bash
-      mkdir -p $HOME/.kube
-      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
-      sudo chown $(id -u):$(id -g) $HOME/.kube/config
-      ```
-   5. Patch _kube-proxy_ for CNI Networks.
-
-      ```bash
-      kubectl -n kube-system get ds -l 'k8s-app=kube-proxy' -o json \
-        | jq '.items[0].spec.template.spec.containers[0].command |= .+ ["--proxy-mode=userspace"]' \
-        | kubectl apply -f - \
-        && kubectl -n kube-system delete pods -l 'k8s-app=kube-proxy'
-      ```
-   6. Install the Pod Network - Weave.
-
-      ```bash
-      export kubever=$(kubectl version | base64 | tr -d '\n')
-      kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
-      ```
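The `kubeadm join` command on the worker nodes (next step) needs the bootstrap token and the discovery CA certificate hash from the master. A minimal sketch of fetching them manually on the master, for illustration only: `kubeadm token list` is standard kubeadm, and the openssl pipeline is the same one scripted in the Vagrantfiles deleted above.

```bash
# List the bootstrap tokens created by `kubeadm init`; the TOKEN column
# supplies the --token value for `kubeadm join`.
kubeadm token list

# Derive the discovery token CA cert hash from the cluster CA certificate
# (the same pipeline the deleted Vagrantfiles run over /etc/kubernetes/pki/ca.crt).
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```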
-
-- Perform the following operations on the __Worker Nodes__.
-
-   1. Join the cluster. (Substitute the token, master IP/port, and CA cert hash; see the sketch above for retrieving them.)
-
-      ```bash
-      sudo kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
-      ```
-   2. Install the iSCSI initiator utilities.
-
-      ```bash
-      sudo yum install -y iscsi-initiator-utils
-      ```
-
-   3. Execute the following to enable and start the iscsid service.
-
-      ```bash
-      sudo systemctl enable iscsid && sudo systemctl start iscsid
-      ```
-
-   *Note: OpenEBS uses iSCSI to connect to the block volumes, so steps 2 and 3 are required to configure an initiator on the worker nodes.*
-
-## Setting Up OpenEBS Volume Provisioner
-
-- Download the _openebs-operator.yaml_ and _openebs-storageclasses.yaml_ files to the Kubernetes Master.
-
-```bash
-wget https://raw.githubusercontent.com/openebs/openebs/master/k8s/openebs-operator.yaml
-wget https://raw.githubusercontent.com/openebs/openebs/master/k8s/openebs-storageclasses.yaml
-```
-
-- Apply the openebs-operator.yaml file on the Kubernetes cluster. This creates the maya api-server and OpenEBS provisioner deployments.
-
-```bash
-kubectl apply -f openebs-operator.yaml
-```
-
-- Add the OpenEBS storage classes using the following command. Users can then map a suitable storage profile for their applications in their respective persistent volume claims.
-
-```bash
-kubectl apply -f openebs-storageclasses.yaml
-```
-
-- Check whether the deployments are running successfully using the following command.
-
-```bash
-vagrant@master:~$ kubectl get deployments
-NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
-maya-apiserver        1         1         1            1           2m
-openebs-provisioner   1         1         1            1           2m
-```
-
-- Check whether the pods are running successfully using the following command.
-
-```bash
-vagrant@master:~$ kubectl get pods
-NAME                                   READY     STATUS    RESTARTS   AGE
-maya-apiserver-1633167387-5ss2w        1/1       Running   0          24s
-openebs-provisioner-1174174075-f2ss6   1/1       Running   0          23s
-```
-
-- Check whether the storage classes are applied successfully using the following command.
-
-```bash
-vagrant@master:~$ kubectl get sc
-NAME                 TYPE
-openebs-cassandra    openebs.io/provisioner-iscsi
-openebs-es-data-sc   openebs.io/provisioner-iscsi
-openebs-jupyter      openebs.io/provisioner-iscsi
-openebs-kafka        openebs.io/provisioner-iscsi
-openebs-mongodb      openebs.io/provisioner-iscsi
-openebs-percona      openebs.io/provisioner-iscsi
-openebs-redis        openebs.io/provisioner-iscsi
-openebs-standalone   openebs.io/provisioner-iscsi
-openebs-standard     openebs.io/provisioner-iscsi
-openebs-zk           openebs.io/provisioner-iscsi
-```
-
-## Running Stateful Workloads using OpenEBS
-
-- Some sample YAML files for stateful workloads using OpenEBS are provided [here](https://github.com/openebs/openebs/tree/master/k8s/demo).
-- For more information, visit the [OpenEBS Documentation](https://docs.openebs.io/).
diff --git a/k8s/vagrant/README.md b/k8s/vagrant/README.md
deleted file mode 100644
index 9a49468d67..0000000000
--- a/k8s/vagrant/README.md
+++ /dev/null
@@ -1,163 +0,0 @@
-# Installing Kubernetes Clusters on Ubuntu 16.04 using Vagrant
-
-OpenEBS provides vagrant boxes with prepackaged Kubernetes images. Different vagrant boxes are available depending on the Kubernetes release, and the Vagrantfiles are organized here based on the Kubernetes version used by the box.
-
-This Vagrantfile can be used on any machine with virtualization *enabled*, such as a laptop or a bare-metal server.
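The Vagrantfiles removed in this change were parameterized through environment variables, which the README below never demonstrates. A hedged usage sketch: the variable names (OPENEBS_DEPLOY_MODE, KM_NODES, KH_NODES, KH_MEM, MM_NODES, MH_NODES, MAYA_RELEASE_TAG) come from the deleted Vagrantfiles above, while the values shown are only illustrative defaults.

```bash
# Hyperconverged mode (OPENEBS_DEPLOY_MODE=2, the Vagrantfiles' default):
# OpenEBS runs on the Kubernetes nodes themselves. One master, two minions.
OPENEBS_DEPLOY_MODE=2 KM_NODES=1 KH_NODES=2 KH_MEM=2048 vagrant up

# Dedicated mode (OPENEBS_DEPLOY_MODE=1) additionally brings up separate
# Maya master (MM_*) and storage host (MH_*) VMs.
OPENEBS_DEPLOY_MODE=1 MM_NODES=1 MH_NODES=2 MAYA_RELEASE_TAG=0.2 vagrant up
```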
-
-Procedures listed in this section will help you:
-
-- Verify prerequisites
-- Download the Vagrantfile
-- Setup the Kubernetes Cluster
-- Install kubectl on the host
-- Setup access to the Kubernetes UI/Dashboard (Vagrantfile version 1.7.5 onwards)
-- Setup OpenEBS
-- Launch a demo pod
-
-*Note: The instructions are from an Ubuntu 16.04 host.*
-
-## Prerequisites
-
-Verify that you have the following software installed on your Ubuntu 16.04 machine:
-```
-1. Vagrant (>= 1.9.1)
-2. VirtualBox 5.1
-```
-
-## Download and Verify
-
-Download the required Vagrantfile. Use curl, wget, git, and so on to download it. This example uses wget.
-
-```
-mkdir k8s-demo
-cd k8s-demo
-wget https://raw.githubusercontent.com/openebs/openebs/master/k8s/vagrant/1.7.5/Vagrantfile
-vagrant status
-```
-
-### Verify
-
-You should see output similar to this:
-```
-ubuntu-host:~/k8s-demo$ vagrant status
-Current machine states:
-
-kubemaster-01             not created (VirtualBox)
-kubeminion-01             not created (VirtualBox)
-kubeminion-02             not created (VirtualBox)
-
-This environment represents multiple VMs. The VMs are all listed
-above with their current state. For more information about a specific
-VM, run `vagrant status NAME`.
-```
-
-## Bringing up the K8s Cluster
-
-Use *vagrant up* to bring up the cluster.
-
-```
-ubuntu-host:~/k8s-demo$ vagrant up
-```
-
-### Verify
-
-The output displayed will be similar to the following:
-```
-kiran@kmaya:~/k8s-demo$ vagrant status
-Current machine states:
-
-kubemaster-01             running (VirtualBox)
-kubeminion-01             running (VirtualBox)
-kubeminion-02             running (VirtualBox)
-
-This environment represents multiple VMs. The VMs are all listed
-above with their current state. For more information about a specific
-VM, run `vagrant status NAME`.
-kiran@kmaya:~/k8s-demo$
-```
-
-## Install kubectl on the Host
-
-Follow the procedure for [installing kubectl from binary](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-via-curl).
-
-```
-curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
-chmod +x ./kubectl
-sudo mv ./kubectl /usr/local/bin/kubectl
-```
-
-### Verify
-
-```
-kiran@kmaya:~/k8s-demo$ kubectl version
-Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T09:14:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
-The connection to the server localhost:8080 was refused - did you specify the right host or port?
-kiran@kmaya:~/k8s-demo$
-```
-
-The connection error is expected. The next step configures kubectl to contact the Kubernetes cluster.
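Once the config file is fetched in the next section, an alternative to passing `--kubeconfig` on every invocation is to export it through the standard `KUBECONFIG` environment variable. A minimal sketch, assuming the `demo-kube-config` file name used in this guide and the current working directory:

```bash
# Point this shell session at the fetched config (created in the next section).
export KUBECONFIG=$PWD/demo-kube-config

# Subsequent kubectl calls in this shell target the vagrant cluster
# without needing --kubeconfig on each command.
kubectl version
kubectl get nodes
```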
-
-## Configure kube-config from the Installed Cluster
-
-```
-vagrant ssh kubemaster-01 -c "cat ~/.kube/config" > demo-kube-config
-```
-
-*Note: If you have a single Kubernetes cluster on your host, you can copy demo-kube-config to ~/.kube/config and avoid specifying the --kubeconfig parameter in the kubectl commands.*
-
-### Verify
-
-```
-kiran@kmaya:~/k8s-demo$ kubectl --kubeconfig ./demo-kube-config version
-Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T09:14:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
-Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T08:56:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
-kiran@kmaya:~/k8s-demo$
-```
-
-## Setup Access to Kubernetes UI
-
-```
-kiran@kmaya:~/k8s-demo$ kubectl --kubeconfig ./demo-kube-config proxy
-Starting to serve on 127.0.0.1:8001
-```
-
-### Verify
-
-Launch the URL `http://127.0.0.1:8001/ui`.
-
-**Your local Kubernetes cluster with the dashboard is ready. The steps below are required only if you would like to run stateful applications with OpenEBS.**
-
-## Setup OpenEBS
-
-Fetch the latest *openebs-operator.yaml* and *openebs-storageclasses.yaml* from [github - openebs/openebs](../).
-
-```
-wget https://raw.githubusercontent.com/openebs/openebs/master/k8s/openebs-operator.yaml
-wget https://raw.githubusercontent.com/openebs/openebs/master/k8s/openebs-storageclasses.yaml
-```
-
-Load the OpenEBS operator and storage classes onto your Kubernetes cluster:
-
-```
-kubectl --kubeconfig ./demo-kube-config apply -f openebs-operator.yaml
-kubectl --kubeconfig ./demo-kube-config apply -f openebs-storageclasses.yaml
-```
-
-### Verify
-
-On a successful run of the above commands, you will see output like the following:
-
-```
-kiran@kmaya:~/k8s-demo$ kubectl --kubeconfig ./demo-kube-config apply -f openebs-operator.yaml
-serviceaccount "openebs-maya-operator" created
-clusterrole "openebs-maya-operator" created
-clusterrolebinding "openebs-maya-operator" created
-deployment "maya-apiserver" created
-service "maya-apiserver-service" created
-deployment "openebs-provisioner" created
-kiran@kmaya:~/k8s-demo$ kubectl --kubeconfig ./demo-kube-config apply -f openebs-storageclasses.yaml
-storageclass "openebs-standard" created
-storageclass "openebs-percona" created
-storageclass "openebs-jupyter" created
-kiran@kmaya:~/k8s-demo$
-```
-
diff --git a/translations/CONTRIBUTING.ar.md b/translations/CONTRIBUTING.ar.md
index 46cb5288a7..aef6983a19 100644
--- a/translations/CONTRIBUTING.ar.md
+++ b/translations/CONTRIBUTING.ar.md
@@ -39,7 +39,7 @@
   [this](./contribute/labels-of-issues.md).
 * للمساهمة في K8s demo, يرجى الرجوع إلى هذا [document](./contribute/CONTRIBUTING-TO-K8S-DEMO.md).
-  - للتحقق من كيفية عمل OpenEBS مع K8s, الرجوع إلى هذا [document](./k8s/README.md)
+  - للتحقق من كيفية عمل OpenEBS مع K8s, الرجوع إلى هذا [document](https://openebs.io/docs)
 - للمساهمة في Kubernetes OpenEBS Provisioner ، يرجى الرجوع إلى هذا [document](./contribute/CONTRIBUTING-TO-KUBERNETES-OPENEBS-PROVISIONER.md).
 الرجوع إلى هذا [document](./contribute/design/code-structuring.md) لمزيد من المعلومات حول هيكلة الكود والمبادئ التوجيهية لاتباعها.
diff --git a/translations/CONTRIBUTING.es.md b/translations/CONTRIBUTING.es.md index 1e5932c3c7..9fbbe868e4 100644 --- a/translations/CONTRIBUTING.es.md +++ b/translations/CONTRIBUTING.es.md @@ -8,10 +8,14 @@ Sin embargo, para aquellas personas que quieren un poco más de orientación sob Dicho esto, OpenEBS es una innovación en código abierto. Le invitamos a contribuir de cualquier manera que pueda, y toda la ayuda proporcionada es muy apreciada. -- [Plantee problemas para solicitar nuevas funciones, corregir documentación o informar errores.](#raising-issues) -- [Envíe cambios para mejorar la documentación.](#submit-change-to-improve-documentation) -- [Envíe propuestas para nuevas funciones / mejoras.](#submit-proposals-for-new-features) -- [Resuelva problemas existentes relacionados con la documentación o el código.](#contributing-to-source-code-and-bug-fixes) +- [Contribuir a OpenEBS](#contribuir-a-openebs) + - [Aumento de problemas](#aumento-de-problemas) + - [Enviar cambios para mejorar la documentación](#enviar-cambios-para-mejorar-la-documentación) + - [Enviar propuestas para nuevas características](#enviar-propuestas-para-nuevas-características) + - [Contribuir al código fuente y a las correcciones de errores](#contribuir-al-código-fuente-y-a-las-correcciones-de-errores) + - [Resolver problemas existentes](#resolver-problemas-existentes) + - [Firma tu trabajo](#firma-tu-trabajo) + - [Unirse a nuestra comunidad](#unirse-a-nuestra-comunidad) Hay algunas pautas simples que debe seguir antes de proporcionar sus hacks. @@ -39,7 +43,7 @@ Siempre hay algo más que se requiere para que sea más fácil adaptarse a sus c Proporcione a los archivos P etiquetas las etiquetas adecuadas para correcciones de errores o mejoras en el código fuente. Para obtener una lista de las etiquetas que se podrían utilizar, consulte [this](./contribute/labels-of-issues.md). * Para contribuir a la demostración de K8s, consulte este [documento](./contribute/CONTRIBUTING-TO-K8S-DEMO.md). - - Para comprobar cómo funciona OpenEBS con K8, consulte este [documento](./k8s/README.md) + - Para comprobar cómo funciona OpenEBS con K8, consulte este [documento](https://openebs.io/docs) - Para contribuir a Kubernetes OpenEBS Provisioner, consulte este [documento](./contribute/CONTRIBUTING-TO-KUBERNETES-OPENEBS-PROVISIONER.md). Consulte este [documento](./contribute/design/code-structuriing.md) para obtener más información sobre la estructuración de código y las directrices a seguir en el mismo. diff --git a/translations/CONTRIBUTING.fr.md b/translations/CONTRIBUTING.fr.md index 604f407481..3c62fc9318 100644 --- a/translations/CONTRIBUTING.fr.md +++ b/translations/CONTRIBUTING.fr.md @@ -8,10 +8,14 @@ Cependant, pour les personnes qui souhaitent un peu plus de conseils sur la meil Cela dit, OpenEBS est une innovation en Open Source. Vous êtes invités à contribuer de toutes les manières possibles et toute l'aide fournie est très appréciée. 
-- [Créer des issues pour demander de nouvelles fonctionnalités, corriger la documentation ou signaler des bogues.](#Créer-des-issues) -- [Soumettre des modifications pour améliorer la documentation.](#Soumettre-des-modifications-pour-améliorer-la-documentation) -- [Soumettre des propositions pour de nouvelles fonctionnalités / améliorations.](#Soumettre-des-propositions-pour-de-nouvelles-fonctionnalités) -- [Résoudre les problèmes existants liés à la documentation ou au code.](#Contribuer-au-code-source-et-aux-corrections-de-bogues) +- [Contribuer à OpenEBS](#contribuer-à-openebs) + - [Créer des issues](#créer-des-issues) + - [Soumettre des modifications pour améliorer la documentation](#soumettre-des-modifications-pour-améliorer-la-documentation) + - [Soumettre des propositions pour de nouvelles fonctionnalités](#soumettre-des-propositions-pour-de-nouvelles-fonctionnalités) + - [Contribuer au code source et aux corrections de bogues](#contribuer-au-code-source-et-aux-corrections-de-bogues) + - [Résoudre les problèmes existants](#résoudre-les-problèmes-existants) + - [Signez votre travail](#signez-votre-travail) + - [Rejoignez notre communauté](#rejoignez-notre-communauté) Il y a quelques directives simples que vous devez suivre avant de fournir vos hacks. @@ -39,7 +43,7 @@ Il y a toujours quelque chose de plus qui est nécessaire, pour faciliter l'adap Fournissez aux PR des tags appropriés pour les corrections de bogues ou les améliorations du code source. Pour une liste des balises qui pourraient être utilisées, voir [ceci](/contribute/labels-of-issues.md). * Pour contribuer à la démo de K8, veuillez vous référer à ce [document](/contribute/CONTRIBUTING-TO-K8S-DEMO.md). - - Pour savoir comment OpenEBS fonctionne avec K8, reportez-vous à ce [document](./k8s/README.md) + - Pour savoir comment OpenEBS fonctionne avec K8, reportez-vous à ce [document](https://openebs.io/docs) - Pour contribuer à Kubernetes OpenEBS Provisioner, veuillez vous référer à ce [document](/contribute/CONTRIBUTING-TO-KUBERNETES-OPENEBS-PROVISIONER.md). Reportez-vous à ce [document](/contribute/design/code-structuring.md) pour plus d'informations sur la structuration du code et les directives à suivre. diff --git a/translations/CONTRIBUTING.gu.md b/translations/CONTRIBUTING.gu.md index 11edbf1207..e9a6f74973 100644 --- a/translations/CONTRIBUTING.gu.md +++ b/translations/CONTRIBUTING.gu.md @@ -39,7 +39,7 @@ Source code માં સુધારાઓ અથવા વૃદ્ધિ માટે યોગ્ય ટૅગ્સ સાથે PR પ્રદાન કરો. ઉપયોગ કરી શકાય તેવા ટૅગ્સની સૂચિ માટે, જુઓ [this](./contribute/labels-of-issues.md). * K8s demo ને ફાળો આપવા માટે, કૃપા કરીને આનો સંદર્ભ લો [document](./contribute/CONTRIBUTING-TO-K8S-DEMO.md). - - કેવી રીતે OpenEBS કામ કરે છે K8s સાથે, તે તપાસવા માટે આ નો સંદર્ભ લો [document](./k8s/README.md) + - કેવી રીતે OpenEBS કામ કરે છે K8s સાથે, તે તપાસવા માટે આ નો સંદર્ભ લો [document](https://openebs.io/docs) - Kubernetes OpenEBS Provisioner ફાળો આપવા માટે, કૃપા કરીને આનો સંદર્ભ લો [document](./contribute/CONTRIBUTING-TO-KUBERNETES-OPENEBS-PROVISIONER.md). આનો સંદર્ભ લો [document](./contribute/design/code-structuring.md) code structuring અને guidelines ફોલ્લૉ કરવા માટે. 
diff --git a/translations/CONTRIBUTING.hi.md b/translations/CONTRIBUTING.hi.md index 9e5d087e93..92677f5456 100644 --- a/translations/CONTRIBUTING.hi.md +++ b/translations/CONTRIBUTING.hi.md @@ -41,7 +41,7 @@ - K8s डेमो में योगदान देने के लिए, कृपया इस [दस्तावेज़](./contribute/CONTRIBUTING-TO-K8S-DEMO.md) को देखें। - - OpenEBS K8s के साथ कैसे काम करता है, इसकी जाँच के लिए, इस [दस्तावेज](./k8s/README.md) का संदर्भ लें। + - OpenEBS K8s के साथ कैसे काम करता है, इसकी जाँच के लिए, इस [दस्तावेज](https://openebs.io/docs) का संदर्भ लें। - कुबेरनेट्स ओपनईबीएस प्रोविजनर के लिए योगदान करने के लिए, कृपया इस [दस्तावेज़](./contribute/CONTRIBUTING-TO-KUBERNETES-OPENEBS-PROVISIONER.md) को देखें। diff --git a/translations/CONTRIBUTING.id.md b/translations/CONTRIBUTING.id.md index d13b5689f6..86e65f96eb 100644 --- a/translations/CONTRIBUTING.id.md +++ b/translations/CONTRIBUTING.id.md @@ -40,7 +40,7 @@ Ada sesuatu yang lebih penting dari membuat sebuah fitur baru, membuatnya mudah Lampirkan PR dengan tag yang sesuai untuk fix bug atau memperbaharui _source code_. Untuk list tag, dapat dilihat di halaman [ini](./contribute/labels-of-issues.md). - Untuk kontribusi pada K8s demo, silahkan lihat [document ini](./contribute/CONTRIBUTING-TO-K8S-DEMO.md). - - Untuk melihat apa yang sedang OpenEBS kerjakan pda K8s, lihat [dokumen ini](./k8s/README.md). + - Untuk melihat apa yang sedang OpenEBS kerjakan pda K8s, lihat [dokumen ini](https://openebs.io/docs). * Untuk kontribusi pada _Kubernetes OpenEBS Provisioner_, silihkan lihat [document ini](./contribute/CONTRIBUTING-TO-KUBERNETES-OPENEBS-PROVISIONER.md). diff --git a/translations/CONTRIBUTING.ko.md b/translations/CONTRIBUTING.ko.md index e073c68672..46a6c411d6 100644 --- a/translations/CONTRIBUTING.ko.md +++ b/translations/CONTRIBUTING.ko.md @@ -39,7 +39,7 @@ OpenEBS는 오픈 소스의 혁신입니다. 어떤 방식으로든 기여하실 버그 수정이나 소스 코드 개선을 위한 적절한 태그를 PR에 제공하세요. 사용할 수 있는 태그 목록은 [이](./contribute/labels-of-issues.md)를 참조하세요. - K8s 데모에 기여하려면 이 [문서](./contribute/CONTRIBUTING-TO-K8S-DEMO.md)를 참조하세요. - - K8s에서 OpenEBS가 어떻게 작동하는지 확인하려면 이 [문서](./k8s/README.md)를 참조하세요. + - K8s에서 OpenEBS가 어떻게 작동하는지 확인하려면 이 [문서](https://openebs.io/docs)를 참조하세요. * Kubernetes OpenEBS Provisioner에 기여하려면 이 [문서](./contribute/CONTRIBUTING-TO-KUBERNETES-OPENEBS-PROVISIONER.md)를 참조하세요. diff --git a/translations/CONTRIBUTING.mal.md b/translations/CONTRIBUTING.mal.md index 9f11ea91f6..949fbaa01c 100644 --- a/translations/CONTRIBUTING.mal.md +++ b/translations/CONTRIBUTING.mal.md @@ -40,7 +40,7 @@ ബഗ് പരിഹരിക്കലുകൾക്കോ ഉറവിട കോഡിലേക്കുള്ള മെച്ചപ്പെടുത്തലുകൾക്കോ ഉചിതമായ ടാഗുകൾ ഉപയോഗിച്ച് PR- കൾ നൽകുക. ഉപയോഗിക്കാവുന്ന ടാഗുകളുടെ ഒരു ലിസ്റ്റിനായി, [ഇത്](./contribute/labels-of-issues.md) കാണുക. * കെ 8 എസ് ഡെമോയിലേക്ക് സംഭാവന ചെയ്യുന്നതിന്, ദയവായി ഇത് റഫർ ചെയ്യുക [പ്രമാണം](./contribute/CONTRIBUTING-TO-K8S-DEMO.md). - - കെ 8 കളിൽ ഓപ്പൺഇബിഎസ് എങ്ങനെ പ്രവർത്തിക്കുന്നുവെന്ന് പരിശോധിക്കുന്നതിന്, ഇത് പരിശോധിക്കുക [പ്രമാണം](./k8s/README.md) + - കെ 8 കളിൽ ഓപ്പൺഇബിഎസ് എങ്ങനെ പ്രവർത്തിക്കുന്നുവെന്ന് പരിശോധിക്കുന്നതിന്, ഇത് പരിശോധിക്കുക [പ്രമാണം](https://openebs.io/docs) - കുബേർനെറ്റ്സ് ഓപ്പൺഇബിഎസ് പ്രൊവിഷനറിലേക്ക് സംഭാവന ചെയ്യുന്നതിന്, ദയവായി ഇത് റഫർ ചെയ്യുക [പ്രമാണം](./contribute/CONTRIBUTING-TO-KUBERNETES-OPENEBS-PROVISIONER.md). ഇത് റഫർ ചെയ്യുക [പ്രമാണം](./contribute/design/code-structuring.md) കോഡ് ഘടനയെക്കുറിച്ചും അതേക്കുറിച്ച് പിന്തുടരേണ്ട മാർഗ്ഗനിർദ്ദേശങ്ങളെക്കുറിച്ചും കൂടുതൽ വിവരങ്ങൾക്ക്. 
diff --git a/translations/CONTRIBUTING.np.md b/translations/CONTRIBUTING.np.md index f5e9c27290..d64d0ee493 100644 --- a/translations/CONTRIBUTING.np.md +++ b/translations/CONTRIBUTING.np.md @@ -40,7 +40,7 @@ स्रोत कोडमा बग फिक्सहरू वा संवर्द्धनका लागि उचित ट्यागहरूको साथ PR प्रदान गर्नुहोस्। प्रयोग गर्न सकिने ट्यागहरूको सूचीका लागि [this](./contribute/labels-of-issues.md) हेर्नुहोस्। - K8s डेमोमा योगदानका लागि, कृपया यो [कागजात](./contribute/CONTRIBUTING-TO-K8S-DEMO.md) मा सन्दर्भ गर्नुहोस्। - - कसरी KEs का साथ ओपनईबीएस ले काम गर्दछ भनेर जाँच गर्नका लागि यस [कागजात](./k8s/README.md) मा सन्दर्भ गर्नुहोस्। + - कसरी KEs का साथ ओपनईबीएस ले काम गर्दछ भनेर जाँच गर्नका लागि यस [कागजात](https://openebs.io/docs) मा सन्दर्भ गर्नुहोस्। - कुबर्नेट्स ओपनईबीएस प्रोविजरमा योगदान गर्नका लागि, कृपया यो [कागजात](./contribute/CONTRIBUTING-TO-KUBERNETES-OPENEBS-PROVISIONER.md) मा सन्दर्भ गर्नुहोस्। कोड संरचना र यसमा पछ्याउन दिशानिर्देशहरूमा थप जानकारीको लागि यस [कागजात](./contribute/design/code-structuring.md) मा सन्दर्भ गर्नुहोस्। diff --git a/translations/CONTRIBUTING.pl.md b/translations/CONTRIBUTING.pl.md index 5f71fac7f8..54e5f67a59 100644 --- a/translations/CONTRIBUTING.pl.md +++ b/translations/CONTRIBUTING.pl.md @@ -8,10 +8,14 @@ Jednak dla tych osób, które chcą nieco więcej wskazówek na temat najlepszeg To powiedziawszy, OpenEBS jest innowacją w Open Source. Możesz wnieść swój wkład w każdy możliwy sposób, a wszelka udzielona pomoc jest bardzo cenna. -- [Zgłaszaj problemy, aby poprosić o nową funkcjonalność, naprawić dokumentację lub zgłosić błędy.](#zgłaszanie-problemów) -- [Prześlij zmiany, aby ulepszyć dokumentację.](#prześlij-zmianę-by-ulepszyć-dokumentację) -- [Prześlij propozycje nowych funkcji / ulepszeń.](#prześlij-propozycje-nowych-funkcji) -- [Rozwiąż istniejące problemy związane z dokumentacją lub kodem.](#współtworzenie-kodu-źródłowego-i-naprawianie-błędów) +- [Wkład w OpenEBS](#wkład-w-openebs) + - [Zgłaszanie problemów](#zgłaszanie-problemów) + - [Prześlij zmianę by ulepszyć dokumentację](#prześlij-zmianę-by-ulepszyć-dokumentację) + - [Prześlij propozycje nowych funkcji](#prześlij-propozycje-nowych-funkcji) + - [Współtworzenie kodu źródłowego i naprawianie błędów](#współtworzenie-kodu-źródłowego-i-naprawianie-błędów) + - [Rozwiąż istniejące problemy](#rozwiąż-istniejące-problemy) + - [Podpisz swoją pracę](#podpisz-swoją-pracę) + - [Dołącz do naszej społeczności](#dołącz-do-naszej-społeczności) Jest kilka prostych wskazówek, które musisz przestrzegać przed udostępnieniem wkładu. @@ -39,7 +43,7 @@ Zawsze są pewne rzeczy, które są potrzebne w oprogramowaniu, które mogłyby Dostarcz PR z odpowiednimi tagami do poprawek błędów lub ulepszeń w kodzie źródłowym. Aby zapoznać się z listą tagów, których można użyć, zobacz [ten dokument](./contrib/labels-of-issues.md). * Aby wnieść wkład w demo K8s, zapoznaj się z tym [dokumentem](./contrib/CONTRIBUTING-TO-K8S-DEMO.md). - - Aby sprawdzić, jak OpenEBS współpracuje z K8s, zapoznaj się z tym [dokumentem](./k8s/README.md) + - Aby sprawdzić, jak OpenEBS współpracuje z K8s, zapoznaj się z tym [dokumentem](https://openebs.io/docs) - Aby wnieść wkład w Kubernetes OpenEBS Provisioner, zapoznaj się z tym [dokumentem](./contrib/CONTRIBUTING-TO-KUBERNETES-OPENEBS-PROVISIONER.md). Zapoznaj się z tym [dokumentem](./contrib/design/code-structuring.md), aby uzyskać więcej informacji na temat struktury kodu i wskazówek, jak postępować. 
diff --git a/translations/CONTRIBUTING.pt-BR.md b/translations/CONTRIBUTING.pt-BR.md index 85124b2b81..a6d68423a3 100644 --- a/translations/CONTRIBUTING.pt-BR.md +++ b/translations/CONTRIBUTING.pt-BR.md @@ -8,10 +8,14 @@ Contudo, para os indivíduos que querem um pouco mais de orientação nas melhor Dito isso, OpenEBS é uma inovação em Open Source. Você tem boas vindas para contribuir em qualquer maineira que possa e toda a ajuda fornecida é muito apreciada. -- [Levante um problema para solicitar novas funcionalidades, corrigir documentação ou reportar bugs.](#levantando-problemas) -- [Envie alterações para aprimorar a documentação.](#envie-alterações-para-aprimorar-a-documentação) -- [Envie propostas para novas funcionalidades/melhorias.](#envie-propostas-para-novas-funcionalidades) -- [Corrija problemas existentes relacionados à documentação ou código.](#contribua-ao-código-fonte-e-correção-de-bugs) +- [Contribuindo à OpenEBS](#contribuindo-à-openebs) + - [Levantando Problemas](#levantando-problemas) + - [Envie alterações para aprimorar a documentação](#envie-alterações-para-aprimorar-a-documentação) + - [Envie propostas para novas funcionalidades](#envie-propostas-para-novas-funcionalidades) + - [Contribua ao código fonte e correção de bugs](#contribua-ao-código-fonte-e-correção-de-bugs) + - [Corrija problemas existentes](#corrija-problemas-existentes) + - [Assine seu trabalho](#assine-seu-trabalho) + - [Entre na nossa comunidade](#entre-na-nossa-comunidade) Tem algumas diretrizes simples que você deve seguir antes de fornecer seus hacks. @@ -39,7 +43,7 @@ Sempre existe algo mais que é requerido, para tornar mais fácil e encaixar com Forneça Pull Requests com tags apropriadas para correções de bugs ou melhorias ao código fonte. Para uma lista de tags que podem ser utilizadas, veja [isto](/contribute/labels-of-issues.md). * Para contribuir com demonstrações K8s, por favor consulte este [documento](/contribute/CONTRIBUTING-TO-K8S-DEMO.md). - - Para verificar como OpenEBS funciona com K8s, consulte este [documento](/k8s/README.md) + - Para verificar como OpenEBS funciona com K8s, consulte este [documento](https://openebs.io/docs) - Para contribuir ao Provisioner Kubernetes OpenEBS, por favor consulte este [documento](/contribute/CONTRIBUTING-TO-KUBERNETES-OPENEBS-PROVISIONER.md). Consulte este [documento](/contribute/design/code-structuring.md) para mais informações sobre estruturação de código e guias para serem seguidos. diff --git a/translations/CONTRIBUTING.pu.MD b/translations/CONTRIBUTING.pu.MD index 6600848838..5d662444e8 100644 --- a/translations/CONTRIBUTING.pu.MD +++ b/translations/CONTRIBUTING.pu.MD @@ -38,7 +38,7 @@ ਸਰੋਤ ਕੋਡ ਵਿੱਚ ਬੱਗ ਫਿਕਸ ਜਾਂ ਸੁਧਾਰ ਲਈੁਕਵੇਂ ਟੈਗਾਂ ਨਾਲ PR ਪ੍ਰਦਾਨ ਕਰੋ. ਉਹਨਾਂ ਟੈਗਾਂ ਦੀ ਸੂਚੀ ਲਈ ਜਿਹੜੀਆਂ ਵਰਤੀਆਂ ਜਾ ਸਕਦੀਆਂ ਹਨ, [ਵੇਖੋ] (./contribute/labels-of-issues.md) ਵੇਖੋ. * ਕੇ 8 ਐਸ ਡੈਮੋ ਵਿੱਚ ਯੋਗਦਾਨ ਪਾਉਣ ਲਈ, ਕਿਰਪਾ ਕਰਕੇ ਇਸ [ਦਸਤਾਵੇਜ਼](./contribute/CONTRIBUTING-TO-K8S-DEMO.md) ਦਾ ਹਵਾਲਾ ਲਓ . - - ਇਹ ਵੇਖਣ ਲਈ ਕਿ ਓਪਨਈਬੀਐਸ ਕੇ 8 ਐਸ ਨਾਲ ਕਿਵੇਂ ਕੰਮ ਕਰਦਾ ਹੈ, ਇਸ [ਦਸਤਾਵੇਜ਼](./k8s/README.md) ਨੂੰ ਵੇਖੋ. + - ਇਹ ਵੇਖਣ ਲਈ ਕਿ ਓਪਨਈਬੀਐਸ ਕੇ 8 ਐਸ ਨਾਲ ਕਿਵੇਂ ਕੰਮ ਕਰਦਾ ਹੈ, ਇਸ [ਦਸਤਾਵੇਜ਼](https://openebs.io/docs) ਨੂੰ ਵੇਖੋ. - ਕੁਬਰਨੇਟਸ ਓਪਨਈਬੀਐਸ ਪ੍ਰੋਵਿਜ਼ਨਰ ਵਿੱਚ ਯੋਗਦਾਨ ਪਾਉਣ ਲਈ, ਕਿਰਪਾ ਕਰਕੇ ਇਸ [ਦਸਤਾਵੇਜ਼](./contribute/CONTRIBUTING-TO-KUBERNETES-OPENEBS-PROVISIONER.md). ਨੂੰ ਵੇਖੋ. ਇਸ [ਦਸਤਾਵੇਜ਼](./contribute/design/code-structuring.md) ਨੂੰ ਵੇਖੋ ਕੋਡ ਦੇਚੇ ਬਾਰੇ ਵਧੇਰੇ ਜਾਣਕਾਰੀ ਲਈ ਅਤੇ ਉਸੇ ਦੀ ਪਾਲਣਾ ਕਰਨ ਲਈ ਦਿਸ਼ਾ ਨਿਰਦੇਸ਼. 
diff --git a/translations/CONTRIBUTING.ru.md b/translations/CONTRIBUTING.ru.md index 08978a796f..1fb7b2b158 100644 --- a/translations/CONTRIBUTING.ru.md +++ b/translations/CONTRIBUTING.ru.md @@ -8,10 +8,14 @@ Тем не менее, OpenEBS - это инновация в Open Source. Вы можете внести свой вклад любым возможным способом, и вся помощь, оказанная нам, очень ценится. -- [Открытие issue по запросу нового функционала, фикса документации или багов](#открытие-issue) -- [Предложение улучшений документации](#улучшение-документации) -- [Предложение нового функционала](#предложения-нового-функционала) -- [Решение сушествуюших проблем в документации или коде](#контрибьюшен-в-исходный-код-и-фикс-багов) +- [Как внести свой вклад в OpenEBS](#как-внести-свой-вклад-в-openebs) + - [Открытие issue](#открытие-issue) + - [Улучшение документации](#улучшение-документации) + - [Предложения нового функционала](#предложения-нового-функционала) + - [Контрибьюшен в исходный код и фикс багов](#контрибьюшен-в-исходный-код-и-фикс-багов) + - [Решение существующих задач](#решение-существующих-задач) + - [Подпишите свою работу](#подпишите-свою-работу) + - [Присоединяйтесь к коммьюнити!](#присоединяйтесь-к-коммьюнити) Вот несколько простых руководств, которым нужно следовать перед тем, как отправить свой код. @@ -39,7 +43,7 @@ Предоставляйте пулл реквесты с подходящими тегами для баг фиксов или улучшений исходного кода. Список тегов которые могут быть использованы есть [тут](./contribute/labels-of-issues.md). * Для контрибьюта в K8s demo, просмотрите этот [документ](./contribute/CONTRIBUTING-TO-K8S-DEMO.md). - - Для информации как OpenEBS работает с K8s, просмотрите этот [документ](./k8s/README.md) + - Для информации как OpenEBS работает с K8s, просмотрите этот [документ](https://openebs.io/docs) - Для контрибьюта в Kubernetes OpenEBS Provisioner, просмотрите этот [документ](./contribute/CONTRIBUTING-TO-KUBERNETES-OPENEBS-PROVISIONER.md). Для информации о структуре кода и руководствах которым нужно следовать, просмотрите [этот документ](./contribute/design/code-structuring.md). diff --git a/translations/CONTRIBUTING.tr.md b/translations/CONTRIBUTING.tr.md index 78a3102116..8cecedd89d 100644 --- a/translations/CONTRIBUTING.tr.md +++ b/translations/CONTRIBUTING.tr.md @@ -8,10 +8,14 @@ Ancak, projeye katkıda bulunmanın en iyi yolu hakkında biraz daha fazla rehbe Bununla birlikte, OpenEBS Açık Kaynakta bir yazılımdır. Herhangi bir şekilde katkıda bulunabilirsiniz ve sağlanan tüm yardımlar takdir edilmektedir. 
-- [Yeni işlevsellik istemek, belgeleri düzeltmek veya hataları bildirmek için sorunları yükseltin.](#sorunları-yükseltme) -- [Belgeleri iyileştirmek için değişiklikleri gönderin.](#belgeleri-geliştirmek-için-değişiklik-gönder) -- [Yeni özellikler / geliştirmeler için teklifler gönderin.](#yeni-özellikler-için-öneriler-gönderin) -- [Belgeleme veya kod ile ilgili mevcut sorunları çözün.](#kaynak-kod-ve-hata-düzeltmelerine-katkı-sağlamak) +- [OpenEBS'ye katkıda bulunmak](#openebsye-katkıda-bulunmak) +- [Sorunları Yükseltme](#sorunları-yükseltme) +- [Belgeleri Geliştirmek İçin Değişiklik Gönder](#belgeleri-geliştirmek-i̇çin-değişiklik-gönder) +- [Yeni Özellikler için Öneriler Gönderin](#yeni-özellikler-için-öneriler-gönderin) +- [Kaynak Kod ve Hata Düzeltmelerine Katkı Sağlamak](#kaynak-kod-ve-hata-düzeltmelerine-katkı-sağlamak) +- [Mevcut Sorunları Çözme](#mevcut-sorunları-çözme) + - [İşini imzala](#i̇şini-imzala) +- [Topluluğumuza Katılın](#topluluğumuza-katılın) Hack'lerinizi vermeden önce izlemeniz gereken birkaç basit yönerge vardır. @@ -39,7 +43,7 @@ Kullanım durumlarınıza uymayı kolaylaştırmak için her zaman gereken daha PR'leri hata düzeltmeleri veya kaynak kodundaki geliştirmeler için uygun etiketlerle sağlayın. Kullanılabilecek etiketlerin listesi için, [bkz.](../contribute/labels-of-issues.md). * K8s demosuna katkıda bulunmak için lütfen bu [belge] bölümüne bakınız.(../contribute/CONTRIBUTING-TO-K8S-DEMO.md). - - OpenEBS'in K8'lerle nasıl çalıştığını kontrol etmek için, bu belge bölümüne [bakın.](../k8s/README.md) + - OpenEBS'in K8'lerle nasıl çalıştığını kontrol etmek için, bu belge bölümüne [bakın.](https://openebs.io/docs) - Kubernetes OpenEBS Provisioner'a katkıda bulunmak için lütfen bu dokümana [bakınız.](../contribute/CONTRIBUTING-TO-KUBERNETES-OPENEBS-PROVISIONER.md). Kod yapılandırması ve takip edilecek yönergeler hakkında daha fazla bilgi için bu belge bölümüne [bakın.](../contribute/design/code-structuring.md) diff --git a/translations/CONTRIBUTING.ua.md b/translations/CONTRIBUTING.ua.md index 9801b404a9..2e47282f40 100644 --- a/translations/CONTRIBUTING.ua.md +++ b/translations/CONTRIBUTING.ua.md @@ -8,10 +8,14 @@ Тем не менш, OpenEBS - це інновація у Open Source. Ви можете внести свій внесок у будь-який можливий спосіб, та вся допомога, оказана нам, дуже цінується. -- [Відкиття issue на запит нового функціоналу, фікса документації або багів](#відкриття-issue) -- [Пропозиції щодо покращення документації](#покращення-документації) -- [Пропозиції щодо нового функціоналу](#пропонування-нового-функціоналу) -- [Вирішення існуючих проблем у документації або коді](#контриб'ют-у-початковий-код-й-фікс-багів) +- [Як зробити свій внесок у OpenEBS](#як-зробити-свій-внесок-у-openebs) + - [Відкриття issue](#відкриття-issue) + - [Покращення документації](#покращення-документації) + - [Пропонування нового функціоналу](#пропонування-нового-функціоналу) + - [Контриб'ют у початковий код й фікс багів](#контрибют-у-початковий-код-й-фікс-багів) + - [Робота над існуючими задачами](#робота-над-існуючими-задачами) + - [Підпишить свою роботу!](#підпишить-свою-роботу) + - [Приєднуйтесь до ком'юніті!](#приєднуйтесь-до-комюніті) Ось декілька простих пунктів, яким потрібно прямувати перед тим,як відправляти свій код. @@ -39,7 +43,7 @@ Створюйте рулл реквести з відповідними тегами для фіксів багів або покращень початкового коду. Лист тегів які можуть бути використані [тут](./contribute/labels-of-issues.md). 
* Для контриб'юту у K8s demo, дивиться цей [документ](./contribute/CONTRIBUTING-TO-K8S-DEMO.md). - - Для інформації як OpenEBS працює з K8s, дивиться цей [документ](./k8s/README.md) + - Для інформації як OpenEBS працює з K8s, дивиться цей [документ](https://openebs.io/docs) - Для контриб'юту у Kubernetes OpenEBS Provisioner, дивиться цей [документ](./contribute/CONTRIBUTING-TO-KUBERNETES-OPENEBS-PROVISIONER.md). Для інформації о структурі коду й інструкціях, які потрібно виконувати, дивиться [цей документ](./contribute/design/code-structuring.md). diff --git a/translations/CONTRIBUTING.zh.md b/translations/CONTRIBUTING.zh.md index 2ea7206371..d6a5f279d9 100644 --- a/translations/CONTRIBUTING.zh.md +++ b/translations/CONTRIBUTING.zh.md @@ -39,7 +39,7 @@ 请在问题修复或代码改善的 PR 上添加合适的标签。可用的标签列表请参见[这里](./../contribute/labels-of-issues.md)。 * 关于贡献 K8s demo,请参考这个[文档](./../contribute/CONTRIBUTING-TO-K8S-DEMO.md)。 - - 要了解 OpenEBS 如何与 K8s 结合,请参考这个[文档](./../k8s/README.md) + - 要了解 OpenEBS 如何与 K8s 结合,请参考这个[文档](https://openebs.io/docs) - 关于参与贡献 Kubernetes OpenEBS Provisioner,请参考这个[文档](./../contribute/CONTRIBUTING-TO-KUBERNETES-OPENEBS-PROVISIONER.md)。 关于代码结构和指南的更多信息,请参考这个 [文档](./../contribute/design/code-structuring.md)