This repository has been archived by the owner on Feb 22, 2022. It is now read-only.

[incubator/elasticsearch] Promote elasticsearch to stable #7180

Closed
13 changes: 4 additions & 9 deletions incubator/elasticsearch/Chart.yaml
@@ -1,8 +1,10 @@
+# elasticsearch has been promoted to stable
+deprecated: true
 name: elasticsearch
 home: https://www.elastic.co/products/elasticsearch
-version: 1.4.1
+version: 1.4.2
 appVersion: 6.3.1
-description: Flexible and powerful open source, distributed real-time search and analytics
+description: DEPRECATED Flexible and powerful open source, distributed real-time search and analytics
   engine.
 icon: https://static-www.elastic.co/assets/blteb1c97719574938d/logo-elastic-elasticsearch-lt.svg
 sources:
@@ -12,10 +14,3 @@ sources:
 - https://github.com/GoogleCloudPlatform/elasticsearch-docker
 - https://github.com/clockworksoul/helm-elasticsearch
 - https://github.com/pires/kubernetes-elasticsearch-cluster
-maintainers:
-- name: simonswine
-  email: christian@jetstack.io
-- name: icereval
-  email: michael.haselton@gmail.com
-- name: rendhalver
-  email: pete.brown@powerhrg.com
2 changes: 2 additions & 0 deletions incubator/elasticsearch/README.md
@@ -3,6 +3,8 @@
 This chart uses a standard Docker image of Elasticsearch (docker.elastic.co/elasticsearch/elasticsearch-oss) and uses a service pointing to the master's transport port for service discovery.
 Elasticsearch does not communicate with the Kubernetes API, hence no need for RBAC permissions.
 
+**Note - this chart has been deprecated and [moved to stable](../../stable/elasticsearch)**.
+
 ## Warning for previous users
 If you are currently using an earlier version of this Chart you will need to redeploy your Elasticsearch clusters. The discovery method used here is incompatible with using RBAC.
 If you are upgrading to Elasticsearch 6 from the 5.5 version used in this chart before, please note that your cluster needs to do a full cluster restart.
5 changes: 5 additions & 0 deletions incubator/elasticsearch/templates/NOTES.txt
@@ -1,5 +1,10 @@
 The elasticsearch cluster has been installed.
 
+***
+Please note that this chart has been deprecated and moved to stable.
+Going forward please use the stable version of this chart.
+***
+
 Elasticsearch can be accessed:
 
 * Within your cluster, at the following DNS name at port 9200:
3 changes: 3 additions & 0 deletions stable/elasticsearch/.helmignore
@@ -0,0 +1,3 @@
.git
# OWNERS file for Kubernetes
OWNERS
21 changes: 21 additions & 0 deletions stable/elasticsearch/Chart.yaml
@@ -0,0 +1,21 @@
name: elasticsearch
home: https://www.elastic.co/products/elasticsearch
version: 1.5.0
appVersion: 6.3.1
description: Flexible and powerful open source, distributed real-time search and analytics
  engine.
icon: https://static-www.elastic.co/assets/blteb1c97719574938d/logo-elastic-elasticsearch-lt.svg
sources:
- https://www.elastic.co/products/elasticsearch
- https://github.com/jetstack/elasticsearch-pet
- https://github.com/giantswarm/kubernetes-elastic-stack
- https://github.com/GoogleCloudPlatform/elasticsearch-docker
- https://github.com/clockworksoul/helm-elasticsearch
- https://github.com/pires/kubernetes-elasticsearch-cluster
maintainers:
- name: simonswine
  email: christian@jetstack.io
- name: icereval
  email: michael.haselton@gmail.com
- name: rendhalver
  email: pete.brown@powerhrg.com
6 changes: 6 additions & 0 deletions stable/elasticsearch/OWNERS
@@ -0,0 +1,6 @@
approvers:
- simonswine
- icereval
reviewers:
- simonswine
- icereval
187 changes: 187 additions & 0 deletions stable/elasticsearch/README.md
@@ -0,0 +1,187 @@
# Elasticsearch Helm Chart

This chart uses a standard Docker image of Elasticsearch (docker.elastic.co/elasticsearch/elasticsearch-oss) and uses a service pointing to the master's transport port for service discovery.
Elasticsearch does not communicate with the Kubernetes API, so no RBAC permissions are needed.
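
For reference, a Service of the kind described above might look roughly like the sketch below. This is a hand-written approximation, not the chart's actual template; the name and label values are illustrative:

```yaml
# Sketch of a discovery Service targeting the master nodes' transport port.
# Name and selector labels are hypothetical; the chart templates its own.
apiVersion: v1
kind: Service
metadata:
  name: my-release-elasticsearch-discovery
spec:
  clusterIP: None            # headless: DNS resolves directly to master pod IPs
  selector:
    app: elasticsearch
    component: master
    release: my-release
  ports:
    - name: transport
      port: 9300             # Elasticsearch transport port
```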

## Warning for previous users
If you are currently using an earlier version of this chart, you will need to redeploy your Elasticsearch clusters: the discovery method used here is incompatible with RBAC.
If you are upgrading to Elasticsearch 6 from the 5.5 version previously used in this chart, please note that your cluster needs to do a full cluster restart.
The simplest way to do that is to delete the installation (keeping the PVs) and install this chart again with the new version.
If you want to avoid a full cluster restart, upgrade to Elasticsearch 5.6 first before moving on to Elasticsearch 6.0.

## Prerequisites Details

* Kubernetes 1.6+
* PV dynamic provisioning support on the underlying infrastructure

## StatefulSets Details
* https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/

## StatefulSets Caveats
* https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#limitations

## Todo

* Implement TLS/Auth/Security
* Smarter upscaling/downscaling
* Solution for memory locking

## Chart Details
This chart will do the following:

* Implement a dynamically scalable Elasticsearch cluster using Kubernetes StatefulSets/Deployments
* Multi-role deployment: master, client (coordinating) and data nodes
* StatefulSet support for scaling down without degrading the cluster
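
As a quick illustration of the multi-role layout above, a values override scaling each role independently might look like this (the replica counts are examples only):

```yaml
client:
  replicas: 2   # coordinating-only nodes behind the client Service
master:
  replicas: 3   # master-eligible nodes; an odd number keeps the quorum clean
data:
  replicas: 2   # data nodes, backed by the StatefulSet's PVCs
```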

## Installing the Chart

To install the chart with the release name `my-release`:

```bash
$ helm install --name my-release stable/elasticsearch
```

## Deleting the Chart

Delete the Helm deployment as normal:

```bash
$ helm delete my-release
```

Deletion of the StatefulSet doesn't cascade to deleting associated PVCs. To delete them:

```bash
$ kubectl delete pvc -l release=my-release,component=data
```

## Configuration

The following table lists the configurable parameters of the elasticsearch chart and their default values.

| Parameter | Description | Default |
| ------------------------------------ | ------------------------------------------------------------------- | ------------------------------------ |
| `appVersion` | Application Version (Elasticsearch) | `6.3.1` |
| `image.repository` | Container image name | `docker.elastic.co/elasticsearch/elasticsearch-oss` |
| `image.tag` | Container image tag | `6.3.1` |
| `image.pullPolicy` | Container pull policy | `Always` |
| `cluster.name` | Cluster name | `elasticsearch` |
| `cluster.kubernetesDomain` | Kubernetes cluster domain name | `cluster.local` |
| `cluster.xpackEnable` | Writes the X-Pack configuration options to the configuration file | `false` |
| `cluster.config` | Additional cluster config appended | `{}` |
| `cluster.env` | Cluster environment variables | `{}` |
| `client.name` | Client component name | `client` |
| `client.replicas` | Client node replicas (deployment) | `2` |
| `client.resources`                   | Client node resources requests & limits                              | `{}` (CPU limit must be an integer)   |
| `client.priorityClassName` | Client priorityClass | `nil` |
| `client.heapSize` | Client node heap size | `512m` |
| `client.podAnnotations` | Client Deployment annotations | `{}` |
| `client.nodeSelector` | Node labels for client pod assignment | `{}` |
| `client.tolerations` | Client tolerations | `{}` |
| `client.serviceAnnotations` | Client Service annotations | `{}` |
| `client.serviceType` | Client service type | `ClusterIP` |
| `master.exposeHttp` | Expose http port 9200 on master Pods for monitoring, etc | `false` |
| `master.name` | Master component name | `master` |
| `master.replicas` | Master node replicas (deployment) | `2` |
| `master.resources`                   | Master node resources requests & limits                              | `{}` (CPU limit must be an integer)   |
| `master.priorityClassName` | Master priorityClass | `nil` |
| `master.podAnnotations` | Master Deployment annotations | `{}` |
| `master.nodeSelector` | Node labels for master pod assignment | `{}` |
| `master.tolerations` | Master tolerations | `{}` |
| `master.heapSize` | Master node heap size | `512m` |
| `master.persistence.enabled`         | Master persistence enabled/disabled                                  | `true`                                |
| `master.persistence.name` | Master statefulset PVC template name | `data` |
| `master.persistence.size` | Master persistent volume size | `4Gi` |
| `master.persistence.storageClass` | Master persistent volume Class | `nil` |
| `master.persistence.accessMode` | Master persistent Access Mode | `ReadWriteOnce` |
| `data.exposeHttp` | Expose http port 9200 on data Pods for monitoring, etc | `false` |
| `data.replicas` | Data node replicas (statefulset) | `2` |
| `data.resources`                     | Data node resources requests & limits                                | `{}` (CPU limit must be an integer)   |
| `data.priorityClassName` | Data priorityClass | `nil` |
| `data.heapSize` | Data node heap size | `1536m` |
| `data.persistence.enabled`           | Data persistence enabled/disabled                                    | `true`                                |
| `data.persistence.name` | Data statefulset PVC template name | `data` |
| `data.persistence.size` | Data persistent volume size | `30Gi` |
| `data.persistence.storageClass` | Data persistent volume Class | `nil` |
| `data.persistence.accessMode` | Data persistent Access Mode | `ReadWriteOnce` |
| `data.podAnnotations` | Data StatefulSet annotations | `{}` |
| `data.nodeSelector` | Node labels for data pod assignment | `{}` |
| `data.tolerations` | Data tolerations | `{}` |
| `data.terminationGracePeriodSeconds` | Data termination grace period (seconds) | `3600` |
| `data.antiAffinity` | Data anti-affinity policy | `soft` |

Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.

In terms of memory resources, you should make sure that you follow this inequality for each role:

- `${role}HeapSize < ${role}MemoryRequests < ${role}MemoryLimits`
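
For example, a values override for the data role that satisfies this inequality might look like the following sketch (the sizes are illustrative only):

```yaml
data:
  heapSize: "1536m"      # JVM heap: the smallest of the three
  resources:
    requests:
      memory: "2Gi"      # heapSize (1536Mi) < requests (2048Mi)
    limits:
      memory: "2560Mi"   # requests (2048Mi) < limits (2560Mi)
```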

The YAML value of `cluster.config` is appended to the `elasticsearch.yml` file for additional customization (for example, `script.inline: on` to allow inline scripting).
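
A minimal values sketch for such an override, reusing the inline-scripting example above, could be:

```yaml
cluster:
  config:
    # appended verbatim to elasticsearch.yml
    script.inline: on
```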

# Deep dive

## Application Version

This chart aims to support Elasticsearch v2 and v5 deployments by specifying the `values.yaml` parameter `appVersion`.
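
A hypothetical values override pinning a 5.x deployment might look like this; the version and image tag are assumptions and must actually exist for the configured repository:

```yaml
appVersion: "5.6.10"   # hypothetical 5.x target version
image:
  tag: "5.6.10"        # assumed to match appVersion
```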

### Version Specific Features

* Memory Locking *(variable renamed)*
* Ingest Node *(v5)*
* X-Pack Plugin *(v5)*

Upgrade paths & more info: https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html

## Mlocking

This is a limitation in Kubernetes right now: there is no way to raise the
limit on lockable memory, so the JVM cannot lock its memory to keep it from
being swapped out, which would degrade performance heavily. The issue is
tracked in [kubernetes/#3595](https://github.com/kubernetes/kubernetes/issues/3595).

```
[WARN ][bootstrap] Unable to lock JVM Memory: error=12,reason=Cannot allocate memory
[WARN ][bootstrap] This can result in part of the JVM being swapped out.
[WARN ][bootstrap] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
```
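
If you want to experiment regardless, the corresponding Elasticsearch setting could in principle be passed through `cluster.config`; this is only a sketch, and the bootstrap warning above can be expected to persist until the memlock ulimit can be raised:

```yaml
cluster:
  config:
    bootstrap.memory_lock: true   # won't take effect while RLIMIT_MEMLOCK stays at 65536
```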

## Minimum Master Nodes
> The minimum_master_nodes setting is extremely important to the stability of your cluster. This setting helps prevent split brains, the existence of two masters in a single cluster.

> When you have a split brain, your cluster is at danger of losing data. Because the master is considered the supreme ruler of the cluster, it decides when new indices can be created, how shards are moved, and so forth. If you have two masters, data integrity becomes perilous, since you have two nodes that think they are in charge.

> This setting tells Elasticsearch to not elect a master unless there are enough master-eligible nodes available. Only then will an election take place.

> This setting should always be configured to a quorum (majority) of your master-eligible nodes. A quorum is (number of master-eligible nodes / 2) + 1

More info: https://www.elastic.co/guide/en/elasticsearch/guide/1.x/_important_configuration_changes.html#_minimum_master_nodes
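
For example, with three master-eligible nodes the quorum is `(3 / 2) + 1 = 2`. Assuming you pass the setting through `cluster.config` (a sketch, not the chart's built-in behaviour):

```yaml
master:
  replicas: 3
cluster:
  config:
    discovery.zen.minimum_master_nodes: 2   # quorum of 3 master-eligible nodes
```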

## Client and Coordinating Nodes

Elasticsearch v5 terminology has been updated: what was previously called a `Client Node` is now referred to as a `Coordinating Node`.

More info: https://www.elastic.co/guide/en/elasticsearch/reference/5.5/modules-node.html#coordinating-node

## Select the right storage class for SSD volumes

### GCE + Kubernetes 1.5

Create a StorageClass for SSD persistent disks:

```bash
$ kubectl create -f - <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
EOF
```
Create the cluster with storage class `ssd` on Kubernetes 1.5+ (note that the parameters are `data.persistence.storageClass` and `data.persistence.size`, per the configuration table above):

```bash
$ helm install stable/elasticsearch --name my-release --set data.persistence.storageClass=ssd,data.persistence.size=100Gi
```
31 changes: 31 additions & 0 deletions stable/elasticsearch/templates/NOTES.txt
@@ -0,0 +1,31 @@
The elasticsearch cluster has been installed.

Elasticsearch can be accessed:

* Within your cluster, at the following DNS name at port 9200:

{{ template "elasticsearch.client.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local

* From outside the cluster, run these commands in the same shell:
{{- if contains "NodePort" .Values.client.serviceType }}

export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "elasticsearch.client.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.client.serviceType }}

WARNING: You have likely exposed your Elasticsearch cluster directly to the internet.
Elasticsearch does not implement any security for public-facing clusters by default.
As a minimum level of security, switch to ClusterIP/NodePort and place an Nginx gateway in front of the cluster in order to lock down access to dangerous HTTP endpoints and verbs.

NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch its status by running 'kubectl get svc -w {{ template "elasticsearch.client.fullname" . }}'

export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "elasticsearch.client.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:9200
{{- else if contains "ClusterIP" .Values.client.serviceType }}

export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "elasticsearch.name" . }},component={{ .Values.client.name }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:9200 to use Elasticsearch"
kubectl port-forward --namespace {{ .Release.Namespace }} $POD_NAME 9200:9200
{{- end }}
48 changes: 48 additions & 0 deletions stable/elasticsearch/templates/_helpers.tpl
@@ -0,0 +1,48 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "elasticsearch.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "elasticsearch.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}

{{/*
Create a default fully qualified client name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "elasticsearch.client.fullname" -}}
{{ template "elasticsearch.fullname" . }}-{{ .Values.client.name }}
{{- end -}}

{{/*
Create a default fully qualified data name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "elasticsearch.data.fullname" -}}
{{ template "elasticsearch.fullname" . }}-{{ .Values.data.name }}
{{- end -}}

{{/*
Create a default fully qualified master name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "elasticsearch.master.fullname" -}}
{{ template "elasticsearch.fullname" . }}-{{ .Values.master.name }}
{{- end -}}