elasticsearch-operator


WORK IN PROGRESS

An Elasticsearch operator that runs an Elasticsearch cluster on top of OpenShift and Kubernetes. The operator is built with the Operator Framework SDK.

Why use an operator?

The operator is designed to provide self-service for Elasticsearch cluster operations. See the diagram of the operator maturity model.

  • The Elasticsearch operator ensures a proper layout of the pods
  • The Elasticsearch operator enables safe rolling cluster restarts
  • The Elasticsearch operator provides a kubectl interface to manage your Elasticsearch cluster
  • The Elasticsearch operator provides a kubectl interface to monitor your Elasticsearch cluster

Getting started

Prerequisites

  • A cluster administrator must set the vm.max_map_count sysctl to 262144 on the host level of each node in the cluster before running the operator.
  • If a host-mounted volume is used, the directory on the host must have 777 permissions and the following SELinux labels (TODO).
  • If a secure cluster is used, the certificates must be pre-generated and uploaded to the secret <elasticsearch_cluster_name>-certs.
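The sysctl prerequisite can be checked on each host with a short script like the following (a sketch; it reads the Linux procfs directly, so adjust for your distribution):

```shell
# Check vm.max_map_count on this host; Elasticsearch requires at least 262144.
current=$(cat /proc/sys/vm/max_map_count)
if [ "${current}" -lt 262144 ]; then
  echo "vm.max_map_count=${current} is too low; run: sudo sysctl -w vm.max_map_count=262144"
else
  echo "vm.max_map_count=${current} is sufficient"
fi
```

On a multi-node cluster this must hold on every node, typically enforced via a tuned profile or a privileged DaemonSet rather than by hand.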

Kubernetes

Make sure the certificates are pre-generated and deployed as a secret. Upload the Custom Resource Definition to your Kubernetes cluster:

$ kubectl create -f deploy/crd.yaml

Deploy the required roles to the cluster:

$ kubectl create -f deploy/rbac.yaml

Deploy the custom resource and the Deployment resource of the operator:

$ kubectl create -f deploy/cr.yaml
$ kubectl create -f deploy/operator.yaml
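To confirm the rollout, a small helper like the one below can be used. This is a sketch: it assumes kubectl is configured for the target cluster, and the deployment name is an illustrative guess that should be checked against deploy/operator.yaml.

```shell
# Illustrative helper -- the deployment name is an assumption; verify it
# against deploy/operator.yaml before use.
verify_operator() {
  # Wait for the operator Deployment to finish rolling out.
  kubectl rollout status deployment/elasticsearch-operator --timeout=120s
  # The custom resource created from deploy/cr.yaml should be listed.
  kubectl get elasticsearch
}
```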

OpenShift

As a cluster admin, apply the template with the roles and permissions:

$ oc process -f deploy/openshift/admin-elasticsearch-template.yaml | oc apply -f -

The template deploys the CRD, roles, and rolebindings. You can pass the following variables:

  • NAMESPACE to specify which namespace's default ServiceAccount will be allowed to manage the Custom Resource.
  • ELASTICSEARCH_ADMIN_USER to specify which user of OpenShift will be allowed to manage the Custom Resource.

For example:

$ oc process NAMESPACE=myproject ELASTICSEARCH_ADMIN_USER=developer -f deploy/openshift/admin-elasticsearch-template.yaml | oc apply -f -

To grant permissions to additional users later, give them the elasticsearch-operator role.

As the user specified as ELASTICSEARCH_ADMIN_USER in the previous step:

Make sure the secret with the Elasticsearch certificates exists and is named <elasticsearch_cluster_name>-certs.
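One way to package pre-generated certificates into that secret is sketched below; the directory layout is purely illustrative, so use whatever your certificate tooling actually produced:

```shell
# Illustrative helper: upload pre-generated certificate files as the secret
# the operator expects. The directory layout is an assumption, not a contract.
create_es_certs_secret() {
  cluster_name="$1"   # e.g. elastic1
  cert_dir="$2"       # directory containing the pre-generated certificate files
  oc create secret generic "${cluster_name}-certs" --from-file="${cert_dir}"
}
```

For a cluster named elastic1 this creates the secret elastic1-certs, matching the <elasticsearch_cluster_name>-certs convention above.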

Then process the following template:

$ oc process -f deploy/openshift/elasticsearch-template.yaml | oc apply -f -

The template deploys the Custom Resource and the operator deployment. You can pass the following variables to the template:

  • NAMESPACE - the namespace where the Elasticsearch cluster will be deployed. Must be the same namespace the admin specified.
  • ELASTICSEARCH_CLUSTER_NAME - the name of the Elasticsearch cluster to be deployed.

For example:

$ oc process NAMESPACE=myproject ELASTICSEARCH_CLUSTER_NAME=elastic1 -f deploy/openshift/elasticsearch-template.yaml | oc apply -f -

Customize your cluster

Image customization

The operator is designed to work with the openshift/origin-aggregated-logging image.

Storage configuration

Storage is configurable per individual node type. Possible configuration options:

  • Hostmounted directory
  • Empty directory
  • Existing PersistentVolume
  • New PersistentVolume generated by StorageClass
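As an illustration of per-node-type storage, a custom resource might carry a section shaped like the sketch below. The field names here are hypothetical; the authoritative schema is whatever deploy/crd.yaml defines.

```yaml
# Hypothetical sketch only; consult deploy/crd.yaml for the real field names.
spec:
  nodes:
    - roles: ["master"]
      replicas: 3
      storage:
        storageClassName: fast-ssd   # new PersistentVolume via a StorageClass
    - roles: ["clientdata"]
      replicas: 2
      storage:
        emptyDir: {}                 # ephemeral storage, for testing only
```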

Elasticsearch cluster topology customization

Decide how many nodes you want to run.

Elasticsearch node configuration customization

TODO

Supported features

Kubernetes TBD+ and OpenShift TBD+ are supported.

  • SSL-secured deployment (using Searchguard)
  • Insecure deployment (requires different image)
  • Index per tenant
  • Logging to a file or to console
  • Elasticsearch 6.x support
  • Elasticsearch 5.6.x support
  • Master role
  • Client role
  • Data role
  • Clientdata role
  • Clientdatamaster role
  • Elasticsearch snapshots
  • Prometheus monitoring
  • Status monitoring
  • Rolling restarts

Testing

In a real deployment, OpenShift monitoring will be installed. For testing purposes, however, you should install the monitoring CRDs yourself:

oc create -n openshift-logging -f \
https://raw.githubusercontent.com/coreos/prometheus-operator/master/example/prometheus-operator-crd/prometheusrule.crd.yaml
oc create -n openshift-logging -f \
https://raw.githubusercontent.com/coreos/prometheus-operator/master/example/prometheus-operator-crd/servicemonitor.crd.yaml
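Afterwards, you can check that both CRDs are registered with a helper like this (a sketch; assumes oc is logged in to the cluster):

```shell
# Helper to confirm the monitoring CRDs exist before running the tests.
check_monitoring_crds() {
  oc get crd prometheusrules.monitoring.coreos.com servicemonitors.monitoring.coreos.com
}
```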

E2E Testing

To run the e2e tests, from the repo directory, run:

sudo sysctl -w vm.max_map_count=262144
imagebuilder -t quay.io/openshift/elasticsearch-operator .
operator-sdk test local --namespace openshift-logging ./test/e2e/

Dev Testing

To set up your local environment based on what will be provided by OLM, run:

sudo sysctl -w vm.max_map_count=262144
ELASTICSEARCH_OPERATOR=$GOPATH/src/github.com/openshift/elasticsearch-operator

oc create -n openshift-logging -f $ELASTICSEARCH_OPERATOR/deploy/service_account.yaml
oc create -n openshift-logging -f $ELASTICSEARCH_OPERATOR/deploy/role.yaml
oc create -n openshift-logging -f $ELASTICSEARCH_OPERATOR/deploy/role_binding.yaml
oc create -n openshift-logging -f $ELASTICSEARCH_OPERATOR/deploy/crds/crd.yaml

oc create -n openshift-logging -f $ELASTICSEARCH_OPERATOR/deploy/cr.yaml

To test on an OCP cluster, you can run:

ALERTS_FILE_PATH=files/prometheus_alerts.yml RULES_FILE_PATH=files/prometheus_rules.yml \
OPERATOR_NAME=elasticsearch-operator WATCH_NAMESPACE=openshift-logging \
KUBERNETES_CONFIG=/etc/origin/master/admin.kubeconfig \
go run cmd/elasticsearch-operator/main.go

To remove the created API objects:

oc delete Elasticsearch elasticsearch -n openshift-logging
