Deploy ECK on OpenShift

This page shows how to run ECK on OpenShift.

Warning
Some Docker images are incompatible with the restricted SCC. This is the case for the APM Server before 7.9 and for Enterprise Search 7.9 and 7.10. You can use the workaround described in Deploy Docker images with anyuid SCC to run those images with the anyuid SCC.

Before you begin

  1. To run the instructions on this page, you must be a system:admin user or a user with the privileges to create Projects, CRDs, and RBAC resources at the cluster level.

  2. Set virtual memory settings on the Kubernetes nodes.

    Before deploying an Elasticsearch cluster with ECK, make sure that the Kubernetes nodes in your cluster have the correct vm.max_map_count sysctl setting applied. By default, Pods created by ECK are likely to run with the restricted Security Context Constraint (SCC), which blocks the privileged access required to change this setting on the underlying Kubernetes nodes. If your cluster allows privileged init containers, the setting can be applied from the Pods themselves, as sketched after this list.

    Alternatively, you can opt for setting node.store.allow_mmap: false at the Elasticsearch node configuration level. This has performance implications and is not recommended for production workloads.

    For more information, check [{p}-virtual-memory].
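
If your cluster administrator grants an SCC that allows privileged init containers, the sysctl can be applied from the Elasticsearch Pods themselves. A minimal sketch of that approach (the nodeSet layout is illustrative):

apiVersion: elasticsearch.k8s.elastic.co/{eck_crd_version}
kind: Elasticsearch
metadata:
  name: elasticsearch-sample
spec:
  version: {version}
  nodeSets:
  - name: default
    count: 1
    podTemplate:
      spec:
        initContainers:
        # requires an SCC that allows privileged containers running as root
        - name: sysctl
          securityContext:
            privileged: true
            runAsUser: 0
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']

The sample manifests later on this page take the simpler route and set node.store.allow_mmap: false instead.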

Deploy the operator

  1. Install the custom resource definitions and the operator manifest, as described in the quickstart:

    oc create -f https://download.elastic.co/downloads/eck/{eck_version}/crds.yaml
    oc apply -f https://download.elastic.co/downloads/eck/{eck_version}/operator.yaml
  2. [Optional] If the Software Defined Network is configured with the ovs-multitenant plug-in, you must allow the elastic-system namespace to access other Pods and Services in the cluster:

    oc adm pod-network make-projects-global elastic-system
  3. Create a namespace to hold the Elastic resources ({eck_resources_list}):

    oc new-project elastic # creates the elastic project
  4. [Optional] Allow another user or a group of users to manage the Elastic resources:

    oc adm policy add-role-to-user elastic-operator developer -n elastic

    In this example, the user developer is allowed to manage Elastic resources in the namespace elastic.
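
    To grant the same role to a whole group of users instead, you can use add-role-to-group. In the sketch below, my-developers is a hypothetical group name:

    oc adm policy add-role-to-group elastic-operator my-developers -n elastic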

Deploy an Elasticsearch instance with a route

Use the following code to create an Elasticsearch cluster elasticsearch-sample and a "passthrough" route to access it:

cat <<EOF | oc apply -n elastic -f -
# This sample sets up an Elasticsearch cluster with an OpenShift route
apiVersion: elasticsearch.k8s.elastic.co/{eck_crd_version}
kind: Elasticsearch
metadata:
  name: elasticsearch-sample
spec:
  version: {version}
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: elasticsearch-sample
spec:
  #host: elasticsearch.example.com # override if you don't want to use the host that is automatically generated by OpenShift (<route-name>[-<namespace>].<suffix>)
  tls:
    termination: passthrough # Elasticsearch is the TLS endpoint
    insecureEdgeTerminationPolicy: Redirect
  to:
    kind: Service
    name: elasticsearch-sample-es-http
EOF
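
After applying the manifest, you can follow the cluster as it forms; the HEALTH column should eventually report green:

oc get elasticsearch -n elastic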

Elasticsearch plugins

Elasticsearch plugins cannot be installed at runtime in most OpenShift environments. This is because the plugin installer must run as root, but Elasticsearch is restricted from running as root. To add plugins to Elasticsearch, you can use custom images as described in [{p}-custom-images].
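
As a sketch of such a custom image, the following Dockerfile installs a plugin at build time, so no root access is needed at runtime (analysis-icu is used here purely as an example):

FROM docker.elastic.co/elasticsearch/elasticsearch:{version}
RUN bin/elasticsearch-plugin install --batch analysis-icu

Reference the resulting image in the spec.image field of the Elasticsearch resource.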

Deploy a Kibana instance with a route

Use the following code to create a Kibana instance and a "passthrough" route to access it:

cat <<EOF | oc apply -n elastic -f -
apiVersion: kibana.k8s.elastic.co/{eck_crd_version}
kind: Kibana
metadata:
  name: kibana-sample
spec:
  version: {version}
  count: 1
  elasticsearchRef:
    name: "elasticsearch-sample"
  podTemplate:
    spec:
      containers:
      - name: kibana
        resources:
          limits:
            memory: 1Gi
            cpu: 1
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: kibana-sample
spec:
  #host: kibana.example.com # override if you don't want to use the host that is automatically generated by OpenShift (<route-name>[-<namespace>].<suffix>)
  tls:
    termination: passthrough # Kibana is the TLS endpoint
    insecureEdgeTerminationPolicy: Redirect
  to:
    kind: Service
    name: kibana-sample-kb-http
EOF

Use the following command to get the host of each Route:

oc get route -n elastic
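
To query Elasticsearch through its Route, you need the password of the elastic user, which ECK stores in the secret elasticsearch-sample-es-elastic-user. A minimal sketch (the -k flag skips verification of the self-signed certificate and is for illustration only):

PASSWORD=$(oc get secret elasticsearch-sample-es-elastic-user -n elastic -o go-template='{{.data.elastic | base64decode}}')
curl -u "elastic:$PASSWORD" -k "https://$(oc get route elasticsearch-sample -n elastic -o jsonpath='{.spec.host}')"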

Deploy Docker images with anyuid SCC

Starting with version 7.9, it is possible to run the APM Server with the restricted SCC. For APM Server versions older than 7.9 and for Enterprise Search versions 7.9 and 7.10, you can use the following workaround, which allows the Pod to run with the default uid 1000 by assigning it to the anyuid SCC:

  1. Create a service account to run the APM Server:

    oc create serviceaccount apm-server -n elastic
  2. Add the APM service account to the anyuid SCC:

    oc adm policy add-scc-to-user anyuid -z apm-server -n elastic
    scc "anyuid" added to: ["system:serviceaccount:elastic:apm-server"]
  3. Deploy an APM Server and a Route with the following manifest:

    cat <<EOF | oc apply -n elastic -f -
    apiVersion: apm.k8s.elastic.co/{eck_crd_version}
    kind: ApmServer
    metadata:
      name: apm-server-sample
    spec:
      version: {version}
      count: 1
      elasticsearchRef:
        name: "elasticsearch-sample"
      podTemplate:
        spec:
          serviceAccountName: apm-server
    ---
    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: apm-server-sample
    spec:
      #host: apm-server.example.com # override if you don't want to use the host that is automatically generated by OpenShift (<route-name>[-<namespace>].<suffix>)
      tls:
        termination: passthrough # the APM Server is the TLS endpoint
        insecureEdgeTerminationPolicy: Redirect
      to:
        kind: Service
        name: apm-server-sample-apm-http
    EOF

    To check that the Pod of the APM Server is using the correct SCC, use the following command:

    oc get pod -o go-template='{{range .items}}{{$scc := index .metadata.annotations "openshift.io/scc"}}{{.metadata.name}}{{" scc:"}}{{range .spec.containers}}{{$scc}}{{" "}}{{"\n"}}{{end}}{{end}}'
    apm-server-sample-apm-server-86bfc5c95c-96lbx scc:anyuid
    elasticsearch-sample-es-5tsqghmm79 scc:restricted
    elasticsearch-sample-es-6qk52mz5jk scc:restricted
    elasticsearch-sample-es-dg4vvpm2mr scc:restricted
    kibana-sample-kb-97c6b6b8d-lqfd2 scc:restricted
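
    APM agents authenticate to the APM Server with a secret token. ECK stores it in a Secret named after the resource; for the example above this is apm-server-sample-apm-token. A minimal sketch to retrieve it:

    oc get secret apm-server-sample-apm-token -n elastic -o go-template='{{index .data "secret-token" | base64decode}}'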

Grant privileged permissions to Beats

Deploying Beats on OpenShift may require some privileged permissions. This section describes how to create a ServiceAccount, add the ServiceAccount to the privileged SCC, and use it to run Beats.

The following example assumes that Beats is deployed in the Namespace elastic with the ServiceAccount heartbeat. You can replace these values according to your environment.

Note
If you used the examples from the recipes directory, the ServiceAccount may already exist.
  1. Create a dedicated ServiceAccount:

    oc create serviceaccount heartbeat -n elastic
  2. Add the ServiceAccount to the required SCC:

    oc adm policy add-scc-to-user privileged -z heartbeat -n elastic
  3. Update the Beat manifest to use the new ServiceAccount, for example:

    apiVersion: beat.k8s.elastic.co/v1beta1
    kind: Beat
    metadata:
      name: heartbeat
    spec:
      type: heartbeat
      version: {version}
      elasticsearchRef:
        name: elasticsearch
      config:
        heartbeat.monitors:
        - type: tcp
          schedule: '@every 5s'
          hosts: ["elasticsearch-es-http.default.svc:9200"]
        - type: tcp
          schedule: '@every 5s'
          hosts: ["kibana-kb-http.default.svc:5601"]
      deployment:
        replicas: 1
        podTemplate:
          spec:
            serviceAccountName: heartbeat
            securityContext:
              runAsUser: 0

If SELinux is enabled, the Beat Pod might fail with the following message:

Exiting: Failed to create Beat meta file: open /usr/share/heartbeat/data/meta.json.new: permission denied

To fix this error, apply the SELinux label svirt_sandbox_file_t to the directory /var/lib/elastic/heartbeat/heartbeat-data/ on the Kubernetes node:

chcon -Rt svirt_sandbox_file_t /var/lib/elastic/heartbeat/heartbeat-data/

Repeat this step on all the hosts where the heartbeat Pod can be deployed.
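
To list the hosts where heartbeat Pods are currently scheduled, you can filter on the labels applied by ECK. The beat.k8s.elastic.co/name label below is an assumption based on the usual ECK labelling scheme; adjust it if your Pods carry different labels:

oc get pod -n elastic -l beat.k8s.elastic.co/name=heartbeat -o wide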

Some Beats may require additional permissions. For example, Filebeat needs additional privileges to read other container logs on the host. In this case, you can use the privileged field in the security context of the container spec:

apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: filebeat
spec:
  type: filebeat
...
  daemonSet:
    podTemplate:
      spec:
        serviceAccountName: filebeat
        automountServiceAccountToken: true
...
        containers:
        - name: filebeat
          securityContext:
            runAsUser: 0
            privileged: true # required to read other containers' logs on the host
          volumeMounts:
          - name: varlibdockercontainers
            mountPath: /var/lib/docker/containers
        volumes:
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers

Check the complete examples in the recipes directory.

Grant host access permission to Elastic Agent

Deploying Elastic Agent on OpenShift may require additional permissions, depending on the integrations Elastic Agent runs. In any case, Elastic Agent uses a hostPath volume as its data directory on OpenShift to maintain a stable identity. Therefore, the ServiceAccount used for Elastic Agent needs permission to use hostPath volumes.

The following example assumes that Elastic Agent is deployed in the Namespace elastic with the ServiceAccount elastic-agent. You can replace these values according to your environment.

Note
If you used the examples from the recipes directory, the ServiceAccount may already exist.
  1. Create a dedicated ServiceAccount:

    oc create serviceaccount elastic-agent -n elastic
  2. Add the ServiceAccount to the required SCC:

    oc adm policy add-scc-to-user hostaccess -z elastic-agent -n elastic
  3. Update the Elastic Agent manifest to use the new ServiceAccount, for example:

    apiVersion: agent.k8s.elastic.co/v1alpha1
    kind: Agent
    metadata:
      name: my-agent
    spec:
      version: {version}
      daemonSet:
        podTemplate:
          spec:
            serviceAccountName: elastic-agent
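
After applying the manifest, you can confirm that the Agent resource becomes healthy; the short resource name agent is served by the ECK CRDs:

oc get agent -n elastic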