
Decision: Service discovery and exposition #54

@siegfriedweber

Description

Most Stackable operators deploy a discovery ConfigMap with the same name as the cluster. These ConfigMaps contain connection strings or configuration files tailored to the clients of the respective product; see the sketch below.
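
For illustration, such a discovery ConfigMap might look like the following sketch, loosely modeled on the one the Stackable operator for Apache ZooKeeper provides (the key name and the value format are illustrative, not authoritative):

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: simple-zk # same name as the ZookeeperCluster
data:
  # a ready-to-use connection string listing all servers
  ZOOKEEPER: simple-zk-server-default-0.simple-zk-server-default.default.svc.cluster.local:2181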

The OpenSearch documentation lists several clients, see https://docs.opensearch.org/latest/clients/. In the example for the Python client, a list of OpenSearch hosts can be given; in the example for the Java client, only a single OpenSearch host can be set. There is no common connection-string format: some clients take a full URL (e.g. Go), while others require separate values for the protocol, hostname and port (e.g. Python).

Part of #52

Solution in the OpenSearch Helm chart

The Helm chart does not provide a solution. If nodes with different roles are to be deployed, separate Helm deployments are necessary. The pods of the different Helm deployments form a single OpenSearch cluster because the masterService is defined in every deployment, but there is no Service or ConfigMap for the whole cluster; there are only Services for each deployment, as sketched below.
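
For reference, deploying two node roles with the Helm chart would look roughly like this: two releases whose values files share the same masterService. The value keys are taken from the upstream chart as I recall them; treat this as an approximation:

---
# values-cluster-manager.yaml
clusterName: opensearch
nodeGroup: cluster-manager
masterService: opensearch-cluster-manager
roles:
  - cluster_manager
---
# values-data.yaml
clusterName: opensearch
nodeGroup: data
masterService: opensearch-cluster-manager # the shared masterService joins both deployments into one cluster
roles:
  - data

Each values file would be installed as its own release (e.g. helm install opensearch-data opensearch/opensearch -f values-data.yaml); only the per-deployment Services are created, nothing for the cluster as a whole.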

Solution in the OpenSearch Kubernetes Operator

The operator deploys a Service that references all pods, as well as a Service for each node pool. It is not possible to exclude a node pool from the cluster-wide Service.
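
For comparison, a node pool definition for that operator looks roughly as follows (field names reproduced from the upstream CRD as far as I recall them; the operator then creates a Service for all pods plus one per node pool):

---
apiVersion: opensearch.opster.io/v1
kind: OpenSearchCluster
metadata:
  name: my-cluster
spec:
  nodePools:
    - component: cluster-managers
      replicas: 3
      roles:
        - cluster_manager
    - component: data-nodes
      replicas: 3
      roles:
        - data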

Proposed solution for the Stackable OpenSearch operator

There are already role-group Services backed by ListenerClasses, and there is a "discovery" Service which references all pods with the node role "cluster_manager". The role-group Services can stay. A Service acting as the entrypoint to the whole cluster is also preferable, especially since some clients accept only a single host. There is also no straightforward way to list a set of Services in a ConfigMap (HOST1: host1, HOST2: host2 or HOSTS: host1,host2).

The current discovery Service is "cluster-internal" and is used for the initialization of the cluster. It is not backed by a ListenerClass, and it references pods with a pre-defined node role. This Service must be renamed, e.g. to <cluster-name>-discovery, as is the case in the other operator.

The new Service that is meant as the entrypoint should be named <cluster-name>-<role-name>, be backed by a ListenerClass, and reference role-groups selected by the user rather than hard-coded node roles. For instance, the example in the OpenSearch documentation uses a coordinating node instead of a cluster_manager node as the entrypoint to the cluster. The cluster definition could be extended as follows:

---
apiVersion: opensearch.stackable.tech/v1alpha1
kind: OpenSearchCluster
spec:
  nodes:
    roleConfig:
      listenerClass: <string> # ListenerClass for the new role service; defaults to "cluster-internal"
    roleGroups:
      role-group:
        expose: <boolean> # Determines if this role-group is exposed in the role service;
                          # defaults to "true"
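
A concrete cluster following this scheme, with a coordinating role-group as the only exposed entrypoint, might then look like this (abbreviated sketch; replicas and further role-group configuration omitted):

---
apiVersion: opensearch.stackable.tech/v1alpha1
kind: OpenSearchCluster
metadata:
  name: opensearch
spec:
  nodes:
    roleConfig:
      listenerClass: external-stable # preset ListenerClass of the listener-operator
    roleGroups:
      cluster-manager:
        expose: false # keep the cluster_manager nodes out of the role service
      coordinating:
        expose: true # coordinating nodes serve as the entrypoint

The resulting role Service would be named opensearch-nodes and would reference only the pods of the coordinating role-group.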

The ListenerClass can be specified at the role level. There is only the role "nodes", so it could also be defined at the cluster level; but if OpenSearch Dashboards is ever added to this operator rather than to a separate one, it will get its own role, and the Service for the OpenSearch backend should not also cover the frontend.

The discovery ConfigMap would then be named <cluster-name>-<role-name>, e.g. opensearch-nodes:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: opensearch-nodes
data:
  protocol: <string> # e.g. "https"
  host: <string> # e.g. "opensearch-nodes.default.svc.cluster.local" or whatever is determined
                 # by the listener-operator
  port: <string> # e.g. "9200"; ConfigMap data values are always strings, even for numbers
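
A client pod could then consume these values as environment variables in the usual way, e.g. via configMapKeyRef (the pod name and image are placeholders):

---
apiVersion: v1
kind: Pod
metadata:
  name: opensearch-client
spec:
  containers:
    - name: client
      image: opensearch-client:latest # placeholder image
      env:
        - name: OPENSEARCH_PROTOCOL
          valueFrom:
            configMapKeyRef:
              name: opensearch-nodes
              key: protocol
        - name: OPENSEARCH_HOST
          valueFrom:
            configMapKeyRef:
              name: opensearch-nodes
              key: host
        - name: OPENSEARCH_PORT
          valueFrom:
            configMapKeyRef:
              name: opensearch-nodes
              key: port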
