
User Guide

This guide walks through an example of building a simple memcached-operator using the operator-sdk CLI tool and controller-runtime library API. To learn how to use Ansible or Helm to create an operator, see the Ansible Operator User Guide or the Helm Operator User Guide. The rest of this document will show how to program an operator in Go.

Prerequisites

  • dep version v0.5.0+.
  • git
  • go version v1.10+.
  • docker version 17.03+.
  • kubectl version v1.11.3+.
  • Access to a Kubernetes v1.11.3+ cluster.

Note: This guide uses minikube version v0.25.0+ as the local Kubernetes cluster and quay.io for the public registry.

Install the Operator SDK CLI

The Operator SDK has a CLI tool that helps developers create, build, and deploy a new operator project.

Check out the desired release tag and install the SDK CLI tool:

$ mkdir -p $GOPATH/src/github.com/operator-framework
$ cd $GOPATH/src/github.com/operator-framework
$ git clone https://github.com/operator-framework/operator-sdk
$ cd operator-sdk
$ git checkout master
$ make dep
$ make install

This installs the CLI binary operator-sdk at $GOPATH/bin.

Alternatively, if you are using Homebrew, you can install the SDK CLI tool with the following command:

$ brew install operator-sdk

Create a new project

Use the CLI to create a new memcached-operator project:

$ mkdir -p $GOPATH/src/github.com/example-inc/
$ cd $GOPATH/src/github.com/example-inc/
$ operator-sdk new memcached-operator
$ cd memcached-operator

To learn about the project directory structure, see the project layout doc.

Operator scope

A namespace-scoped operator (the default) watches and manages resources in a single namespace, whereas a cluster-scoped operator watches and manages resources cluster-wide. Namespace-scoped operators are preferred because of their flexibility. They enable decoupled upgrades, namespace isolation for failures and monitoring, and differing API definitions. However, there are use cases where a cluster-scoped operator may make sense. For example, the cert-manager operator is often deployed with cluster-scoped permissions and watches so that it can manage issuing certificates for an entire cluster.

If you'd like your memcached-operator project to be cluster-scoped, use the following operator-sdk new command instead:

$ operator-sdk new memcached-operator --cluster-scoped

Using --cluster-scoped will scaffold the new operator with the following modifications:

  • deploy/operator.yaml - Set WATCH_NAMESPACE="" instead of setting it to the pod's namespace
  • deploy/role.yaml - Use ClusterRole instead of Role
  • deploy/role_binding.yaml:
    • Use ClusterRoleBinding instead of RoleBinding
    • Use ClusterRole instead of Role for roleRef
    • Set the subject namespace to REPLACE_NAMESPACE. This must be changed to the namespace in which the operator is deployed.

Manager

The main program for the operator, cmd/manager/main.go, initializes and runs the Manager.

The Manager will automatically register the scheme for all custom resources defined under pkg/apis/... and run all controllers under pkg/controller/....

The Manager can restrict the namespace that all controllers will watch for resources:

mgr, err := manager.New(cfg, manager.Options{Namespace: namespace})

By default this will be the namespace in which the operator is running. To watch all namespaces, leave the namespace option empty:

mgr, err := manager.New(cfg, manager.Options{Namespace: ""})
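
The scaffolded cmd/manager/main.go typically reads this namespace from the WATCH_NAMESPACE environment variable set in deploy/operator.yaml. A minimal sketch of that wiring, assuming the SDK's k8sutil helper:

import (
  "os"

  "github.com/operator-framework/operator-sdk/pkg/k8sutil"
  "sigs.k8s.io/controller-runtime/pkg/manager"
)

// GetWatchNamespace returns the value of WATCH_NAMESPACE, which
// deploy/operator.yaml sets to the pod's own namespace by default.
namespace, err := k8sutil.GetWatchNamespace()
if err != nil {
  log.Error(err, "Failed to get watch namespace")
  os.Exit(1)
}

mgr, err := manager.New(cfg, manager.Options{Namespace: namespace})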

Add a new Custom Resource Definition

Add a new Custom Resource Definition (CRD) API called Memcached, with APIVersion cache.example.com/v1alpha1 and Kind Memcached.

$ operator-sdk add api --api-version=cache.example.com/v1alpha1 --kind=Memcached

This will scaffold the Memcached resource API under pkg/apis/cache/v1alpha1/....
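
The scaffolded type embeds the standard Kubernetes type and object metadata; trimmed for brevity, its shape is roughly:

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// Memcached is the Schema for the memcacheds API
type Memcached struct {
  metav1.TypeMeta   `json:",inline"`
  metav1.ObjectMeta `json:"metadata,omitempty"`

  Spec   MemcachedSpec   `json:"spec,omitempty"`
  Status MemcachedStatus `json:"status,omitempty"`
}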

Define the spec and status

Modify the spec and status of the Memcached Custom Resource (CR) at pkg/apis/cache/v1alpha1/memcached_types.go:

type MemcachedSpec struct {
	// Size is the size of the memcached deployment
	Size int32 `json:"size"`
}
type MemcachedStatus struct {
	// Nodes are the names of the memcached pods
	Nodes []string `json:"nodes"`
}

After modifying the *_types.go file, always run the following command to update the generated code for that resource type:

$ operator-sdk generate k8s
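
This regenerates the deepcopy code (e.g. pkg/apis/cache/v1alpha1/zz_generated.deepcopy.go) so the new fields are copied correctly. For illustration only, the generated code for the status type has roughly this shape:

// Illustrative shape only - the real file is produced by operator-sdk generate k8s.
func (in *MemcachedStatus) DeepCopyInto(out *MemcachedStatus) {
  *out = *in
  if in.Nodes != nil {
    in, out := &in.Nodes, &out.Nodes
    *out = make([]string, len(*in))
    copy(*out, *in)
  }
}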

Add a new Controller

Add a new Controller to the project that will watch and reconcile the Memcached resource:

$ operator-sdk add controller --api-version=cache.example.com/v1alpha1 --kind=Memcached

This will scaffold a new Controller implementation under pkg/controller/memcached/....

For this example replace the generated Controller file pkg/controller/memcached/memcached_controller.go with the example memcached_controller.go implementation.

The example Controller executes the following reconciliation logic for each Memcached CR:

  • Create a memcached Deployment if it doesn't exist
  • Ensure that the Deployment size is the same as specified by the Memcached CR spec
  • Update the Memcached CR status using the status writer with the names of the memcached pods
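
Condensed, that logic looks roughly like the following sketch (error handling trimmed; deploymentForMemcached and podNames come from the example implementation):

// Condensed sketch of the example's reconcile logic.
found := &appsv1.Deployment{}
err = r.client.Get(context.TODO(), types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found)
if err != nil && errors.IsNotFound(err) {
  // The Deployment doesn't exist yet - create it and requeue.
  dep := r.deploymentForMemcached(memcached)
  return reconcile.Result{Requeue: true}, r.client.Create(context.TODO(), dep)
}

// Ensure the Deployment size matches the CR spec.
size := memcached.Spec.Size
if *found.Spec.Replicas != size {
  found.Spec.Replicas = &size
  return reconcile.Result{Requeue: true}, r.client.Update(context.TODO(), found)
}

// Update the CR status with the pod names (gathered by listing pods
// whose labels match the CR) via the status writer.
memcached.Status.Nodes = podNames
err = r.client.Status().Update(context.TODO(), memcached)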

The next two subsections explain how the Controller watches resources and how the reconcile loop is triggered. Skip to the Build section to see how to build and run the operator.

Resources watched by the Controller

Inspect the Controller implementation at pkg/controller/memcached/memcached_controller.go to see how the Controller watches resources.

The first watch is for the Memcached type as the primary resource. For each Add/Update/Delete event the reconcile loop will be sent a reconcile Request (a namespace/name key) for that Memcached object:

err := c.Watch(
  &source.Kind{Type: &cachev1alpha1.Memcached{}}, &handler.EnqueueRequestForObject{})

The next watch is for Deployments, but the event handler will map each event to a reconcile Request for the owner of the Deployment, which in this case is the Memcached object for which the Deployment was created. This allows the controller to watch Deployments as a secondary resource.

err := c.Watch(&source.Kind{Type: &appsv1.Deployment{}}, &handler.EnqueueRequestForOwner{
    IsController: true,
    OwnerType:    &cachev1alpha1.Memcached{},
  })


Reconcile loop

Every Controller has a Reconciler object with a Reconcile() method that implements the reconcile loop. The reconcile loop is passed the Request argument, which is a Namespace/Name key used to look up the primary resource object, Memcached, from the cache:

func (r *ReconcileMemcached) Reconcile(request reconcile.Request) (reconcile.Result, error) {
  // Lookup the Memcached instance for this reconcile request
  memcached := &cachev1alpha1.Memcached{}
  err := r.client.Get(context.TODO(), request.NamespacedName, memcached)
  ...
}  
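
The elided error handling typically distinguishes a deleted CR from a real read error; a sketch of the usual pattern:

if err != nil {
  if errors.IsNotFound(err) {
    // The CR was deleted after the Request was queued. Owned objects
    // are garbage collected, so there is nothing left to do.
    return reconcile.Result{}, nil
  }
  // Error reading the object - requeue the Request.
  return reconcile.Result{}, err
}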

Based on the return values, Result and error, the Request may be requeued and the reconcile loop may be triggered again:

// Reconcile successful - don't requeue
return reconcile.Result{}, nil
// Reconcile failed due to error - requeue
return reconcile.Result{}, err
// Requeue for any reason other than error
return reconcile.Result{Requeue: true}, nil

You can set the Result.RequeueAfter to requeue the Request after a grace period as well:

import "time"

// Requeue for any reason other than an error, after 5 seconds
return reconcile.Result{RequeueAfter: time.Second*5}, nil

Note: Returning Result with RequeueAfter set is how you can periodically reconcile a CR.

For a guide on Reconcilers, Clients, and interacting with resource Events, see the Client API doc.

Build and run the operator

Before running the operator, the CRD must be registered with the Kubernetes apiserver:

$ kubectl create -f deploy/crds/cache_v1alpha1_memcached_crd.yaml

Once this is done, there are two ways to run the operator:

  • As a Deployment inside a Kubernetes cluster
  • As a Go program outside a cluster

1. Run as a Deployment inside the cluster

Build the memcached-operator image and push it to a registry:

$ operator-sdk build quay.io/example/memcached-operator:v0.0.1
$ sed -i 's|REPLACE_IMAGE|quay.io/example/memcached-operator:v0.0.1|g' deploy/operator.yaml
$ docker push quay.io/example/memcached-operator:v0.0.1

If you created your operator using the --cluster-scoped flag, update the service account namespace in the generated ClusterRoleBinding to match where you are deploying your operator:

$ export OPERATOR_NAMESPACE=$(kubectl config view --minify -o jsonpath='{.contexts[0].context.namespace}')
$ sed -i "s|REPLACE_NAMESPACE|$OPERATOR_NAMESPACE|g" deploy/role_binding.yaml

Note: If you are performing these steps on macOS, use the following commands instead:

$ sed -i "" 's|REPLACE_IMAGE|quay.io/example/memcached-operator:v0.0.1|g' deploy/operator.yaml
$ sed -i "" "s|REPLACE_NAMESPACE|$OPERATOR_NAMESPACE|g" deploy/role_binding.yaml

The Deployment manifest is generated at deploy/operator.yaml. Be sure to update the deployment image as shown above since the default is just a placeholder.

Setup RBAC and deploy the memcached-operator:

$ kubectl create -f deploy/service_account.yaml
$ kubectl create -f deploy/role.yaml
$ kubectl create -f deploy/role_binding.yaml
$ kubectl create -f deploy/operator.yaml

Verify that the memcached-operator is up and running:

$ kubectl get deployment
NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
memcached-operator       1         1         1            1           1m

2. Run locally outside the cluster

This method is preferred during the development cycle since it allows for faster deployment and testing.

Set the name of the operator in an environment variable:

export OPERATOR_NAME=memcached-operator

Run the operator locally with the default Kubernetes config file present at $HOME/.kube/config:

$ operator-sdk up local --namespace=default
2018/09/30 23:10:11 Go Version: go1.10.2
2018/09/30 23:10:11 Go OS/Arch: darwin/amd64
2018/09/30 23:10:11 operator-sdk Version: 0.0.6+git
2018/09/30 23:10:12 Registering Components.
2018/09/30 23:10:12 Starting the Cmd.

You can use a specific kubeconfig via the flag --kubeconfig=<path/to/kubeconfig>.

Create a Memcached CR

Create the example Memcached CR that was generated at deploy/crds/cache_v1alpha1_memcached_cr.yaml:

$ cat deploy/crds/cache_v1alpha1_memcached_cr.yaml
apiVersion: "cache.example.com/v1alpha1"
kind: "Memcached"
metadata:
  name: "example-memcached"
spec:
  size: 3

$ kubectl apply -f deploy/crds/cache_v1alpha1_memcached_cr.yaml

Ensure that the memcached-operator creates the deployment for the CR:

$ kubectl get deployment
NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
memcached-operator       1         1         1            1           2m
example-memcached        3         3         3            3           1m

Check the pods and CR status to confirm the status is updated with the memcached pod names:

$ kubectl get pods
NAME                                  READY     STATUS    RESTARTS   AGE
example-memcached-6fd7c98d8-7dqdr     1/1       Running   0          1m
example-memcached-6fd7c98d8-g5k7v     1/1       Running   0          1m
example-memcached-6fd7c98d8-m7vn7     1/1       Running   0          1m
memcached-operator-7cc7cfdf86-vvjqk   1/1       Running   0          2m
$ kubectl get memcached/example-memcached -o yaml
apiVersion: cache.example.com/v1alpha1
kind: Memcached
metadata:
  clusterName: ""
  creationTimestamp: 2018-03-31T22:51:08Z
  generation: 0
  name: example-memcached
  namespace: default
  resourceVersion: "245453"
  selfLink: /apis/cache.example.com/v1alpha1/namespaces/default/memcacheds/example-memcached
  uid: 0026cc97-3536-11e8-bd83-0800274106a1
spec:
  size: 3
status:
  nodes:
  - example-memcached-6fd7c98d8-7dqdr
  - example-memcached-6fd7c98d8-g5k7v
  - example-memcached-6fd7c98d8-m7vn7

Update the size

Change the spec.size field in the memcached CR from 3 to 4 and apply the change:

$ cat deploy/crds/cache_v1alpha1_memcached_cr.yaml
apiVersion: "cache.example.com/v1alpha1"
kind: "Memcached"
metadata:
  name: "example-memcached"
spec:
  size: 4

$ kubectl apply -f deploy/crds/cache_v1alpha1_memcached_cr.yaml

Confirm that the operator changes the deployment size:

$ kubectl get deployment
NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
example-memcached    4         4         4            4           5m

Cleanup

Clean up the resources:

$ kubectl delete -f deploy/crds/cache_v1alpha1_memcached_cr.yaml
$ kubectl delete -f deploy/operator.yaml
$ kubectl delete -f deploy/role_binding.yaml
$ kubectl delete -f deploy/role.yaml
$ kubectl delete -f deploy/service_account.yaml

Advanced Topics

Adding 3rd Party Resources To Your Operator

The operator's Manager supports the Core Kubernetes resource types as found in the client-go scheme package and will also register the schemes of all custom resource types defined in your project under pkg/apis.

import (
  "github.com/example-inc/memcached-operator/pkg/apis"
  ...
)
// Setup Scheme for all resources
if err := apis.AddToScheme(mgr.GetScheme()); err != nil {
  log.Error(err, "")
  os.Exit(1)
}

To add a 3rd party resource to an operator, you must register it with the Manager's scheme. You can do this by calling the resource's AddToScheme function, either reusing the one exported by the resource's API package or defining your own with a SchemeBuilder from the runtime package, as sketched below.
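
If the resource's package does not export an AddToScheme, you can define one by hand. A minimal sketch, using a hypothetical Widget type and API group:

import (
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  "k8s.io/apimachinery/pkg/runtime"
  "k8s.io/apimachinery/pkg/runtime/schema"
)

// SchemeGroupVersion identifies the hypothetical 3rd party API group.
var SchemeGroupVersion = schema.GroupVersion{Group: "widgets.example.com", Version: "v1"}

var (
  SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes)
  AddToScheme   = SchemeBuilder.AddToScheme
)

func addKnownTypes(scheme *runtime.Scheme) error {
  scheme.AddKnownTypes(SchemeGroupVersion, &Widget{}, &WidgetList{})
  metav1.AddToGroupVersion(scheme, SchemeGroupVersion)
  return nil
}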

Register with the Manager's scheme

Call the AddToScheme() function for your 3rd party resource and pass it the Manager's scheme via mgr.GetScheme().

Example:

import (
    ....
    routev1 "github.com/openshift/api/route/v1"
)

func main() {
    ....
    if err := routev1.AddToScheme(mgr.GetScheme()); err != nil {
      log.Error(err, "")
      os.Exit(1)
    }
    ....
}

After adding new import paths to your operator project, run dep ensure in the root of your project directory to fulfill these dependencies.

Handle Cleanup on Deletion

To implement complex deletion logic, you can add a finalizer to your Custom Resource. This will prevent your Custom Resource from being deleted until you remove the finalizer (i.e., after your cleanup logic has run successfully). For more information, see the official Kubernetes documentation on finalizers.
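
A minimal sketch of what finalizer handling looks like inside Reconcile, assuming a hypothetical finalizer name and containsString/removeString/cleanupExternalResources helpers:

// The finalizer name and helpers below are illustrative assumptions.
const memcachedFinalizer = "finalizer.cache.example.com"

if memcached.GetDeletionTimestamp() != nil {
  if containsString(memcached.GetFinalizers(), memcachedFinalizer) {
    // Run cleanup, then remove the finalizer so deletion can proceed.
    if err := r.cleanupExternalResources(memcached); err != nil {
      return reconcile.Result{}, err
    }
    memcached.SetFinalizers(removeString(memcached.GetFinalizers(), memcachedFinalizer))
    return reconcile.Result{}, r.client.Update(context.TODO(), memcached)
  }
  return reconcile.Result{}, nil
}

// The CR is not being deleted - make sure the finalizer is registered.
if !containsString(memcached.GetFinalizers(), memcachedFinalizer) {
  memcached.SetFinalizers(append(memcached.GetFinalizers(), memcachedFinalizer))
  if err := r.client.Update(context.TODO(), memcached); err != nil {
    return reconcile.Result{}, err
  }
}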

Metrics

To learn about how metrics work in the Operator SDK read the metrics section of the user documentation.

Leader election

During the lifecycle of an operator it's possible that there may be more than one instance running at any given time, e.g. when rolling out an upgrade for the operator. In such a scenario it is necessary to avoid contention between multiple operator instances via leader election, so that only one leader instance handles the reconciliation while the other instances are inactive but ready to take over when the leader steps down.

There are two different leader election implementations to choose from, each with its own tradeoff.

  • Leader-for-life: The leader pod only gives up leadership (via garbage collection) when it is deleted. This implementation precludes the possibility of two instances mistakenly running as leaders (split brain). However, this method can be subject to a delay in electing a new leader. For instance, when the leader pod is on an unresponsive or partitioned node, the pod-eviction-timeout dictates how long it takes for the leader pod to be deleted from the node and step down (default 5m).
  • Leader-with-lease: The leader pod periodically renews the leader lease and gives up leadership when it can't renew the lease. This implementation allows for a faster transition to a new leader when the existing leader is isolated, but there is a possibility of split brain in certain situations.

By default the SDK enables the leader-for-life implementation. However, you should consult the docs above for both approaches to consider the tradeoffs that make sense for your use case.

The following examples illustrate how to use the two options:

Leader for life

A call to leader.Become() will block the operator, retrying until it can become the leader by creating the configmap named memcached-operator-lock.

import (
  ...
  "github.com/operator-framework/operator-sdk/pkg/leader"
)

func main() {
  ...
  err = leader.Become(context.TODO(), "memcached-operator-lock")
  if err != nil {
    log.Error(err, "Failed to retry for leader lock")
    os.Exit(1)
  }
  ...
}

If the operator is not running inside a cluster, leader.Become() will simply return without error and skip leader election, since it can't detect the operator's namespace.

Leader with lease

The leader-with-lease approach can be enabled via the Manager Options for leader election.

import (
  ...
  "sigs.k8s.io/controller-runtime/pkg/manager"
)

func main() {
  ...
  opts := manager.Options{
    ...
    LeaderElection: true,
    LeaderElectionID: "memcached-operator-lock",
  }
  mgr, err := manager.New(cfg, opts)
  ...
}

When the operator is not running in a cluster, the Manager will return an error on starting since it can't detect the operator's namespace in order to create the configmap for leader election. You can override this namespace by setting the Manager's LeaderElectionNamespace option.
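
For example, to keep leader election working when developing locally (a sketch; the namespace value here is just an assumption):

opts := manager.Options{
  LeaderElection:   true,
  LeaderElectionID: "memcached-operator-lock",
  // Required when running outside a cluster; "default" is only an example.
  LeaderElectionNamespace: "default",
}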