From 89fbb953182b21ef03833b9a4be9d560943d6249 Mon Sep 17 00:00:00 2001 From: Camila Macedo Date: Wed, 6 May 2020 17:22:25 +0100 Subject: [PATCH 1/3] doc: add cluster-scope for kb layout --- website/content/en/docs/crds-scope.md | 92 ++++++ website/content/en/docs/faq.md | 2 +- .../content/en/docs/kubebuilder/crds-scope.md | 95 ++++++ .../en/docs/kubebuilder/operator-scope.md | 299 ++++++++++++++++++ website/content/en/docs/operator-scope.md | 85 +---- website/content/en/docs/versioning.md | 2 +- 6 files changed, 493 insertions(+), 82 deletions(-) create mode 100644 website/content/en/docs/crds-scope.md create mode 100644 website/content/en/docs/kubebuilder/crds-scope.md create mode 100644 website/content/en/docs/kubebuilder/operator-scope.md diff --git a/website/content/en/docs/crds-scope.md b/website/content/en/docs/crds-scope.md new file mode 100644 index 00000000000..08cb3499619 --- /dev/null +++ b/website/content/en/docs/crds-scope.md @@ -0,0 +1,92 @@ +--- +title: CRD scope with Operator SDK +linkTitle: CRD Scope +weight: 60 +--- + +## Overview + +The CustomResourceDefinition (CRD) scope can also be changed for cluster-scoped operators so that there is only a single +instance (for a given name) of the CRD to manage across the cluster. + +**NOTE**: Cluster-scoped CRDs are **NOT** supported with the Helm operator. While Helm releases can create +cluster-scoped resources, Helm's design requires the release itself to be created in a specific namespace. Since the +Helm operator uses a 1-to-1 mapping between a CR and a Helm release, Helm's namespace-scoped release requirement +extends to Helm operator's namespace-scoped CR requirement. + +For each CRD that needs to be cluster-scoped, update its manifest to be cluster-scoped. 
+
+* `deploy/crds/__crd.yaml`
+  * Set `spec.scope: Cluster`
+
+To ensure that the CRD is always generated with `scope: Cluster`, add the marker
+`// +kubebuilder:resource:path=,scope=Cluster`, or if already present replace `scope={Namespaced -> Cluster}`,
+above the CRD's Go type definition in `pkg/apis///_types.go`. Note that the ``
+element must be the same lower-case plural value of the CRD's Kind, `spec.names.plural`.
+
+## CRD cluster-scoped usage
+
+A cluster scope is ideal for operators that manage custom resources (CRs) that can be created in more than one namespace in a cluster.
+
+**NOTE**: When a `Manager` instance is created in the `main.go` file, it receives the namespace(s) as Options.
+These namespace(s) should be watched and cached for the Client which is provided by the Controllers. Only clients
+provided by cluster-scoped projects where the `Namespace` attribute is `""` will be able to manage cluster-scoped CRDs.
+For more information see the [Manager][manager_user_guide] topic in the user guide and the
+[Manager Options][manager_options].
+
+## Example for changing the CRD scope from Namespaced to Cluster
+
+The following example is for Go-based operators. For Helm- and Ansible-based operators, `scope: Cluster` must be set manually.
+ +- Check the `spec.names.plural` in the CRD's Kind YAML file + +* `deploy/crds/cache_v1alpha1_memcached_crd.yaml` + ```YAML + apiVersion: apiextensions.k8s.io/v1beta1 + kind: CustomResourceDefinition + metadata: + name: memcacheds.cache.example.com + spec: + group: cache.example.com + names: + kind: Memcached + listKind: MemcachedList + plural: memcacheds + singular: memcached + scope: Namespaced + ``` + +- Update the `pkg/apis///_types.go` by adding the +marker `// +kubebuilder:resource:path=,scope=Cluster` + +* `pkg/apis/cache/v1alpha1/memcached_types.go` + ```Go + // +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object + + // Memcached is the Schema for the memcacheds API + // +kubebuilder:resource:path=memcacheds,scope=Cluster + type Memcached struct { + metav1.TypeMeta `json:",inline"` + metav1.ObjectMeta `json:"metadata,omitempty"` + + Spec MemcachedSpec `json:"spec,omitempty"` + Status MemcachedStatus `json:"status,omitempty"` + } + ``` +- Execute the command `operator-sdk generate crds`, then you should be able to check that the CRD was updated with the cluster scope as in the following example: + +* `deploy/crds/cache.example.com_memcacheds_crd.yaml` + ```YAML + apiVersion: apiextensions.k8s.io/v1beta1 + kind: CustomResourceDefinition + metadata: + name: memcacheds.cache.example.com + spec: + group: cache.example.com + ... 
+ scope: Cluster + ``` + +[RBAC]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/ +[manager_user_guide]: /docs/golang/quickstart/#manager +[manager_options]: https://godoc.org/github.com/kubernetes-sigs/controller-runtime/pkg/manager#Options diff --git a/website/content/en/docs/faq.md b/website/content/en/docs/faq.md index 63fce574c83..b034888f668 100644 --- a/website/content/en/docs/faq.md +++ b/website/content/en/docs/faq.md @@ -1,7 +1,7 @@ --- title: Operator SDK FAQ linkTitle: FAQ -weight: 60 +weight: 80 --- ## Controller Runtime FAQ diff --git a/website/content/en/docs/kubebuilder/crds-scope.md b/website/content/en/docs/kubebuilder/crds-scope.md new file mode 100644 index 00000000000..f8da5a365de --- /dev/null +++ b/website/content/en/docs/kubebuilder/crds-scope.md @@ -0,0 +1,95 @@ +## Overview + +The CustomResourceDefinition (CRD) scope can also be changed for cluster-scoped operators so that there is only a single +instance (for a given name) of the CRD to manage across the cluster. + +The CRD manifests are generated in `config/crd/bases`. For each CRD that needs to be cluster-scoped, its manifest +should specify `spec.scope: Cluster`. + +To ensure that the CRD is always generated with `scope: Cluster`, add the marker +`// +kubebuilder:resource:path=,scope=Cluster`, or if already present replace `scope={Namespaced -> Cluster}`, +above the CRD's Go type definition in `api//_types.go` or `apis///_types.go` +if you are using the `multigroup` layout. Note that the `` +element must be the same lower-case plural value of the CRD's Kind, `spec.names.plural`. + +## CRD cluster-scoped usage + +A cluster scope is ideal for operators that manage custom resources (CR's) that can be created in more than +one namespace in a cluster. + +**NOTE**: When a `Manager` instance is created in the `main.go` file, it receives the namespace(s) as Options. +These namespace(s) should be watched and cached for the Client which is provided by the Controllers. 
Only clients +provided by cluster-scoped projects where the `Namespace` attribute is `""` will be able to manage cluster-scoped CRD's. +For more information see the [Manager][manager_user_guide] topic in the user guide and the +[Manager Options][manager_options]. + +## Example for changing the CRD scope from Namespaced to Cluster + +- Check the `spec.names.plural` in the CRD's Kind YAML file + +* `/config/crd/bases/cache.example.com_memcacheds.yaml` +```YAML +apiVersion: apiextensions.k8s.io/v1beta1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.2.5 + creationTimestamp: null + name: memcacheds.cache.example.com +spec: + group: cache.example.com + names: + kind: Memcached + listKind: MemcachedList + plural: memcacheds + singular: memcached + scope: Namespaced + subresources: + status: {} +... +``` + +- Update the `apis//_types.go` by adding the +marker `// +kubebuilder:resource:path=,scope=Cluster` + +* `api/v1alpha1/memcached_types.go` + +```Go +// Memcached is the Schema for the memcacheds API +// +kubebuilder:resource:path=memcacheds,scope=Cluster +type Memcached struct { + metav1.TypeMeta `json:",inline"` + metav1.ObjectMeta `json:"metadata,omitempty"` + + Spec MemcachedSpec `json:"spec,omitempty"` + Status MemcachedStatus `json:"status,omitempty"` +} +``` +- Run `make manifests`, to update the CRD manifest with the cluster scope setting, as in the following example: + +* `/config/crd/bases/cache.example.com_memcacheds.yaml` + +```YAML +apiVersion: apiextensions.k8s.io/v1beta1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.2.5 + creationTimestamp: null + name: memcacheds.cache.example.com +spec: + group: cache.example.com + names: + kind: Memcached + listKind: MemcachedList + plural: memcacheds + singular: memcached + scope: Cluster + subresources: + status: {} +... 
+```
+
+[RBAC]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
+[manager_user_guide]: /docs/golang/quickstart/#manager
+[manager_options]: https://godoc.org/github.com/kubernetes-sigs/controller-runtime/pkg/manager#Options
diff --git a/website/content/en/docs/kubebuilder/operator-scope.md b/website/content/en/docs/kubebuilder/operator-scope.md
new file mode 100644
index 00000000000..62b8f06aad9
--- /dev/null
+++ b/website/content/en/docs/kubebuilder/operator-scope.md
@@ -0,0 +1,299 @@
+# Operator Scope
+
+## Overview
+
+A namespace-scoped operator watches and manages resources in a single Namespace, whereas a cluster-scoped operator
+watches and manages resources cluster-wide.
+
+An operator should be cluster-scoped if it watches resources that can be created in any Namespace. An operator should
+be namespace-scoped if it is intended to be flexibly deployed. This scope permits
+decoupled upgrades, namespace isolation for failures and monitoring, and differing API definitions.
+
+By default, `operator-sdk init` scaffolds a cluster-scoped operator. This document details the conversion of a default
+operator project to a namespace-scoped operator. Before proceeding, be aware that your operator may be better suited
+as cluster-scoped. For example, the [cert-manager][cert-manager] operator is often deployed with cluster-scoped
+permissions and watches so that it can manage and issue certificates for an entire cluster.
+
+**IMPORTANT**: When a [Manager][ctrl-manager] instance is created in the `main.go` file, the
+Namespaces are set via [Manager Options][ctrl-options] as described below. These Namespaces should be watched and
+cached for the Client which is provided by the Manager. Only clients provided by cluster-scoped Managers are able
+to manage cluster-scoped CRD's. For further information see the [CRD scope doc][crd-scope-doc].
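The next sections configure the Manager's cache in three ways, all of which reduce to how the Namespace value is chosen. As a minimal, self-contained sketch of that decision rule (the `cacheScope` helper and the comma-separated convention are illustrative assumptions mirroring the `WATCH_NAMESPACE` handling shown later in this document; they are not SDK or controller-runtime API):

```go
package main

import (
	"fmt"
	"strings"
)

// cacheScope is a hypothetical helper, not part of the Operator SDK.
// It mirrors the rule used throughout this document: "" means cluster
// scope (watch all Namespaces), a single value means one Namespace, and
// a comma-separated list means a set of Namespaces.
func cacheScope(watchNamespace string) string {
	switch {
	case watchNamespace == "":
		return "cluster"
	case strings.Contains(watchNamespace, ","):
		return fmt.Sprintf("multi-namespace: %v", strings.Split(watchNamespace, ","))
	default:
		return "single-namespace: " + watchNamespace
	}
}

func main() {
	fmt.Println(cacheScope(""))        // cluster
	fmt.Println(cacheScope("ns1"))     // single-namespace: ns1
	fmt.Println(cacheScope("ns1,ns2")) // multi-namespace: [ns1 ns2]
}
```

The same three branches map onto the default cluster-scoped Manager, the `Namespace` field of `Options`, and `MultiNamespacedCacheBuilder`, respectively.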
+
+## Watching resources in all Namespaces (default)
+
+A [Manager][ctrl-manager] initialized with no Namespace option specified, or with `Namespace: ""`, will
+watch all Namespaces:
+
+```go
+...
+mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
+    Scheme:             scheme,
+    MetricsBindAddress: metricsAddr,
+    Port:               9443,
+    LeaderElection:     enableLeaderElection,
+    LeaderElectionID:   "f1c5ece8.example.com",
+})
+...
+```
+
+## Watching resources in a single Namespace
+
+To restrict the scope of the [Manager's][ctrl-manager] cache to a specific Namespace, set the `Namespace` field
+in [Options][ctrl-options]:
+
+```go
+...
+mgr, err = ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
+    Scheme:             scheme,
+    Namespace:          "operator-namespace",
+    MetricsBindAddress: metricsAddr,
+})
+...
+```
+
+## Watching resources in a set of Namespaces
+
+It is possible to use [`MultiNamespacedCacheBuilder`][multi-namespaced-cache-builder] via the `NewCache` field of
+[Options][ctrl-options] to watch and manage resources in a set of Namespaces:
+
+```go
+...
+namespaces := []string{"foo", "bar"} // List of Namespaces
+
+mgr, err = ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
+    Scheme:             scheme,
+    NewCache:           cache.MultiNamespacedCacheBuilder(namespaces),
+    MetricsBindAddress: fmt.Sprintf("%s:%d", metricsHost, metricsPort),
+})
+...
+```
+In the above example, a CR created in a Namespace not in the set passed to `Options` will not be reconciled by
+its controller because the [Manager][ctrl-manager] does not manage that Namespace.
+
+**IMPORTANT:** Note that this is not intended to be used for excluding Namespaces, this is better done via a Predicate.
+Also note that you may face performance issues when managing a high number of Namespaces.
+
+## Restricting Roles and permissions
+
+An operator's scope defines its [Manager's][ctrl-manager] cache's scope but not the permissions to access the resources.
+After updating the Manager's scope to be Namespaced, the cluster's [Role-Based Access Control (RBAC)][k8s-rbac] +permissions should be restricted accordingly. + +These permissions are found in the directory `config/rbac/`. The `ClusterRole` and `ClusterRoleBinding` manifests are +responsible for creating the permissions which allow the operator to have access to the resources. + +**NOTE** Only `ClusterRole` and `ClusterRoleBinding` manifests require changes to achieve this goal. +Ignore `_editor_role.yaml`, `_viewer_role.yaml`, and files a name pattern of `auth_proxy_*.yaml`. + +### Changing the permissions + +To change the scope of the RBAC permissions from cluster-wide to a specific namespace, you will need to use `Role`s +and `RoleBinding`s instead of `ClusterRole`s and `ClusterRoleBinding`s, respectively. + +[`RBAC markers`][rbac-markers] defined in the controller (e.g `controllers/memcached_controller.go`) +are used to generate the operator's [RBAC ClusterRole][rbac-clusterrole] (e.g `config/rbac/role.yaml`). The default + markers don't specify a `namespace` property and will result in a ClusterRole. + +Update the RBAC markers to specify a `namespace` property so that `config/rbac/role.yaml` is generated as a `Role` + instead of a `ClusterRole`. + +Replace: + +```go +// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete +// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch +``` + +With namespaced markers: + +```go +// +kubebuilder:rbac:groups=cache.example.com,namespace="my-namespace",resources=memcacheds,verbs=get;list;watch;create;update;patch;delete +// +kubebuilder:rbac:groups=cache.example.com,namespace="my-namespace",resources=memcacheds/status,verbs=get;update;patch +``` + +And then, run `make manifests` to update `config/rbac/role.yaml`: + +```yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: +... 
+```
+
+We also need to update our `ClusterRoleBindings` to `RoleBindings`, since they are not regenerated:
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: manager-rolebinding
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: Role
+  name: manager-role
+subjects:
+- kind: ServiceAccount
+  name: default
+  namespace: system
+```
+
+## Using environment variables for Namespace
+
+Instead of having any Namespaces hard-coded in the `main.go` file, a good practice is to use an environment
+variable to allow restrictive configurations at deploy time.
+
+### Configuring Namespace-scoped operators
+
+- Add a helper function in the `main.go` file:
+
+```go
+// getWatchNamespace returns the Namespace the operator should be watching for changes
+func getWatchNamespace() (string, error) {
+	// WatchNamespaceEnvVar is the constant for env variable WATCH_NAMESPACE
+	// which is the Namespace where the watch activity happens.
+	// this value is empty if the operator is running with clusterScope.
+	var watchNamespaceEnvVar = "WATCH_NAMESPACE"
+
+	ns, found := os.LookupEnv(watchNamespaceEnvVar)
+	if !found {
+		return "", fmt.Errorf("%s must be set", watchNamespaceEnvVar)
+	}
+	return ns, nil
+}
+```
+
+- Use the environment variable value:
+
+```go
+...
+watchNamespace, err := getWatchNamespace()
+if err != nil {
+	setupLog.Error(err, "unable to get WatchNamespace, " +
+		"the manager will watch and manage resources in all cluster")
+}
+
+mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
+	Scheme:             scheme,
+	MetricsBindAddress: metricsAddr,
+	Port:               9443,
+	LeaderElection:     enableLeaderElection,
+	LeaderElectionID:   "f1c5ece8.example.com",
+	Namespace:          watchNamespace, // namespaced-scope when the value is not an empty string
+})
+...
+``` + +- Define the environment variable in the `config/manager/manager.yaml`: + +```yaml +spec: + containers: + - command: + - /manager + args: + - --enable-leader-election + image: controller:latest + name: manager + resources: + limits: + cpu: 100m + memory: 30Mi + requests: + cpu: 100m + memory: 20Mi + env: + - name: WATCH_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + terminationGracePeriodSeconds: 10 +``` + +**NOTE** The above will set as value of `WATCH_NAMESPACE` the namespace where the operator will be deployed. + +### Configuring cluster-scoped operators with MultiNamespacedCacheBuilder + +- Add a helper function in the `main.go` file : + +```go +// getWatchNamespace returns the namespace the operator should be watching for changes +func getWatchNamespace() (string, error) { + // WatchNamespaceEnvVar is the constant for env variable WATCH_NAMESPACE + // which is the namespace where the watch activity happens. + // this value is empty if the operator is running with clusterScope. + var watchNamespaceEnvVar = "WATCH_NAMESPACE" + + ns, found := os.LookupEnv(watchNamespaceEnvVar) + if !found { + return "", fmt.Errorf("%s must be set", watchNamespaceEnvVar) + } + return ns, nil +} +``` + +- Use the environment variable value and check if it is an multi-namespace scenario: + +```go + ... 
+watchNamespace, err := getWatchNamespace()
+if err != nil {
+	setupLog.Error(err, "unable to get WatchNamespace, " +
+		"the manager will watch and manage resources in all cluster")
+}
+
+options := ctrl.Options{
+	Scheme:             scheme,
+	MetricsBindAddress: metricsAddr,
+	Port:               9443,
+	LeaderElection:     enableLeaderElection,
+	LeaderElectionID:   "f1c5ece8.example.com",
+	Namespace:          watchNamespace, // namespaced-scope when the value is not an empty string
+}
+
+// Add support for MultiNamespace set in WATCH_NAMESPACE (e.g. ns1,ns2)
+if strings.Contains(watchNamespace, ",") {
+	setupLog.Info("manager will be watching multiple namespaces", "namespaces", watchNamespace)
+	// configure cluster-scoped with MultiNamespacedCacheBuilder
+	options.Namespace = ""
+	options.NewCache = cache.MultiNamespacedCacheBuilder(strings.Split(watchNamespace, ","))
+}
+...
+```
+
+- Define the environment variable in the `config/manager/manager.yaml`:
+
+```yaml
+spec:
+  containers:
+  - command:
+    - /manager
+    args:
+    - --enable-leader-election
+    image: controller:latest
+    name: manager
+    resources:
+      limits:
+        cpu: 100m
+        memory: 30Mi
+      requests:
+        cpu: 100m
+        memory: 20Mi
+    env:
+      - name: WATCH_NAMESPACE
+        value: "ns1,ns2"
+  terminationGracePeriodSeconds: 10
+```
+
+[cert-manager]: https://github.com/jetstack/cert-manager
+[ctrl-manager]: https://godoc.org/sigs.k8s.io/controller-runtime/pkg/manager#Manager
+[ctrl-options]: https://godoc.org/sigs.k8s.io/controller-runtime/pkg/manager#Options
+[multi-namespaced-cache-builder]: https://godoc.org/github.com/kubernetes-sigs/controller-runtime/pkg/cache#MultiNamespacedCacheBuilder
+[k8s-rbac]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
+[kube-rbac-proxy]: https://github.com/brancz/kube-rbac-proxy
+[rbac-clusterrole]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole
+[crd-scope-doc]: crds-scope
+[rbac-markers]: https://book.kubebuilder.io/reference/markers/rbac.html
\ No newline at end of file
diff --git
a/website/content/en/docs/operator-scope.md b/website/content/en/docs/operator-scope.md index ffdb1267e41..af5caac64fc 100644 --- a/website/content/en/docs/operator-scope.md +++ b/website/content/en/docs/operator-scope.md @@ -4,25 +4,19 @@ linkTitle: Operator Scope weight: 50 --- -- [Namespace-scoped operator usage](#namespace-scoped-operator-usage) -- [Cluster-scoped operator usage](#cluster-scoped-operator-usage) - - [Changes required for a cluster-scoped operator](#changes-required-for-a-cluster-scoped-operator) - - [Example for cluster-scoped operator](#example-for-cluster-scoped-operator) -- [CRD scope](#crd-scope) - - [CRD cluster-scoped usage](#crd-cluster-scoped-usage) - - [Example for changing the CRD scope from namespace to cluster](#example-for-changing-the-crd-scope-from-namespace-to-cluster) - ## Overview A namespace-scoped operator watches and manages resources in a single namespace, whereas a cluster-scoped operator watches and manages resources cluster-wide. Namespace-scoped operators are preferred because of their flexibility. They enable decoupled upgrades, namespace isolation for failures and monitoring, and differing API definitions. However, there are use cases where a cluster-scoped operator may make sense. For example, the [cert-manager](https://github.com/jetstack/cert-manager) operator is often deployed with cluster-scoped permissions and watches so that it can manage issuing certificates for an entire cluster. +**NOTE**: CustomResourceDefinition (CRD) scope can also be changed to cluster-scoped. See the [CRD scope][crd-scope-doc] document for more details. + ## Namespace-scoped operator usage This scope is ideal for operator projects which will control resources just in one namespace, which is where the operator is deployed. -> **NOTE:** Initial projects created by `operator-sdk` are namespace-scoped by default which means that it will NOT have a `ClusterRole` defined in the `deploy/role_binding.yaml`. 
+**NOTE:** Projects created by `operator-sdk` are namespace-scoped by default which means that they will NOT have a `ClusterRole` defined in `deploy/`. ## Cluster-scoped operator usage @@ -43,6 +37,7 @@ The SDK scaffolds operators to be namespaced by default but with a few modificat * Set the subject namespace to the namespace in which the operator is deployed. * `deploy/service_account.yaml`: * Set `metadata.namespace` to the namespace where the operator is deployed. + ### Example for cluster-scoped operator @@ -100,78 +95,8 @@ With the above changes the specified manifests should look as follows: name: memcached-operator namespace: ``` - -## CRD scope - -Additionally the CustomResourceDefinition (CRD) scope can also be changed for cluster-scoped operators so that there is only a single instance (for a given name) of the CRD to manage across the cluster. - -> **NOTE**: Cluster-scoped CRDs are **NOT** supported with the Helm operator. While Helm releases can create cluster-scoped resources, Helm's design requires the release itself to be created in a specific namespace. Since the Helm operator uses a 1-to-1 mapping between a CR and a Helm release, Helm's namespace-scoped release requirement extends to Helm operator's namespace-scoped CR requirement. - -For each CRD that needs to be cluster-scoped, update its manifest to be cluster-scoped. - -* `deploy/crds/__crd.yaml` - * Set `spec.scope: Cluster` - -To ensure that the CRD is always generated with `scope: Cluster`, add the tag `// +kubebuilder:resource:path=,scope=Cluster`, or if already present replace `scope={Namespaced -> Cluster}`, above the CRD's Go type definition in `pkg/apis///_types.go`. Note that the `` element must be the same lower-case plural value of the CRD's Kind, `spec.names.plural`. - -### CRD cluster-scoped usage - -This scope is ideal for the cases where an instance(CR) of some Kind(CRD) will be used in more than one namespace instead of a specific one. 
- -> **NOTE**: When a `Manager` instance is created in the `main.go` file, it receives the namespace(s) as Options. These namespace(s) should be watched and cached for the Client which is provided by the Controllers. Only clients provided by cluster-scoped projects where the `Namespace` attribute is `""` will be able to manage cluster-scoped CRD's. For more information see the [Manager][manager_user_guide] topic in the user guide and the [Manager Options][manager_options]. - -### Example for changing the CRD scope from namespace to cluster - -The following example is for Go based-operators. Note that for Helm and Ansible based-operators the changes in the CRD required to be done manually. - -- Check the `spec.names.plural` in the CRD's Kind YAML file - -* `deploy/crds/cache_v1alpha1_memcached_crd.yaml` - ```YAML - apiVersion: apiextensions.k8s.io/v1beta1 - kind: CustomResourceDefinition - metadata: - name: memcacheds.cache.example.com - spec: - group: cache.example.com - names: - kind: Memcached - listKind: MemcachedList - plural: memcacheds - singular: memcached - scope: Namespaced - ``` - -- Update the `pkg/apis///_types.go` by adding the tag `// +kubebuilder:resource:path=,scope=Cluster` - -* `pkg/apis/cache/v1alpha1/memcached_types.go` - ```Go - // +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object - - // Memcached is the Schema for the memcacheds API - // +kubebuilder:resource:path=memcacheds,scope=Cluster - type Memcached struct { - metav1.TypeMeta `json:",inline"` - metav1.ObjectMeta `json:"metadata,omitempty"` - - Spec MemcachedSpec `json:"spec,omitempty"` - Status MemcachedStatus `json:"status,omitempty"` - } - ``` -- Execute the command `operator-sdk generate crds`, then you should be able to check that the CRD was updated with the cluster scope as in the following example: - -* `deploy/crds/cache.example.com_memcacheds_crd.yaml` - ```YAML - apiVersion: apiextensions.k8s.io/v1beta1 - kind: CustomResourceDefinition - metadata: - name: 
memcacheds.cache.example.com - spec: - group: cache.example.com - ... - scope: Cluster - ``` [RBAC]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/ [manager_user_guide]: /docs/golang/quickstart/#manager [manager_options]: https://godoc.org/github.com/kubernetes-sigs/controller-runtime/pkg/manager#Options +[crd-scope-doc]: /docs/crds-scope \ No newline at end of file diff --git a/website/content/en/docs/versioning.md b/website/content/en/docs/versioning.md index dac53dd51c2..e6bd1fdff11 100644 --- a/website/content/en/docs/versioning.md +++ b/website/content/en/docs/versioning.md @@ -1,7 +1,7 @@ --- title: Versioning for Operator SDK linkTitle: Versioning -weight: 50 +weight: 70 --- The following is a concise explanation of how Operator SDK versions are determined. The Operator SDK versioning follows [semantic versioning][link-semver] standards. From 9942d8e25838cd3387f12379dd25a0f55af7b05a Mon Sep 17 00:00:00 2001 From: Camila Macedo Date: Fri, 22 May 2020 11:47:20 +0100 Subject: [PATCH 2/3] latest changes requested by hassab --- .../en/docs/kubebuilder/operator-scope.md | 58 +++++-------------- 1 file changed, 14 insertions(+), 44 deletions(-) diff --git a/website/content/en/docs/kubebuilder/operator-scope.md b/website/content/en/docs/kubebuilder/operator-scope.md index 62b8f06aad9..e8d464a2eb4 100644 --- a/website/content/en/docs/kubebuilder/operator-scope.md +++ b/website/content/en/docs/kubebuilder/operator-scope.md @@ -71,7 +71,6 @@ In the above example, a CR created in a Namespace not in the set passed to `Opti its controller because the [Manager][ctrl-manager] does not manage that Namespace. **IMPORTANT:** Note that this is not intended to be used for excluding Namespaces, this is better done via a Predicate. -Also note that you may face performance issues when managing a high number of Namespaces. 
## Restricting Roles and permissions @@ -79,11 +78,12 @@ An operator's scope defines its [Manager's][ctrl-manager] cache's scope but not After updating the Manager's scope to be Namespaced, the cluster's [Role-Based Access Control (RBAC)][k8s-rbac] permissions should be restricted accordingly. -These permissions are found in the directory `config/rbac/`. The `ClusterRole` and `ClusterRoleBinding` manifests are -responsible for creating the permissions which allow the operator to have access to the resources. +These permissions are found in the directory `config/rbac/`. The `ClusterRole` in `role.yaml` and `ClusterRoleBinding` +in `role_binding.yaml` are used to grant the operator permissions to access and manage its resources. -**NOTE** Only `ClusterRole` and `ClusterRoleBinding` manifests require changes to achieve this goal. -Ignore `_editor_role.yaml`, `_viewer_role.yaml`, and files a name pattern of `auth_proxy_*.yaml`. +**NOTE** For changing the operator's scope only the `role.yaml` and `role_binding.yaml` manifests need to be updated. +For the purposes of this doc, the other RBAC manifests `_editor_role.yaml`, `_viewer_role.yaml`, +and `auth_proxy_*.yaml` are not relevant to changing the operator's resource permissions. ### Changing the permissions @@ -92,7 +92,7 @@ and `RoleBinding`s instead of `ClusterRole`s and `ClusterRoleBinding`s, respecti [`RBAC markers`][rbac-markers] defined in the controller (e.g `controllers/memcached_controller.go`) are used to generate the operator's [RBAC ClusterRole][rbac-clusterrole] (e.g `config/rbac/role.yaml`). The default - markers don't specify a `namespace` property and will result in a ClusterRole. + markers don't specify a `namespace` property and will result in a `ClusterRole`. Update the RBAC markers to specify a `namespace` property so that `config/rbac/role.yaml` is generated as a `Role` instead of a `ClusterRole`. 
@@ -154,8 +154,8 @@ variables to allow the restrictive configurations // getWatchNamespace returns the Namespace the operator should be watching for changes func getWatchNamespace() (string, error) { // WatchNamespaceEnvVar is the constant for env variable WATCH_NAMESPACE - // which is the Namespace where the watch activity happens. - // this value is empty if the operator is running with clusterScope. + // which specifies the Namespace to watch. + // An empty value means the operator is running with cluster scope. var watchNamespaceEnvVar = "WATCH_NAMESPACE" ns, found := os.LookupEnv(watchNamespaceEnvVar) @@ -173,7 +173,7 @@ func getWatchNamespace() (string, error) { watchNamespace, err := getWatchNamespace() if err != nil { setupLog.Error(err, "unable to get WatchNamespace, " + - "the manager will watch and manage resources in all cluster") + "the manager will watch and manage resources in all namespaces") } mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{ @@ -213,29 +213,12 @@ spec: terminationGracePeriodSeconds: 10 ``` -**NOTE** The above will set as value of `WATCH_NAMESPACE` the namespace where the operator will be deployed. +**NOTE** `WATCH_NAMESPACE` here will always be set as the namespace where the operator is deployed. ### Configuring cluster-scoped operators with MultiNamespacedCacheBuilder -- Add a helper function in the `main.go` file : - -```go -// getWatchNamespace returns the namespace the operator should be watching for changes -func getWatchNamespace() (string, error) { - // WatchNamespaceEnvVar is the constant for env variable WATCH_NAMESPACE - // which is the namespace where the watch activity happens. - // this value is empty if the operator is running with clusterScope. 
- var watchNamespaceEnvVar = "WATCH_NAMESPACE" - - ns, found := os.LookupEnv(watchNamespaceEnvVar) - if !found { - return "", fmt.Errorf("%s must be set", watchNamespaceEnvVar) - } - return ns, nil -} -``` - -- Use the environment variable value and check if it is an multi-namespace scenario: +- Add a helper function to get the environment variable value in the `main.go` file as done in the in the previous example (e.g `getWatchNamespace()`) +- Use the environment variable value and check if it is a multi-namespace scenario: ```go ... @@ -267,25 +250,12 @@ if strings.Contains(namespace, ",") { - Define the environment variable in the `config/manager/manager.yaml`: ```yaml -spec: - containers: - - command: - - /manager - args: - - --enable-leader-election - image: controller:latest - name: manager - resources: - limits: - cpu: 100m - memory: 30Mi - requests: - cpu: 100m - memory: 20Mi +... env: - name: WATCH_NAMESPACE value: "ns1,ns2" terminationGracePeriodSeconds: 10 +... ``` [cert-manager]: https://github.com/jetstack/cert-manager From f0bf60f48f406843de6d317c30453a72ff9739ee Mon Sep 17 00:00:00 2001 From: Camila Macedo Date: Mon, 25 May 2020 09:33:05 +0100 Subject: [PATCH 3/3] latest nits --- website/content/en/docs/kubebuilder/operator-scope.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/website/content/en/docs/kubebuilder/operator-scope.md b/website/content/en/docs/kubebuilder/operator-scope.md index e8d464a2eb4..c1bcacd06c9 100644 --- a/website/content/en/docs/kubebuilder/operator-scope.md +++ b/website/content/en/docs/kubebuilder/operator-scope.md @@ -217,7 +217,7 @@ spec: ### Configuring cluster-scoped operators with MultiNamespacedCacheBuilder -- Add a helper function to get the environment variable value in the `main.go` file as done in the in the previous example (e.g `getWatchNamespace()`) +- Add a helper function to get the environment variable value in the `main.go` file as done in the previous example (e.g 
`getWatchNamespace()`) - Use the environment variable value and check if it is a multi-namespace scenario: ```go @@ -225,7 +225,7 @@ spec: watchNamespace, err := getWatchNamespace() if err != nil { setupLog.Error(err, "unable to get WatchNamespace, " + - "the manager will watch and manage resources in all cluster") + "the manager will watch and manage resources in all Namespaces") } options := ctrl.Options{