- Install the Operator manually
- Install the Bookkeeper cluster manually
- Uninstall the Bookkeeper Cluster manually
- Uninstall the Operator manually
Note: If you are running on Google Kubernetes Engine (GKE), please check this first.
Register the Bookkeeper cluster custom resource definition (CRD).
$ kubectl create -f deploy/crds/crd.yaml
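You can confirm that the CRD was registered before moving on. The CRD name shown below is an assumption; check deploy/crds/crd.yaml for the exact name.
# list the registered CRD (name is an assumption; verify against deploy/crds/crd.yaml)
$ kubectl get crd bookkeeperclusters.bookkeeper.pravega.io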
Create the operator role, role binding and service account.
$ kubectl create -f deploy/role.yaml
$ kubectl create -f deploy/role_binding.yaml
$ kubectl create -f deploy/service_account.yaml
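To verify that these resources were created, you can list them in the current namespace:
# list roles, role bindings, and service accounts created above
$ kubectl get role,rolebinding,serviceaccount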
Install the operator.
$ kubectl create -f deploy/operator.yaml
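To confirm the operator is running, check its deployment and pod. The deployment name bookkeeper-operator is an assumption (it matches the pod-name prefix shown in the event output later in this document); verify it against deploy/operator.yaml.
# check the operator deployment and its pod (deployment name assumed from deploy/operator.yaml)
$ kubectl get deploy bookkeeper-operator
$ kubectl get pods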
Finally, create a ConfigMap that contains the list of supported upgrade paths for the Bookkeeper cluster.

$ kubectl create -f deploy/version_map.yaml
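You can list the ConfigMaps in the namespace to confirm the version map was created (its exact name is defined in deploy/version_map.yaml):
# look for the version-map ConfigMap created from deploy/version_map.yaml
$ kubectl get configmap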
The operator can be deployed in test mode by providing the argument -test inside the operator.yaml file in the following way.
containers:
  - name: bookkeeper-operator
    image: pravega/bookkeeper-operator:latest
    ports:
      - containerPort: 60000
        name: metrics
    command:
      - bookkeeper-operator
    imagePullPolicy: Always
    args: [-test]
For more details, check this.
Note that the Bookkeeper cluster must be installed in the same namespace as the Zookeeper cluster.
If the BookKeeper cluster is expected to work with Pravega, we need to create a ConfigMap containing the following values:
| KEY | VALUE |
|---|---|
| PRAVEGA_CLUSTER_NAME | Name of the Pravega cluster using this BookKeeper cluster |
| WAIT_FOR | Zookeeper URL |
To create this ConfigMap:
$ kubectl create -f deploy/config_map.yaml
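A minimal sketch of what such a ConfigMap might look like; the values here are assumptions for illustration, and the name bk-config-map matches the envVars value used in the example spec below.
apiVersion: v1
kind: ConfigMap
metadata:
  name: bk-config-map
data:
  # name of the Pravega cluster that will use this BookKeeper cluster (assumed value)
  PRAVEGA_CLUSTER_NAME: pravega
  # Zookeeper URL the bookies wait for before starting (assumed value)
  WAIT_FOR: zookeeper-client:2181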
The name of this ConfigMap needs to be specified in the envVars field of the BookkeeperCluster spec. For more details about this ConfigMap, refer to this.
Once all these have been installed, you can use the following YAML template to install a small development Bookkeeper Cluster. Create a bookkeeper.yaml file with the following content.
apiVersion: "bookkeeper.pravega.io/v1alpha1"
kind: "BookkeeperCluster"
metadata:
  name: "pravega-bk"
spec:
  version: 0.7.0
  zookeeperUri: [ZOOKEEPER_HOST]:2181
  envVars: bk-config-map
  replicas: 3
  image:
    repository: pravega/bookkeeper
    pullPolicy: IfNotPresent
where:
[ZOOKEEPER_HOST] is the Zookeeper service endpoint of your Zookeeper deployment (e.g. zookeeper-client:2181). The Zookeeper service URL is expected in the format <service-name>:<port-number>.
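If you are unsure of the service name, you can list the services in the namespace and look for the Zookeeper client service:
# find the Zookeeper client service (e.g. zookeeper-client) and use <service-name>:<port-number>
$ kubectl get svc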
Check out other sample CR files in the example directory.
Deploy the Bookkeeper cluster.
$ kubectl create -f bookkeeper.yaml
Verify that the cluster instances and its components are being created.
$ kubectl get bk
NAME         VERSION   DESIRED MEMBERS   READY MEMBERS   AGE
pravega-bk   0.7.0     3                 0               25s
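You can also watch the cluster until all members report ready, and list the individual bookie pods. The bookkeeper_cluster label below is taken from the event labels shown later in this document, so treat the pod selector as an assumption.
# watch the cluster status until READY MEMBERS reaches DESIRED MEMBERS
$ kubectl get bk pravega-bk -w
# list the bookie pods (label selector is an assumption)
$ kubectl get pods -l bookkeeper_cluster=pravega-bk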
To uninstall the Bookkeeper cluster, delete the CR.
$ kubectl delete -f bookkeeper.yaml
Once the Bookkeeper cluster has been deleted, make sure that the Zookeeper metadata has been cleaned up before proceeding with the deletion of the operator. This can be confirmed by the presence of the following log message in the operator logs.
zookeeper metadata deleted
However, if the operator fails to delete this metadata from zookeeper, you will instead find the following log message in the operator logs.
failed to cleanup <bookkeeper-cluster-name> metadata from zookeeper (znode path: /pravega/<pravega-cluster-name>): <error-msg>
The operator additionally sends out a ZKMETA_CLEANUP_ERROR event to notify the user about this failure. The user can check this event with kubectl get events. The following is a sample describe output of the event generated by the operator in such a case.
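The output below can be reproduced with kubectl describe, using the event name reported by kubectl get events:
$ kubectl describe event ZKMETA_CLEANUP_ERROR-nn6sd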
Name:             ZKMETA_CLEANUP_ERROR-nn6sd
Namespace:        default
Labels:           app=bookkeeper-cluster
                  bookkeeper_cluster=pravega-bk
Annotations:      <none>
API Version:      v1
Event Time:       <nil>
First Timestamp:  2020-04-27T16:53:34Z
Involved Object:
  API Version:    app.k8s.io/v1beta1
  Kind:           Application
  Name:           bookkeeper-cluster
  Namespace:      default
Kind:             Event
Last Timestamp:   2020-04-27T16:53:34Z
Message:          failed to cleanup pravega-bk metadata from zookeeper (znode path: /pravega/pravega): failed to delete zookeeper znodes for (pravega-bk): failed to connect to zookeeper: lookup zookeeper-client on 10.100.200.2:53: no such host
Metadata:
  Creation Timestamp:  2020-04-27T16:53:34Z
  Generate Name:       ZKMETA_CLEANUP_ERROR-
  Resource Version:    864342
  Self Link:           /api/v1/namespaces/default/events/ZKMETA_CLEANUP_ERROR-nn6sd
  UID:                 5b4c3f80-36b5-43e6-b417-7992bc309218
Reason:               ZK Metadata Cleanup Failed
Reporting Component:  bookkeeper-operator
Reporting Instance:   bookkeeper-operator-6769886978-xsjx6
Source:
Type:                 Error
Events:               <none>
In case the operator fails to delete the zookeeper metadata, the user is expected to manually delete the metadata from zookeeper prior to reinstall.
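A minimal sketch of such a manual cleanup, assuming a Zookeeper pod named zookeeper-0 running Zookeeper 3.5+ (which provides the deleteall command in zkCli.sh); the znode path is the one reported in the error message.
# delete the stale znodes from inside a Zookeeper pod (pod name and ZK version are assumptions)
$ kubectl exec -it zookeeper-0 -- zkCli.sh deleteall /pravega/<pravega-cluster-name>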
Note that the Bookkeeper clusters managed by the Bookkeeper operator will NOT be deleted even if the operator is uninstalled. To delete all clusters, delete all cluster CR objects before uninstalling the operator.
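Before uninstalling the operator, you can confirm that no Bookkeeper clusters remain:
# should return no resources if all cluster CRs have been deleted
$ kubectl get bk
Then uninstall the operator.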
$ kubectl delete -f deploy