From 7951991f11628d1fa449decf22706f697f9cfebd Mon Sep 17 00:00:00 2001 From: "abby.huang" <78209557+abby-cyber@users.noreply.github.com> Date: Tue, 16 Nov 2021 17:09:17 +0800 Subject: [PATCH] Operator new&updates for 260 (#921) * Update 4.connect-to-nebula-graph-service.md (#915) * Update 4.connect-to-nebula-graph-service.md * Update 4.connect-to-nebula-graph-service.md * Update 4.connect-to-nebula-graph-service.md * Update 4.connect-to-nebula-graph-service.md * Create 9.upgrade-nebula-cluster.md (#913) * Create 9.upgrade-nebula-cluster.md * Update 9.upgrade-nebula-cluster.md * Update 9.upgrade-nebula-cluster.md * Update 9.upgrade-nebula-cluster.md * Update 9.upgrade-nebula-cluster.md * custom-conf-parameters&pv-claim&balance-data (#908) * custom-conf-parameters&pv-claim&balance-data * Update 3.1create-cluster-with-kubectl.md * Update 3.2create-cluster-with-helm.md * update yaml * Update 7.operator-faq.md * update helm * Update 3.2create-cluster-with-helm.md * error fix * Update 8.3.balance-data-when-scaling-storage.md * add crd updates (#919) * Update 8.1.custom-conf-parameter.md (#920) period typo --- docs-2.0/20.appendix/6.eco-tool-version.md | 3 +- .../1.introduction-to-nebula-operator.md | 5 +- .../2.deploy-nebula-operator.md | 109 ++++++++-- .../3.1create-cluster-with-kubectl.md | 63 ++++-- .../3.2create-cluster-with-helm.md | 48 +++-- .../4.connect-to-nebula-graph-service.md | 110 +++++++++- docs-2.0/nebula-operator/7.operator-faq.md | 2 +- .../8.1.custom-conf-parameter.md | 65 ++++++ .../8.2.pv-reclaim.md | 98 +++++++++ .../8.3.balance-data-when-scaling-storage.md | 104 ++++++++++ .../9.upgrade-nebula-cluster.md | 196 ++++++++++++++++++ mkdocs.yml | 28 +-- 12 files changed, 755 insertions(+), 76 deletions(-) create mode 100644 docs-2.0/nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md create mode 100644 docs-2.0/nebula-operator/8.custom-cluster-configurations/8.2.pv-reclaim.md create mode 100644 docs-2.0/nebula-operator/8.custom-cluster-configurations/8.3.balance-data-when-scaling-storage.md create mode 100644 docs-2.0/nebula-operator/9.upgrade-nebula-cluster.md diff --git a/docs-2.0/20.appendix/6.eco-tool-version.md b/docs-2.0/20.appendix/6.eco-tool-version.md index 6d2a36018c2..5dc76e3f89d 100644 --- a/docs-2.0/20.appendix/6.eco-tool-version.md +++ b/docs-2.0/20.appendix/6.eco-tool-version.md @@ -71,8 +71,7 @@ Nebula Operator (Operator for short) is a tool to automate the deployment, opera |Nebula Graph version|Operator version(commit id)| |:---|:---| -| {{ nebula.release }} | {{operator.release}}(6d1104e) | ---> +| {{ nebula.release }} | {{operator.release}}(ba88e28) | ## Nebula Importer diff --git a/docs-2.0/nebula-operator/1.introduction-to-nebula-operator.md b/docs-2.0/nebula-operator/1.introduction-to-nebula-operator.md index 6eb862b65e8..2c2b9f141a8 100644 --- a/docs-2.0/nebula-operator/1.introduction-to-nebula-operator.md +++ b/docs-2.0/nebula-operator/1.introduction-to-nebula-operator.md @@ -18,6 +18,8 @@ The following features are already available in Nebula Operator: - **Scale clusters**: Nebula Operator calls Nebula Graph's native scaling interfaces in a control loop to implement the scaling logic. You can simply perform scaling operations with YAML configurations and ensure the stability of data. For more information, see [Scale clusters with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md#_3) or [Scale clusters with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md#_2). 
+- **Cluster Upgrade**: Nebula Operator supports cluster upgrading from version 2.5.x to version 2.6.x. + - **Self-Healing**: Nebula Operator calls interfaces provided by Nebula Graph clusters to dynamically sense cluster service status. Once an exception is detected, Nebula Operator performs fault tolerance. For more information, see [Self-Healing](5.operator-failover.md). - **Balance Scheduling**: Based on the scheduler extension interface, the scheduler provided by Nebula Operator evenly distributes Pods in a Nebula Graph cluster across all nodes. @@ -30,7 +32,8 @@ Nebula Operator does not support the v1.x version of Nebula Graph. Nebula Operat | Nebula Operator version | Nebula Graph version | | ------------------- | ---------------- | -| {{operator.release}}| {{nebula.release}} | +| {{operator.release}}| 2.5.x ~ 2.6.x | +|0.8.0|2.5.x| ### Feature limitations diff --git a/docs-2.0/nebula-operator/2.deploy-nebula-operator.md b/docs-2.0/nebula-operator/2.deploy-nebula-operator.md index 79c261083ab..446b3be595d 100644 --- a/docs-2.0/nebula-operator/2.deploy-nebula-operator.md +++ b/docs-2.0/nebula-operator/2.deploy-nebula-operator.md @@ -69,12 +69,18 @@ If using a role-based access control policy, you need to enable [RBAC](https://k 3. Install Nebula Operator. ```bash - helm install nebula-operator nebula-operator/nebula-operator --namespace= --version=${chart_version} + helm install nebula-operator nebula-operator/nebula-operator --namespace= --version=${chart_version} ``` - - `` is a user-created namespace name. If you have not created this namespace, run `kubectl create namespace nebula-operator-system` to create one. You can also use a different name. + For example, the command to install Nebula Operator of version {{operator.release}} is as follows. + + ```bash + helm install nebula-operator nebula-operator/nebula-operator --namespace=nebula-operator-system --version={{operator.release}} + ``` + + - `nebula-operator-system` is a user-created namespace name. If you have not created this namespace, run `kubectl create namespace nebula-operator-system` to create one. You can also use a different name. - - `${chart_version}` is the version of the Nebula Operator chart. It can be unspecified when there is only one chart version in the Nebula Operator chart repository. Run `helm search repo -l nebula-operator` to see chart versions. + - `{{operator.release}}` is the version of the Nebula Operator chart. It can be unspecified when there is only one chart version in the Nebula Operator chart repository. Run `helm search repo -l nebula-operator` to see chart versions. You can customize the configuration items of the Nebula Operator chart before running the installation command. For more information, see **Customize Helm charts** below. 
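Before moving on, you can check that the Operator components are running. This is a minimal verification sketch using standard kubectl only; the exact pod names depend on your release name and chart version.

```bash
# List the Operator workloads in the installation namespace.
# Pod names vary with the release name and chart version; this is illustrative.
kubectl get pods --namespace=nebula-operator-system
```

If the controller-manager and scheduler pods (configured through the `values.yaml` sections shown below) report `Running`, the installation succeeded.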
@@ -88,14 +94,14 @@ For example: [k8s@master ~]$ helm show values nebula-operator/nebula-operator image: nebulaOperator: - image: vesoft/nebula-operator:v0.8.0 - imagePullPolicy: IfNotPresent + image: vesoft/nebula-operator:{{operator.branch}} + imagePullPolicy: Always kubeRBACProxy: image: gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0 - imagePullPolicy: IfNotPresent + imagePullPolicy: Always kubeScheduler: image: k8s.gcr.io/kube-scheduler:v1.18.8 - imagePullPolicy: IfNotPresent + imagePullPolicy: Always imagePullSecrets: [] kubernetesClusterDomain: "" @@ -106,11 +112,11 @@ controllerManager: env: [] resources: limits: - cpu: 100m - memory: 30Mi + cpu: 200m + memory: 200Mi requests: cpu: 100m - memory: 20Mi + memory: 100Mi admissionWebhook: create: true @@ -122,21 +128,21 @@ scheduler: env: [] resources: limits: - cpu: 100m - memory: 30Mi + cpu: 200m + memory: 20Mi requests: cpu: 100m - memory: 20Mi + memory: 100Mi ``` -The parameters in `values.yaml` are described as follows: +Part of the above parameters are described as follows: | Parameter | Default value | Description | | :------------------------------------- | :------------------------------ | :----------------------------------------- | -| `image.nebulaOperator.image` | `vesoft/nebula-operator:v0.8.0` | The image of Nebula Operator, version of which is v0.8.0. | +| `image.nebulaOperator.image` | `vesoft/nebula-operator:{{operator.branch}}` | The image of Nebula Operator, version of which is {{operator.release}}. | | `image.nebulaOperator.imagePullPolicy` | `IfNotPresent` | The image pull policy in Kubernetes. | | `imagePullSecrets` | - | The image pull secret in Kubernetes. | -| `kubernetesClusterDomain` | `cluster.local` | The cluster domain. | +| `kubernetesClusterDomain` | `cluster.local` | The cluster domain. | | `controllerManager.create` | `true` | Whether to enable the controller-manager component. | | `controllerManager.replicas` | `2` | The numeric value of controller-manager replicas. | | `admissionWebhook.create` | `true` | Whether to enable Admission Webhook. | @@ -154,7 +160,7 @@ helm install nebula-operator nebula-operator/nebula-operator --namespace= -f ${HOME}/nebula-operator/charts/nebula-operator/values.yaml + helm upgrade nebula-operator nebula-operator/nebula-operator --namespace= -f ${HOME}/nebula-operator/charts/nebula-operator/values.yaml + ``` + + `` is a user-created namespace name. Pods related to the nebula-operator repository are in this namespace. + +### Upgrade Nebula Operator + +!!! Compatibility "Legacy version compatibility" + + Starting from Nebula Operator 0.9.0, logs and data are stored separately. Managing a Nebula Graph 2.5.x cluster with Nebula Operator 0.9.0 or later versions can cause compatibility issues. You can backup the data of the Nebula Graph 2.5.x cluster and then create a 2.6.x cluster with Operator. + +1. Update the information of available charts locally from chart repositories. + + ```bash + helm repo update + ``` + +2. Upgrade Operator. + + ```bash + helm upgrade nebula-operator nebula-operator/nebula-operator --namespace= --version={{operator.release}} + ``` + + For example: + + ```bash + helm upgrade nebula-operator nebula-operator/nebula-operator --namespace=nebula-operator-system --version={{operator.release}} + ``` + + Output: + + ```bash + Release "nebula-operator" has been upgraded. Happy Helming! 
+ NAME: nebula-operator + LAST DEPLOYED: Tue Nov 16 02:21:08 2021 + NAMESPACE: nebula-operator-system + STATUS: deployed + REVISION: 3 + TEST SUITE: None + NOTES: + Nebula Operator installed! + ``` + +3. Pull the latest CRD configuration file. + + !!! note + You need to upgrade the corresponding CRD configurations after Nebula Operator is upgraded. Otherwise, the creation of Nebula Graph clusters will fail. For information about the CRD configurations, see [apps.nebula-graph.io_nebulaclusters.yaml](https://github.com/vesoft-inc/nebula-operator/blob/{{operator.branch}}/config/crd/bases/apps.nebula-graph.io_nebulaclusters.yaml). + + ```bash + helm pull nebula-operator/nebula-operator + ``` + +4. Upgrade the CRD configuration file. + + ```bash + kubectl apply -f .yaml + ``` + + For example: + + ```bash + kubectl apply -f config/crd/bases/apps.nebula-graph.io_nebulaclusters.yaml ``` - `` is a user-created namespace name. Pods related to the nebula-operator repository are in this namespace. + Output: + + ```bash + customresourcedefinition.apiextensions.k8s.io/nebulaclusters.apps.nebula-graph.io created + ``` ### Uninstall Nebula Operator diff --git a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md index 3528b890cb1..c610a956901 100644 --- a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md +++ b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md @@ -28,11 +28,11 @@ The following example shows how to create a Nebula Graph cluster by creating a c memory: "1Gi" replicas: 1 image: vesoft/nebula-graphd - version: v2.5.1 + version: {{nebula.branch}} service: type: NodePort externalTrafficPolicy: Local - storageClaim: + logVolumeClaim: resources: requests: storage: 2Gi @@ -47,8 +47,13 @@ The following example shows how to create a Nebula Graph cluster by creating a c memory: "1Gi" replicas: 1 image: vesoft/nebula-metad - version: v2.5.1 - storageClaim: + version: {{nebula.branch}} + dataVolumeClaim: + resources: + requests: + storage: 2Gi + storageClassName: gp2 + logVolumeClaim: resources: requests: storage: 2Gi @@ -63,8 +68,13 @@ The following example shows how to create a Nebula Graph cluster by creating a c memory: "1Gi" replicas: 3 image: vesoft/nebula-storaged - version: v2.5.1 - storageClaim: + version: {{nebula.branch}} + dataVolumeClaim: + resources: + requests: + storage: 2Gi + storageClassName: gp2 + logVolumeClaim: resources: requests: storage: 2Gi @@ -73,7 +83,7 @@ The following example shows how to create a Nebula Graph cluster by creating a c name: statefulsets.apps version: v1 schedulerName: default-scheduler - imagePullPolicy: IfNotPresent + imagePullPolicy: Always ``` The parameters in the file are described as follows: @@ -83,17 +93,19 @@ The following example shows how to create a Nebula Graph cluster by creating a c | `metadata.name` | - | The name of the created Nebula Graph cluster. | | `spec.graphd.replicas` | `1` | The numeric value of replicas of the Graphd service. | | `spec.graphd.images` | `vesoft/nebula-graphd` | The container image of the Graphd service. | - | `spec.graphd.version` | `v2.5.1` | The version of the Graphd service. | + | `spec.graphd.version` | `{{nebula.branch}}` | The version of the Graphd service. | | `spec.graphd.service` | - | The Service configurations for the Graphd service. 
| - | `spec.graphd.storageClaim` | - | The storage configurations for the Graphd service. | + | `spec.graphd.logVolumeClaim.storageClassName` | - | The log disk storage configurations for the Graphd service. | | `spec.metad.replicas` | `1` | The numeric value of replicas of the Metad service. | | `spec.metad.images` | `vesoft/nebula-metad` | The container image of the Metad service. | - | `spec.metad.version` | `v2.5.1` | The version of the Metad service. | - | `spec.metad.storageClaim` | - | The storage configurations for the Metad service. | + | `spec.metad.version` | `{{nebula.branch}}` | The version of the Metad service. | + | `spec.metad.dataVolumeClaim.storageClassName` | - | The data disk storage configurations for the Metad service. | + | `spec.metad.logVolumeClaim.storageClassName`|- | The log disk storage configurations for the Metad service.| | `spec.storaged.replicas` | `3` | The numeric value of replicas of the Storaged service. | | `spec.storaged.images` | `vesoft/nebula-storaged` | The container image of the Storaged service. | - | `spec.storaged.version` | `v2.5.1` | The version of the Storaged service. | - | `spec.storaged.storageClaim` | - | The storage configurations for the Storaged service. | + | `spec.storaged.version` | `{{nebula.branch}}` | The version of the Storaged service. | + | `spec.storaged.dataVolumeClaim.storageClassName` | - | The data disk storage configurations for the Storaged service. | + | `spec.storaged.logVolumeClaim.storageClassName`|- | The log disk storage configurations for the Storaged service.| | `spec.reference.name` | - | The name of the dependent controller. | | `spec.schedulerName` | - | The scheduler name. | | `spec.imagePullPolicy` | The image policy to pull the Nebula Graph image. For details, see [Image pull policy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy). | The image pull policy in Kubernetes. | @@ -119,8 +131,8 @@ The following example shows how to create a Nebula Graph cluster by creating a c Output: ```bash - NAME GRAPHD-DESIRED GRAPHD-READY METAD-DESIRED METAD-READY STORAGED-DESIRED STORAGED-READY AGE - nebula-cluster 1 1 1 1 3 3 31h + NAME GRAPHD-DESIRED GRAPHD-READY METAD-DESIRED METAD-READY STORAGED-DESIRED STORAGED-READY AGE + nebula 1 1 1 1 3 3 86s ``` ## Scaling clusters @@ -131,25 +143,34 @@ You can modify the value of `replicas` in `apps_v1alpha1_nebulacluster.yaml` to The following shows how to scale out a Nebula Graph cluster by changing the number of Storage services to 5: -1. Change the value of the `storaged.replicas` in `apps_v1alpha1_nebulacluster.yaml` from `3` to `5`. +1. Change the value of the `storaged.replicas` from `3` to `5` in `apps_v1alpha1_nebulacluster.yaml`. ```yaml storaged: resources: requests: - cpu: "1" - memory: "1Gi" + cpu: "500m" + memory: "500Mi" limits: cpu: "1" memory: "1Gi" replicas: 5 image: vesoft/nebula-storaged - version: v2.5.1 - storageClaim: + version: {{nebula.branch}} + dataVolumeClaim: + resources: + requests: + storage: 2Gi + storageClassName: gp2 + logVolumeClaim: resources: requests: storage: 2Gi - storageClassName: fast-disks + storageClassName: gp2 + reference: + name: statefulsets.apps + version: v1 + schedulerName: default-scheduler ``` 2. Run the following command to update the Nebula Graph cluster CR. 
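   The command itself is unchanged context that this hunk does not show; it presumably mirrors the creation step earlier on this page, e.g.:

   ```bash
   # Re-apply the edited CR so the Operator reconciles the cluster to the new replica count.
   kubectl apply -f apps_v1alpha1_nebulacluster.yaml
   ```

   You can then watch the additional Storaged Pods come up with `kubectl get pods -l app.kubernetes.io/cluster=nebula`.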
diff --git a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md index beee3375c1a..6f901ea505d 100644 --- a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md +++ b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md @@ -100,26 +100,50 @@ helm uninstall "${NEBULA_CLUSTER_NAME}" --namespace="${NEBULA_CLUSTER_NAMESPACE} | Parameter | Default value | Description | | :-------------------------- | :----------------------------------------------------------- | ------------------------------------------------------------ | | `nameOverride` | `nil` | Replaces the name of the chart in the `Chart.yaml` file. | -| `nebula.version` | `v2.5.1` | The version of Nebula Graph. | +| `nebula.version` | `{{nebula.branch}}` | The version of Nebula Graph. | | `nebula.imagePullPolicy` | `IfNotPresent` | The Nebula Graph image pull policy. For details, see [Image pull policy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy). | | `nebula.storageClassName` | `nil` | The StorageClass name. StorageClass is the default persistent volume type. | | `nebula.schedulerName` | `default-scheduler` | The scheduler name of a Nebula Graph cluster. | | `nebula.reference` | `{"name": "statefulsets.apps", "version": "v1"}` | The workload referenced for a Nebula Graph cluster. | -| `nebula.podLabels` | `{}` | Labels for pods in a Nebula Graph cluster. | -| `nebula.podAnnotations` | `{}` | Pod annotations in a Nebula Graph cluster. | | `nebula.graphd.image` | `vesoft/nebula-graphd` | The image name for a Graphd service. Uses the value of `nebula.version` as its version. | -| `nebula.graphd.replicas` | `2` | The number of Graphd services. | -| `nebula.graphd.env` | `[]` | The environment variables for Graphd services. | -| `nebula.graphd.resources` | `{"resources":{"requests":{"cpu":"500m","memory":"500Mi"},"limits":{"cpu":"1","memory":"1Gi"}}}` | The resource configurations for Graphd services. | -| `nebula.graphd.storage` | `1Gi` | The storage capacity for Graphd services. | +| `nebula.graphd.replicas` | `2` | The number of the Graphd service. | +| `nebula.graphd.env` | `[]` | The environment variables for the Graphd service. | +| `nebula.graphd.resources` | `{"resources":{"requests":{"cpu":"500m","memory":"500Mi"},"limits":{"cpu":"1","memory":"1Gi"}}}` | The resource configurations for the Graphd service. | +| `nebula.graphd.logStorage` | `500Mi` | The log disk storage capacity for the Graphd service. | +| `nebula.graphd.podLabels` | `{}` | Labels for the Graphd pod in a Nebula Graph cluster. | +| `nebula.graphd.podAnnotations` | `{}` | Pod annotations for the Graphd pod in a Nebula Graph cluster. | +| `nebula.graphd.nodeSelector` | `{}` |Labels for the Graphd pod to be scheduled to the specified node. | +| `nebula.graphd.tolerations` | `{}` |Tolerations for the Graphd pod. | +| `nebula.graphd.affinity` | `{}` |Affinity for the Graphd pod. | +| `nebula.graphd.readinessProbe` | `{}` |ReadinessProbe for the Graphd pod.| +| `nebula.graphd.sidecarContainers` | `{}` |Sidecar containers for the Graphd pod. | +| `nebula.graphd.sidecarVolumes` | `{}` |Sidecar volumes for the Graphd pod. | | `nebula.metad.image` | `vesoft/nebula-metad` | The image name for a Metad service. Uses the value of `nebula.version` as its version. | -| `nebula.metad.replicas` | `3` | The number of Metad services. 
|
-| `nebula.metad.env` | `[]` | The environment variables for Metad services. |
-| `nebula.metad.resources` | `{"resources":{"requests":{"cpu":"500m","memory":"500Mi"},"limits":{"cpu":"1","memory":"1Gi"}}}` | The resource configurations for Metad services. |
-| `nebula.metad.storage` | `1Gi` | The storage capacity for Metad services. |
+| `nebula.metad.replicas` | `3` | The number of replicas of the Metad service. |
+| `nebula.metad.env` | `[]` | The environment variables for the Metad service. |
+| `nebula.metad.resources` | `{"resources":{"requests":{"cpu":"500m","memory":"500Mi"},"limits":{"cpu":"1","memory":"1Gi"}}}` | The resource configurations for the Metad service. |
+| `nebula.metad.logStorage` | `500Mi` | The log disk capacity for the Metad service. |
+| `nebula.metad.dataStorage` | `1Gi` | The data disk capacity for the Metad service. |
+| `nebula.metad.podLabels` | `{}` | Labels for the Metad pod in a Nebula Graph cluster. |
+| `nebula.metad.podAnnotations` | `{}` | Pod annotations for the Metad pod in a Nebula Graph cluster. |
+| `nebula.metad.nodeSelector` | `{}` | Labels for the Metad pod to be scheduled to the specified node. |
+| `nebula.metad.tolerations` | `{}` | Tolerations for the Metad pod. |
+| `nebula.metad.affinity` | `{}` | Affinity for the Metad pod. |
+| `nebula.metad.readinessProbe` | `{}` | ReadinessProbe for the Metad pod. |
+| `nebula.metad.sidecarContainers` | `{}` | Sidecar containers for the Metad pod. |
+| `nebula.metad.sidecarVolumes` | `{}` | Sidecar volumes for the Metad pod. |
| `nebula.storaged.image` | `vesoft/nebula-storaged` | The image name for a Storaged service. Uses the value of `nebula.version` as its version. |
| `nebula.storaged.replicas` | `3` | The number of Storaged services. |
| `nebula.storaged.env` | `[]` | The environment variables for Storaged services. |
| `nebula.storaged.resources` | `{"resources":{"requests":{"cpu":"500m","memory":"500Mi"},"limits":{"cpu":"1","memory":"1Gi"}}}` | The resource configurations for the Storaged service. |
-| `nebula.storaged.storage` | `1Gi` | The storage capacity for Storaged services. |
+| `nebula.storaged.logStorage` | `500Mi` | The log disk capacity for the Storaged service. |
+| `nebula.storaged.dataStorage` | `1Gi` | The data disk capacity for the Storaged service. |
+| `nebula.storaged.podLabels` | `{}` | Labels for the Storaged pod in a Nebula Graph cluster. |
+| `nebula.storaged.podAnnotations` | `{}` | Pod annotations for the Storaged pod in a Nebula Graph cluster. |
+| `nebula.storaged.nodeSelector` | `{}` | Labels for the Storaged pod to be scheduled to the specified node. |
+| `nebula.storaged.tolerations` | `{}` | Tolerations for the Storaged pod. |
+| `nebula.storaged.affinity` | `{}` | Affinity for the Storaged pod. |
+| `nebula.storaged.readinessProbe` | `{}` | ReadinessProbe for the Storaged pod. |
+| `nebula.storaged.sidecarContainers` | `{}` | Sidecar containers for the Storaged pod. |
+| `nebula.storaged.sidecarVolumes` | `{}` | Sidecar volumes for the Storaged pod. |
| `imagePullSecrets` | `[]` | The Secret to pull the Nebula Graph cluster image. |
\ No newline at end of file
diff --git a/docs-2.0/nebula-operator/4.connect-to-nebula-graph-service.md b/docs-2.0/nebula-operator/4.connect-to-nebula-graph-service.md
index 5ec774861c7..0fc44bd49ce 100644
--- a/docs-2.0/nebula-operator/4.connect-to-nebula-graph-service.md
+++ b/docs-2.0/nebula-operator/4.connect-to-nebula-graph-service.md
@@ -25,9 +25,14 @@ When a Nebula Graph cluster is created, Nebula Operator automatically creates a

2. 
Run the following command to connect to the Nebula Graph database using the IP of the `-graphd-svc` Service above: ```bash - kubectl run -ti --image vesoft/nebula-console:{{console.branch}} --restart=Never -- -addr <10.98.213.34> -port 9669 -u -p + kubectl run -ti --image vesoft/nebula-console:{{console.branch}} --restart=Never -- -addr -port -u -p ``` + For example: + + ```bash + kubectl run -ti --image vesoft/nebula-console:{{console.branch}} --restart=Never -- nebula-console -addr 10.98.213.34 -port 9669 -u root -p vesoft + - `--image`: The image for the tool Nebula Console used to connect to Nebula Graph databases. - ``: The custom Pod name. - `-addr`: The IP of the `ClusterIP` Service, used to connect to Graphd services. @@ -46,14 +51,14 @@ When a Nebula Graph cluster is created, Nebula Operator automatically creates a You can also connect to Nebula Graph databases with **Fully Qualified Domain Name (FQDN)**. The domain format is `-graphd..svc.`: ```bash -kubectl run -ti --image vesoft/nebula-console:{{console.branch}} --restart=Never -- -addr -graphd-svc.default.svc.cluster.local -port 9669 -u root -p vesoft +kubectl run -ti --image vesoft/nebula-console:{{console.branch}} --restart=Never -- -addr -graphd-svc.default.svc.cluster.local -port -u -p ``` The default value of `CLUSTER_DOMAIN` is `cluster.local`. -## Connect to Nebula Graph databases from outside a Nebula Graph cluster +## Connect to Nebula Graph databases from outside a Nebula Graph cluster via `NodePort` -You can create a Service of type `NodePort` to connect to Nebula Graph databases from outside a Nebula Graph cluster with a node IP and an exposed node port. You can also use load balancing software provided by cloud providers and set the Service of type `LoadBalancer`. +You can create a Service of type `NodePort` to connect to Nebula Graph databases from outside a Nebula Graph cluster with a node IP and an exposed node port. You can also use load balancing software provided by cloud providers (such as Azure, AWS, etc.) and set the Service of type `LoadBalancer`. The Service of type `NodePort` forwards the front-end requests via the label selector `spec.selector` to Graphd pods with labels `app.kubernetes.io/cluster: ` and `app.kubernetes.io/component: graphd`. @@ -121,13 +126,13 @@ Steps: 4. Connect to Nebula Graph databases with your node IP and the node port above. ```bash - kubectl run -ti --image vesoft/nebula-console:{{console.branch}}--restart=Never -- -addr -port -u root -p vesoft + kubectl run -ti --image vesoft/nebula-console:{{console.branch}} --restart=Never -- -addr -port -u -p ``` - Example: + For example: ```bash - [root@k8s4 ~]# kubectl run -ti --image vesoft/nebula-console:{{console.branch}} --restart=Never -- nebula-console2 -addr 192.168.8.24 -port 32236 -u root -p vesoft + kubectl run -ti --image vesoft/nebula-console:{{console.branch}} --restart=Never -- nebula-console2 -addr 192.168.8.24 -port 32236 -u root -p vesoft If you don't see a command prompt, try pressing enter. (root@nebula) [(none)]> @@ -139,4 +144,93 @@ Steps: - `-port`: The mapped port of Nebula Graph databases on all cluster nodes. The above example uses `32236`. - `-u`: The username of your Nebula Graph account. Before enabling authentication, you can use any existing username. The default username is root. - `-p`: The password of your Nebula Graph account. Before enabling authentication, you can use any characters as the password. 
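
   For the `LoadBalancer` alternative mentioned at the beginning of this section, only the Service `type` changes. The sketch below is a hypothetical manifest: its name is illustrative, and the selector labels and port are assumed to match the `NodePort` example above.

   ```yaml
   # Hypothetical LoadBalancer variant of the graphd Service; the cloud provider
   # provisions the external address. Name, selector labels, and port are
   # assumptions mirroring the NodePort example above.
   apiVersion: v1
   kind: Service
   metadata:
     name: nebula-graphd-svc-lb
     namespace: default
   spec:
     type: LoadBalancer
     externalTrafficPolicy: Local
     selector:
       app.kubernetes.io/cluster: nebula
       app.kubernetes.io/component: graphd
     ports:
       - name: thrift
         port: 9669
         protocol: TCP
         targetPort: 9669
   ```

   After the cloud provider assigns an external IP (check with `kubectl get service nebula-graphd-svc-lb`), connect with Nebula Console in the same way, using that IP and port `9669`.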
- \ No newline at end of file + +## Connect to Nebula Graph databases from outside a Nebula Graph cluster via Ingress + +Nginx Ingress is an implementation of Kubernetes Ingress. Nginx Ingress watches the Ingress resource of a Kubernetes cluster and generates the Ingress rules into Nginx configurations that enable Nginx to forward 7 layers of traffic. + +You can use Nginx Ingress to connect to a Nebula Graph cluster from outside the cluster using a combination of the HostNetwork and DaemonSet pattern. + +As HostNetwork is used, the Nginx Ingress pod cannot be scheduled to the same node. To avoid listening port conflicts, some nodes can be selected and labeled as edge nodes in advance, which are specially used for the Nginx Ingress deployment. Nginx Ingress is then deployed on these nodes in a DaemonSet mode. + +Ingress does not support TCP or UDP services. For this reason, the nginx-ingress-controller pod uses the flags `--tcp-services-configmap` and `--udp-services-configmap` to point to an existing ConfigMap where the key refers to the external port to be used and the value refers to the format of the service to be exposed. The format of the value is `:`. + +For example, the configurations of the ConfigMap named as `tcp-services` is as follows: + +```yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: tcp-services + namespace: nginx-ingress +data: + # update + 9769: "default/nebula-graphd-svc:9669" +``` + +Steps are as follows. + +1. Create a file named `nginx-ingress-daemonset-hostnetwork.yaml`. + + Click on [nginx-ingress-daemonset-hostnetwork.yaml](https://github.com/vesoft-inc/nebula-operator/blob/{{operator.branch}}/config/samples/nginx-ingress-daemonset-hostnetwork.yaml) to view the complete content of the example YAML file. + + !!! note + + The resource objects in the YAML file above use the namespace `nginx-ingress`. You can run `kubectl create namespace nginx-ingress` to create this namespace, or you can customize the namespace. + +2. Label a node where the DaemonSet named `nginx-ingress-controller` in the above YAML file (The node used in this example is named `worker2` with an IP of `192.168.8.160`) runs. + + ```bash + kubectl label node worker2 nginx-ingress=true + ``` + +3. Run the following command to enable Nginx Ingress in the cluster you created. + + ```bash + kubectl create -f nginx-ingress-daemonset-hostnetwork.yaml + ``` + + Output: + + ```bash + configmap/nginx-ingress-controller created + configmap/tcp-services created + serviceaccount/nginx-ingress created + serviceaccount/nginx-ingress-backend created + clusterrole.rbac.authorization.k8s.io/nginx-ingress created + clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress created + role.rbac.authorization.k8s.io/nginx-ingress created + rolebinding.rbac.authorization.k8s.io/nginx-ingress created + service/nginx-ingress-controller-metrics created + service/nginx-ingress-default-backend created + service/nginx-ingress-proxy-tcp created + daemonset.apps/nginx-ingress-controller created + ``` + + Since the network type that is configured in Nginx Ingress is `hostNetwork`, after successfully deploying Nginx Ingress, with the IP (`192.168.8.160`) of the node where Nginx Ingress is deployed and with the external port (`9769`) you define, you can access Nebula Graph. + +4. Use the IP address and the port configured in the preceding steps. You can connect to Nebula Graph with Nebula Console. 
+ + ```bash + kubectl run -ti --image vesoft/nebula-console:{{console.branch}} --restart=Never -- -addr -port -u -p + ``` + + Output: + + ```bash + kubectl run -ti --image vesoft/nebula-console:{{console.branch}} --restart=Never -- nebula-console -addr 192.168.8.160 -port 9769 -u root -p vesoft + ``` + + - `--image`: The image for the tool Nebula Console used to connect to Nebula Graph databases. + - `` The custom Pod name. The above example uses `nebula-console`. + - `-addr`: The IP of the node where Nginx Ingress is deployed. The above example uses `192.168.8.160`. + - `-port`: The port used for external network access. The above example uses `9769`. + - `-u`: The username of your Nebula Graph account. Before enabling authentication, you can use any existing username. The default username is root. + - `-p`: The password of your Nebula Graph account. Before enabling authentication, you can use any characters as the password. + + A successful connection to the database is indicated if the following is returned: + + ```bash + If you don't see a command prompt, try pressing enter. + (root@nebula) [(none)]> + ``` \ No newline at end of file diff --git a/docs-2.0/nebula-operator/7.operator-faq.md b/docs-2.0/nebula-operator/7.operator-faq.md index f0cd4828fc6..f1fe53f6868 100644 --- a/docs-2.0/nebula-operator/7.operator-faq.md +++ b/docs-2.0/nebula-operator/7.operator-faq.md @@ -6,7 +6,7 @@ No, because the v1.x version of Nebula Graph does not support DNS, and Nebula Op ## Does Nebula Operator support the rolling upgrade feature for Nebula Graph clusters? -Not available at the moment. +Nebula Operator currently supports cluster upgrading from version 2.5.x to version 2.6.x. ## Is cluster stability guaranteed if using local storage? diff --git a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md new file mode 100644 index 00000000000..d5b9cbffe8f --- /dev/null +++ b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md @@ -0,0 +1,65 @@ +# Customize configuration parameters for a Nebula Graph cluster + +Meta, Storage, and Graph services in a Nebula Cluster have their configurations, which are defined as `config` in the YAML file of the CR instance (Nebula Graph cluster) you created. The settings in `config` are mapped and loaded into the ConfigMap of the corresponding service in Kubernetes. + +!!! note + + It is not available to customize configuration parameters for Nebula Clusters deployed with Helm. + +The structure of `config` is as follows. + +``` +Config map[string]string `json:"config,omitempty"` +``` +## Prerequisites + +You have created a Nebula Graph cluster. For how to create a cluster with Kubectl, see [Create a cluster with Kubectl](../3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md). + + +## Steps + +The following example uses a cluster named `nebula` to show how to set `config` for the Graph service in a Nebula Graph cluster. + +1. Run the following command to access the edit page of the `nebula` cluster. + + ```bash + kubectl edit nebulaclusters.apps.nebula-graph.io nebula + ``` + +2. Add `enable_authorize` and `auth_type` under `spec.graphd.config`. 
+ + ```yaml + apiVersion: apps.nebula-graph.io/v1alpha1 + kind: NebulaCluster + metadata: + name: nebula + namespace: default + spec: + graphd: + resources: + requests: + cpu: "500m" + memory: "500Mi" + limits: + cpu: "1" + memory: "1Gi" + replicas: 1 + image: vesoft/nebula-graphd + version: {{nebula.branch}} + storageClaim: + resources: + requests: + storage: 2Gi + storageClassName: gp2 + config: // Custom configuration parameters for the Graph service in a cluster. + "enable_authorize": "true" + "auth_type": "password" + ... + ``` + +After customizing the parameters `enable_authorize` and `auth_type`, the configurations in the corresponding ConfigMap (`nebula-graphd`) of the Graph service will be overwritten. + +## Learn more + +For more information on the configuration parameters of Meta, Storage, and Graph services, see [Configurations](../../5.configurations-and-logs/1.configurations/1.configurations.md). + diff --git a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.2.pv-reclaim.md b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.2.pv-reclaim.md new file mode 100644 index 00000000000..920e1ad657e --- /dev/null +++ b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.2.pv-reclaim.md @@ -0,0 +1,98 @@ +# Reclaim PVs + +Nebula Operator uses PVs (Persistent Volumes) and PVCs (Persistent Volume Claims) to store persistent data. If you accidentally deletes a Nebula Graph cluster, PV and PVC objects and the relevant data will be retained to ensure data security. + +You can define whether to reclaim PVs or not in the configuration file of the cluster's CR instance with the parameter `enablePVReclaim`. + +If you need to release a graph space and retain the relevant data, update your nebula cluster by setting the parameter `enablePVReclaim` to `true`. + +## Prerequisites + +You have created a cluster. For how to create a cluster with Kubectl, see [Create a cluster with Kubectl](../3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md). + +## Steps + +The following example uses a cluster named `nebula` to show how to set `enablePVReclaim`: + +1. Run the following command to access the edit page of the `nebula` cluster. + + ```bash + kubectl edit nebulaclusters.apps.nebula-graph.io nebula + ``` + +2. Add `enablePVReclaim` and set its value to `true` under `spec`. + + ```yaml + apiVersion: apps.nebula-graph.io/v1alpha1 + kind: NebulaCluster + metadata: + name: nebula + spec: + enablePVReclaim: true //Set its value to true. 
+ graphd: + image: vesoft/nebula-graphd + logVolumeClaim: + resources: + requests: + storage: 2Gi + storageClassName: fast-disks + replicas: 1 + resources: + limits: + cpu: "1" + memory: 1Gi + requests: + cpu: 500m + memory: 500Mi + version: {{nebula.branch}} + imagePullPolicy: IfNotPresent + metad: + dataVolumeClaim: + resources: + requests: + storage: 2Gi + storageClassName: fast-disks + image: vesoft/nebula-metad + logVolumeClaim: + resources: + requests: + storage: 2Gi + storageClassName: fast-disks + replicas: 1 + resources: + limits: + cpu: "1" + memory: 1Gi + requests: + cpu: 500m + memory: 500Mi + version: {{nebula.branch}} + nodeSelector: + nebula: cloud + reference: + name: statefulsets.apps + version: v1 + schedulerName: default-scheduler + storaged: + dataVolumeClaim: + resources: + requests: + storage: 2Gi + storageClassName: fast-disks + image: vesoft/nebula-storaged + logVolumeClaim: + resources: + requests: + storage: 2Gi + storageClassName: fast-disks + replicas: 3 + resources: + limits: + cpu: "1" + memory: 1Gi + requests: + cpu: 500m + memory: 500Mi + version: {{nebula.branch}} + ... + ``` diff --git a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.3.balance-data-when-scaling-storage.md b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.3.balance-data-when-scaling-storage.md new file mode 100644 index 00000000000..8ec9d8792f6 --- /dev/null +++ b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.3.balance-data-when-scaling-storage.md @@ -0,0 +1,104 @@ +# Balance storage data after scaling out + +After the Storage service is scaled out, you can decide whether to balance the data in the Storage service. + +The scaling out of the Nebula Graph's Storage service is divided into two stages. In the first stage, the status of all pods is changed to `Ready`. In the second stage, the commands of `BALANCE DATA`和`BALANCE LEADER` are executed to balance data. These two stages decouple the scaling out process of the controller replica from the balancing data process, so that you can choose to perform the data balancing operation during low traffic period. The decoupling of the scaling out process from the balancing process can effectively reduce the impact on online services during data migration. + +You can define whether to balance data automatically or not with the parameter `enableAutoBalance` in the configuration file of the CR instance of the cluster you created. + +## Prerequisites + +You have created a Nebula Graph cluster. For how to create a cluster with Kubectl, see [Create a cluster with Kubectl](../3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md). + +## Steps + +The following example uses a cluster named `nebula` to show how to set `enableAutoBalance`. + +1. Run the following command to access the edit page of the `nebula` cluster. + + ```bash + kubectl edit nebulaclusters.apps.nebula-graph.io nebula + ``` + +2. Add `enableAutoBalance` and set its value to `true` under `spec.storaged`. 
+ + ```yaml + apiVersion: apps.nebula-graph.io/v1alpha1 + kind: NebulaCluster + metadata: + name: nebula + spec: + graphd: + image: vesoft/nebula-graphd + logVolumeClaim: + resources: + requests: + storage: 2Gi + storageClassName: fast-disks + replicas: 1 + resources: + limits: + cpu: "1" + memory: 1Gi + requests: + cpu: 500m + memory: 500Mi + version: {{nebula.branch}} + imagePullPolicy: IfNotPresent + metad: + dataVolumeClaim: + resources: + requests: + storage: 2Gi + storageClassName: fast-disks + image: vesoft/nebula-metad + logVolumeClaim: + resources: + requests: + storage: 2Gi + storageClassName: fast-disks + replicas: 1 + resources: + limits: + cpu: "1" + memory: 1Gi + requests: + cpu: 500m + memory: 500Mi + version: {{nebula.branch}} + nodeSelector: + nebula: cloud + reference: + name: statefulsets.apps + version: v1 + schedulerName: default-scheduler + storaged: + enableAutoBalance: true //Set its value to true which means storage data will be balanced after the Storage service is scaled out. + dataVolumeClaim: + resources: + requests: + storage: 2Gi + storageClassName: fast-disks + image: vesoft/nebula-storaged + logVolumeClaim: + resources: + requests: + storage: 2Gi + storageClassName: fast-disks + replicas: 3 + resources: + limits: + cpu: "1" + memory: 1Gi + requests: + cpu: 500m + memory: 500Mi + version: {{nebula.branch}} + ... + ``` + + - When the value of `enableAutoBalance` is set to `true`, the Storage data will be automatically balanced after the Storage service is scaled out. + + - When the value of `enableAutoBalance` is set to `false`, the Storage data will not be automatically balanced after the Storage service is scaled out. + + - When the `enableAutoBalance` parameter is not set, the system will not automatically balance Storage data by default after the Storage service is scaled out. \ No newline at end of file diff --git a/docs-2.0/nebula-operator/9.upgrade-nebula-cluster.md b/docs-2.0/nebula-operator/9.upgrade-nebula-cluster.md new file mode 100644 index 00000000000..7a8130a75a6 --- /dev/null +++ b/docs-2.0/nebula-operator/9.upgrade-nebula-cluster.md @@ -0,0 +1,196 @@ +# Upgrade Nebula Graph clusters created with Nebula Operator + +This topic introduces how to upgrade a Nebula Graph cluster created with Nebula Operator. + + +## Limits + +- Only Nebula Graph clusters created with Nebula Operator are supported. + +- Only upgrading Nebula Graph from 2.5.x to 2.6.x is supported. + +- Upgrading clusters created via Nebula Operator of version 0.8.0 is not supported. + + +## Upgrade a Nebula Graph cluster with Kubectl + +### Prerequisites + +You have created a Nebula Graph cluster with Kubectl. For details, see [Create a Nebula Graph cluster with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md). + +The version of the Nebula Graph cluster to be upgraded in this topic is `2.5.1`, and its YAML file name is `apps_v1alpha1_nebulacluster.yaml`. + + +### Steps + +1. Check the image version of the services in the cluster. + + ```bash + kubectl get pods -l app.kubernetes.io/cluster=nebula -o jsonpath="{.items[*].spec.containers[*].image}" |tr -s '[[:space:]]' '\n' |sort |uniq -c + ``` + + Output: + + ```bash + 1 vesoft/nebula-graphd:v2.5.1 + 1 vesoft/nebula-metad:v2.5.1 + 3 vesoft/nebula-storaged:v2.5.1 + ``` + +2. Edit the `apps_v1alpha1_nebulacluster.yaml` file by changing the values of all the `version` parameters from v2.5.1 to {{nebula.branch}}. 
+ + The modified YAML file reads as follows: + + ```yaml + apiVersion: apps.nebula-graph.io/v1alpha1 + kind: NebulaCluster + metadata: + name: nebula + spec: + graphd: + resources: + requests: + cpu: "500m" + memory: "500Mi" + limits: + cpu: "1" + memory: "1Gi" + replicas: 1 + image: vesoft/nebula-graphd + version: {{nebula.branch}} //Change the value from v2.5.1 to {{nebula.branch}}. + service: + type: NodePort + externalTrafficPolicy: Local + logVolumeClaim: + resources: + requests: + storage: 2Gi + storageClassName: gp2 + metad: + resources: + requests: + cpu: "500m" + memory: "500Mi" + limits: + cpu: "1" + memory: "1Gi" + replicas: 1 + image: vesoft/nebula-metad + version: {{nebula.branch}} //Change the value from v2.5.1 to {{nebula.branch}}. + dataVolumeClaim: + resources: + requests: + storage: 2Gi + storageClassName: gp2 + logVolumeClaim: + resources: + requests: + storage: 2Gi + storageClassName: gp2 + storaged: + resources: + requests: + cpu: "500m" + memory: "500Mi" + limits: + cpu: "1" + memory: "1Gi" + replicas: 3 + image: vesoft/nebula-storaged + version: {{nebula.branch}} //Change the value from v2.5.1 to {{nebula.branch}}. + dataVolumeClaim: + resources: + requests: + storage: 2Gi + storageClassName: gp2 + logVolumeClaim: + resources: + requests: + storage: 2Gi + storageClassName: gp2 + reference: + name: statefulsets.apps + version: v1 + schedulerName: default-scheduler + imagePullPolicy: Always + ``` + +3. Run the following command to apply the version update to the cluster CR. + + ```bash + kubectl apply -f apps_v1alpha1_nebulacluster.yaml + ``` + +4. After waiting for about 2 minutes, run the following command to see if the image versions of the services in the cluster have been changed to {{nebula.branch}}. + + ```bash + kubectl get pods -l app.kubernetes.io/cluster=nebula -o jsonpath="{.items[*].spec.containers[*].image}" |tr -s '[[:space:]]' '\n' |sort |uniq -c + ``` + + Output: + + ```bash + 1 vesoft/nebula-graphd:{{nebula.branch}} + 1 vesoft/nebula-metad:{{nebula.branch}} + 3 vesoft/nebula-storaged:{{nebula.branch}} + ``` + +## Upgrade a Nebula Graph cluster with Helm + +### Prerequisites + +You have created a Nebula Graph cluster with Helm. For details, see [Create a Nebula Graph cluster with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md). + +### Steps + +1. Update the information of available charts locally from chart repositories. + + ```bash + helm repo update + ``` + +2. Set environment variables to your desired values. + + ```bash + export NEBULA_CLUSTER_NAME=nebula # The desired Nebula Graph cluster name. + export NEBULA_CLUSTER_NAMESPACE=nebula # The desired namespace where your Nebula Graph cluster locates. + ``` + +3. Upgrade a Nebula Graph cluster. + + For example, upgrade a cluster to {{nebula.branch}}. + + ```bash + helm upgrade "${NEBULA_CLUSTER_NAME}" nebula-operator/nebula-cluster \ + --namespace="${NEBULA_CLUSTER_NAMESPACE}" \ + --set nameOverride=${NEBULA_CLUSTER_NAME} \ + --set nebula.version={{nebula.branch}} + ``` + + The value of `--set nebula.version` specifies the version of the cluster you want to upgrade to. + +4. Run the following command to check the status and version of the upgraded cluster. 
+ + Check cluster status: + + ```bash + $ kubectl -n "${NEBULA_CLUSTER_NAMESPACE}" get pod -l "app.kubernetes.io/cluster=${NEBULA_CLUSTER_NAME}" + NAME READY STATUS RESTARTS AGE + nebula-graphd-0 1/1 Running 0 2m + nebula-graphd-1 1/1 Running 0 2m + nebula-metad-0 1/1 Running 0 2m + nebula-metad-1 1/1 Running 0 2m + nebula-metad-2 1/1 Running 0 2m + nebula-storaged-0 1/1 Running 0 2m + nebula-storaged-1 1/1 Running 0 2m + nebula-storaged-2 1/1 Running 0 2m + ``` + + Check cluster version: + + ```bash + $ kubectl get pods -l app.kubernetes.io/cluster=nebula -o jsonpath="{.items[*].spec.containers[*].image}" |tr -s '[[:space:]]' '\n' |sort |uniq -c + 1 vesoft/nebula-graphd:{{nebula.branch}} + 1 vesoft/nebula-metad:{{nebula.branch}} + 3 vesoft/nebula-storaged:{{nebula.branch}} + ``` \ No newline at end of file diff --git a/mkdocs.yml b/mkdocs.yml index db17fff2b35..535e89d7738 100755 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -110,8 +110,8 @@ extra: release: 1.0.0 branch: master operator: - release: 0.8.0 - branch: v0.8.0 + release: 0.9.0 + branch: v0.9.0 nav: @@ -433,16 +433,20 @@ nav: - Import data from SST files: nebula-exchange/use-exchange/ex-ug-import-from-sst.md - Exchange FAQ: nebula-exchange/ex-ug-FAQ.md -# - Nebula Operator: -# - What is Nebula Operator: nebula-operator/1.introduction-to-nebula-operator.md -# - Overview of using Nebula Operator: nebula-operator/6.get-started-with-operator.md -# - Deploy Nebula Operator: nebula-operator/2.deploy-nebula-operator.md -# - Deploy clusters: -# - Deploy clusters with Kubectl: nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md -# - Deploy clusters with Helm: nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md -# - Connect to Nebula Graph databases: nebula-operator/4.connect-to-nebula-graph-service.md -# - Self-healing: nebula-operator/5.operator-failover.md -# - FAQ: nebula-operator/7.operator-faq.md + - Nebula Operator: + - What is Nebula Operator: nebula-operator/1.introduction-to-nebula-operator.md + - Overview of using Nebula Operator: nebula-operator/6.get-started-with-operator.md + - Deploy Nebula Operator: nebula-operator/2.deploy-nebula-operator.md + - Deploy clusters: + - Deploy clusters with Kubectl: nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md + - Deploy clusters with Helm: nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md + - Configure clusters: + - Custom configuration parameters for a Nebula Graph cluster: nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md + - Reclaim PV: nebula-operator/8.custom-cluster-configurations/8.2.pv-reclaim.md + - Balance storage data after scaling out: nebula-operator/8.custom-cluster-configurations/8.3.balance-data-when-scaling-storage.md + - Connect to Nebula Graph databases: nebula-operator/4.connect-to-nebula-graph-service.md + - Self-healing: nebula-operator/5.operator-failover.md + - FAQ: nebula-operator/7.operator-faq.md - Nebula Algorithm: nebula-algorithm.md