update docs #89

Merged · 4 commits · Nov 12, 2021
4 changes: 0 additions & 4 deletions FAQ.md
@@ -2,10 +2,6 @@

v1.x does not support DNS, which Nebula Operator requires, so it is not compatible.

- When will the upgrade feature be supported?

The Nebula Operator needs to stay consistent with NebulaGraph; once the database supports rolling upgrades, the Operator will add support for this feature.

- Does using local storage guarantee cluster stability?

There is no guarantee when using local storage, because it binds the Pod to a particular node. The Operator does not currently have the ability to fail over when the bound node goes down. This is not an issue with network storage.
69 changes: 63 additions & 6 deletions README.md
@@ -7,6 +7,7 @@ It evolved from [NebulaGraph Cloud Service](https://www.nebula-cloud.io/), makes
- [Install Nebula Operator](#install-nebula-operator)
- [Create and Destroy](#create-and-destroy-a-nebula-cluster)
- [Resize](#resize-a-nebula-cluster)
- [Rolling Upgrade](#upgrade-a-nebula-cluster)
- [Failover](#failover)

### Install nebula operator
@@ -18,7 +19,7 @@ $ kubectl create -f config/samples/apps_v1alpha1_nebulacluster.yaml
```
A non-HA-mode nebula cluster will be created.
```bash
$ kubectl get pods -l app.kubernetes.io/instance=nebula
$ kubectl get pods -l app.kubernetes.io/cluster=nebula
NAME READY STATUS RESTARTS AGE
nebula-graphd-0 1/1 Running 0 1m
nebula-metad-0 1/1 Running 0 1m
@@ -63,7 +64,7 @@ Modify the file and change `replicas` from 3 to 5.
memory: "1Gi"
replicas: 5
image: vesoft/nebula-storaged
version: v2.0.1
version: v2.6.1
storageClaim:
resources:
requests:
@@ -78,7 +79,7 @@ $ kubectl apply -f config/samples/apps_v1alpha1_nebulacluster.yaml

The storaged cluster will scale to 5 members (5 pods):
```bash
$ kubectl get pods -l app.kubernetes.io/instance=nebula
$ kubectl get pods -l app.kubernetes.io/cluster=nebula
NAME READY STATUS RESTARTS AGE
nebula-graphd-0 1/1 Running 0 2m
nebula-metad-0 1/1 Running 0 2m
@@ -101,17 +102,18 @@ Similarly we can decrease the size of the cluster from 5 back to 3 by changing t
memory: "1Gi"
replicas: 3
image: vesoft/nebula-storaged
version: v2.0.1
version: v2.6.1
storageClaim:
resources:
requests:
storage: 2Gi
storageClassName: fast-disks
```

We should see the storaged cluster eventually scale down to 3 pods:

```bash
$ kubectl get pods -l app.kubernetes.io/instance=nebula
$ kubectl get pods -l app.kubernetes.io/cluster=nebula
NAME READY STATUS RESTARTS AGE
nebula-graphd-0 1/1 Running 0 10m
nebula-metad-0 1/1 Running 0 10m
@@ -122,6 +124,46 @@ nebula-storaged-2 1/1 Running 0 10m

In addition, you can [Install Nebula Cluster with helm](doc/user/nebula_cluster_helm_guide.md).
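
A minimal sketch of a helm-based install is shown below; the chart repository URL and chart names are assumptions, so treat the linked guide as authoritative:

```bash
# Assumed repo URL and chart names -- follow doc/user/nebula_cluster_helm_guide.md for the authoritative steps.
$ helm repo add nebula-operator https://vesoft-inc.github.io/nebula-operator/charts
$ helm repo update
# Install the operator first, then a cluster managed by it.
$ helm install nebula-operator nebula-operator/nebula-operator --namespace nebula-operator-system --create-namespace
$ helm install nebula nebula-operator/nebula-cluster --namespace default
```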

### Upgrade a nebula cluster
Create a nebula cluster with the version specified (v2.5.1):

```bash
$ kubectl apply -f config/samples/apps_v1alpha1_nebulacluster.yaml
$ kubectl get pods -l app.kubernetes.io/cluster=nebula
NAME READY STATUS RESTARTS AGE
nebula-graphd-0 1/1 Running 0 25m
nebula-metad-0 1/1 Running 0 26m
nebula-storaged-0 1/1 Running 0 22m
nebula-storaged-1 1/1 Running 0 24m
nebula-storaged-2 1/1 Running 0 25m
```

The container image version should be v2.5.1:

```bash
$ kubectl get pods -l app.kubernetes.io/cluster=nebula -o jsonpath="{.items[*].spec.containers[*].image}" |tr -s '[[:space:]]' '\n' |sort |uniq -c
1 vesoft/nebula-graphd:v2.5.1
1 vesoft/nebula-metad:v2.5.1
3 vesoft/nebula-storaged:v2.5.1
```

Now modify the file `apps_v1alpha1_nebulacluster.yaml` and change the `version` from v2.5.1 to v2.6.1:
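
The modified `storaged` section would then look roughly like this (an abridged sketch based on the snippets above; `graphd` and `metad` carry their own `version` fields and should be bumped the same way):

```yaml
spec:
  storaged:
    replicas: 3
    image: vesoft/nebula-storaged
    version: v2.6.1          # changed from v2.5.1
    storageClaim:
      resources:
        requests:
          storage: 2Gi
      storageClassName: fast-disks
```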

Apply the version change to the cluster CR:

```bash
$ kubectl apply -f config/samples/apps_v1alpha1_nebulacluster.yaml
```

Wait about 2 minutes; the container image version should be updated to v2.6.1:

```bash
$ kubectl get pods -l app.kubernetes.io/cluster=nebula -o jsonpath="{.items[*].spec.containers[*].image}" |tr -s '[[:space:]]' '\n' |sort |uniq -c
1 vesoft/nebula-graphd:v2.6.1
1 vesoft/nebula-metad:v2.6.1
3 vesoft/nebula-storaged:v2.6.1
```

### Failover
If a minority of nebula components crash, the nebula operator will automatically recover from the failure. Let's walk through this in the following steps.

@@ -139,7 +181,7 @@ $ kubectl delete pod nebula-storaged-2 --now
The nebula operator will recover from the failure by creating a new pod `nebula-storaged-2`:

```bash
$ kubectl get pods -l app.kubernetes.io/instance=nebula
$ kubectl get pods -l app.kubernetes.io/cluster=nebula
NAME READY STATUS RESTARTS AGE
nebula-graphd-0 1/1 Running 0 15m
nebula-metad-0 1/1 Running 0 15m
@@ -148,6 +190,21 @@ nebula-storaged-1 1/1 Running 0 15m
nebula-storaged-2 1/1 Running 0 19s
```

## Compatibility matrix

Nebula Operator <-> NebulaGraph

| | NebulaGraph v2.5 | NebulaGraph v2.6 |
|----------------------- |------------------|------------------|
| `v0.8.0` | ✓ | - |
| `v0.9.0`* | ✓ | ✓ |

Key:

* `✓` Compatible.
* `-` Not Compatible.
* `*` Note that the `StorageClaim` field is split into `LogVolumeClaim` and `DataVolumeClaim` in the CRD, so v0.9.0 is not forward compatible (see the sketch below).
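
A rough sketch of the v0.9.0-style storage section follows; the camelCase field names are assumptions derived from the note above, so consult the v0.9.0 CRD for the authoritative schema:

```yaml
  storaged:
    # v0.9.0 splits the former storageClaim into separate log and data claims
    # (assumed field names: logVolumeClaim / dataVolumeClaim)
    logVolumeClaim:
      resources:
        requests:
          storage: 1Gi
      storageClassName: fast-disks
    dataVolumeClaim:
      resources:
        requests:
          storage: 2Gi
      storageClassName: fast-disks
```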

## FAQ

Please refer to [FAQ.md](FAQ.md)
13 changes: 13 additions & 0 deletions doc/user/balance.md
@@ -0,0 +1,13 @@
## Scale storage nodes and balance

Scaling out the Storage service is divided into two stages.

* In the first stage, you need to wait for the status of all newly created Pods to be Ready.

* In the second stage, the BALANCE DATA and BALANCE LEADER commands are executed (see the example below).
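
For reference, the second stage corresponds to running the following statements through nebula-console; this is also how to balance manually when automatic balancing (see `enableAutoBalance` below) is disabled. The console image tag and graphd address are assumptions borrowed from the client service doc, so adjust them to your deployment:

```bash
$ kubectl run --rm -ti --image vesoft/nebula-console:v2.5.0 --restart=Never -- /bin/sh
/ # nebula-console -u user -p password --address=nebula-graphd-svc --port=9669
# Trigger the data migration, then rebalance the raft leaders.
(user@nebula) [(none)]> BALANCE DATA;
(user@nebula) [(none)]> BALANCE LEADER;
```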

We provide a parameter `enableAutoBalance` in the CRD to control whether data and leaders are balanced automatically.
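
A minimal sketch of enabling it, assuming the flag lives under the `storaged` section of the NebulaCluster spec:

```yaml
spec:
  storaged:
    enableAutoBalance: true   # assumption: storaged-level flag; balance runs automatically after scale-out
```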

With these two stages, the controller's replica-scaling process is decoupled from the data-balancing process, and users can execute the balance step during periods of low traffic.

Such an implementation effectively reduces the impact of data migration on online services, in line with the Nebula Graph principle: balancing data is not fully automated, and users decide when to balance data.
38 changes: 35 additions & 3 deletions doc/user/client_service.md
@@ -15,7 +15,7 @@ The client service is of type `ClusterIP` and accessible only from within the Ku
For example, access the service from a pod in the cluster:

```shell script
$ kubectl run --rm -ti --image vesoft/nebula-console:v2 --restart=Never -- /bin/sh
$ kubectl run --rm -ti --image vesoft/nebula-console:v2.5.0 --restart=Never -- /bin/sh
/ # nebula-console -u user -p password --address=nebula-graphd-svc --port=9669
2021/04/12 08:16:30 [INFO] connection pool is initialized successfully

@@ -77,12 +77,44 @@ nebula-metad-headless ClusterIP None <none> 9559/TCP
nebula-storaged-headless ClusterIP None <none> 9779/TCP,19779/TCP,19780/TCP,9778/TCP 23h
```

The graphd client API should now be accessible from outside the Kubernetes cluster:
Now test the connection outside the Kubernetes cluster:

```shell script
/ # nebula-console -u user -p password --address=192.168.8.26 --port=9669
/ # nebula-console -u user -p password --address=192.168.8.26 --port=32236
2021/04/12 08:50:32 [INFO] connection pool is initialized successfully

Welcome to Nebula Graph!
(user@nebula) [(none)]>
```

## Accessing the service via nginx-ingress-controller
Nginx Ingress is an implementation of Kubernetes Ingress. It watches the Ingress resources of the Kubernetes cluster and translates the Ingress rules into Nginx configurations, enabling Nginx to forward layer-7 traffic.

We provide a scenario that replaces the `NodePort` service: Nginx Ingress runs in hostNetwork + DaemonSet mode.

Because hostNetwork is used, Nginx Ingress pods cannot be scheduled onto the same node. To avoid listening-port conflicts, select and label some nodes as edge nodes in advance, dedicated to deploying Nginx Ingress; Nginx Ingress is then deployed on these nodes in DaemonSet mode.
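
An abridged sketch of such a DaemonSet is shown below; the image tag, node label, and namespace are placeholders, and `config/samples/nginx-ingress-daemonset-hostnetwork.yaml` in this repository is the authoritative manifest:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: nginx-ingress
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      hostNetwork: true              # listen directly on the edge node's network interfaces
      nodeSelector:
        nginx-ingress: "true"        # label applied in advance to the chosen edge nodes
      containers:
      - name: controller
        image: k8s.gcr.io/ingress-nginx/controller:v1.0.0   # placeholder tag
        args:
        - /nginx-ingress-controller
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
```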

Ingress does not support TCP or UDP services. For this reason the nginx-ingress-controller uses the flags `--tcp-services-configmap` and `--udp-services-configmap` to point to an existing ConfigMap where the key is the external port to use and the value indicates the service to expose, using the format `<namespace/service name>:<service port>`.

```shell script
$ cat config/samples/nginx-ingress-daemonset-hostnetwork.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: nginx-ingress
data:
  # route external port 9669 to the graphd service port 9669
  9669: "default/nebula-graphd-svc:9669"
```

Once the `tcp-services` ConfigMap is configured, test the connection from outside the Kubernetes cluster:

```shell script
/ # nebula-console -addr 192.168.8.25 -port 9669 -u root -p nebula
2021/11/08 14:53:56 [INFO] connection pool is initialized successfully

Welcome to Nebula Graph!

(root@nebula) [(none)]>
```
4 changes: 2 additions & 2 deletions doc/user/custom_config.md
@@ -1,6 +1,6 @@
# Configure custom parameters

For each component has a configuration entry, it defines in crd as config which is a map structure, it will be loaded by configamap.
Each component has a configuration entry. It is defined in the CRD as `config`, a map structure, and it is loaded via a ConfigMap.
```go
// Config defines a graphd configuration load into ConfigMap
Config map[string]string `json:"config,omitempty"`
@@ -24,7 +24,7 @@ spec:
memory: "1Gi"
replicas: 1
image: vesoft/nebula-graphd
version: v2.0.1
version: v2.6.1
storageClaim:
resources:
requests:
7 changes: 7 additions & 0 deletions doc/user/pv_reclaim.md
@@ -0,0 +1,7 @@
# PV reclaim

Nebula Operator uses PV (Persistent Volume) and PVC (Persistent Volume Claim) to store persistent data. If you accidentally delete a nebula cluster, the PV/PVC objects and data are still retained to ensure data safety.

We provide a parameter `enablePVReclaim` in the CRD to control whether the PVs are reclaimed.

If you need to release the storage space and do not want to retain the data, update your nebula instance and set the parameter `enablePVReclaim` to __true__.
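
A minimal sketch, assuming `enablePVReclaim` is a top-level field of the NebulaCluster spec, applied with `kubectl apply -f` on the updated manifest:

```yaml
spec:
  enablePVReclaim: true   # assumption: top-level spec flag; PVs are released instead of retained on cluster deletion
```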