This is a follow-up to the blog Graph Enabled Infra Ops, which demonstrated how to use NebulaGraph to help with OpenStack infra ops. In this blog, we will explore how NebulaGraph can help with Kubernetes infra ops.
Demo video: k8s-graph-demo.mp4
- Install minikube and create a cluster

Assuming an Ubuntu 20.04 server with Docker installed:
```bash
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube
sudo mv minikube /usr/local/bin/
minikube start --vm-driver=none
```
- Create some resources
- Deploy NebulaGraph with Nebula-Operator
- Deploy APISIX following this blog
- Deploy other sample resources as follows:
```bash
kubectl apply -f resources/sample.yaml
kubectl apply -f https://github.com/MrSupiri/kube-ebpf/raw/main/deployment.yaml
kubectl apply -f https://github.com/MrSupiri/kube-ebpf/raw/main/prometheus-deployment.yaml
```
- Get resources and save them to files

Dump resources from the Kubernetes cluster:
```bash
cd data
kubectl get pods -o json > pods.json
kubectl get services -o json > services.json
kubectl get deployments -o json > deployments.json
kubectl get replicasets -o json > replicasets.json
kubectl get statefulsets -o json > statefulsets.json
kubectl get daemonsets -o json > daemonsets.json
kubectl get persistentvolumeclaims -o json > persistentvolumeclaims.json
kubectl get persistentvolumes -o json > persistentvolumes.json
kubectl get ingresses -o json > ingresses.json
kubectl get configmaps -o json > configmaps.json
kubectl get secrets -o json > secrets.json
kubectl get horizontalpodautoscalers -o json > horizontalpodautoscalers.json
kubectl get nodes -o json > nodes.json
# CRDs themselves are cluster-scoped
kubectl get crds -o json > crds.json
# dump the instances of every custom resource definition
crd_names=$(jq -r '.items[].metadata.name' crds.json)
mkdir -p custom_resources
for crd_name in $crd_names; do
    kubectl get "$crd_name" --all-namespaces -o json > "custom_resources/${crd_name}.json"
done
```
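To give a sense of what can be extracted from these dumps, here is a minimal sketch (not the project's `pull_k8s_resources.py`) that pulls "runs on" and "owned by" relations out of a `kubectl get pods -o json`-style document. The relation names and ID scheme are illustrative assumptions.

```python
import json

# Sample document mimicking the shape of `kubectl get pods -o json`
sample = json.loads("""
{
  "items": [
    {
      "metadata": {
        "name": "web-0",
        "namespace": "default",
        "ownerReferences": [{"kind": "StatefulSet", "name": "web"}]
      },
      "spec": {"nodeName": "minikube"}
    }
  ]
}
""")

def pod_relations(pods_doc):
    """Yield (src, relation, dst) triples for each pod in the dump."""
    for item in pods_doc.get("items", []):
        meta = item["metadata"]
        pod_id = f'{meta["namespace"]}/{meta["name"]}'
        # pod -> node it is scheduled on
        node = item.get("spec", {}).get("nodeName")
        if node:
            yield (pod_id, "runs_on", f"node/{node}")
        # pod -> its owner (ReplicaSet, StatefulSet, ...)
        for owner in meta.get("ownerReferences", []):
            yield (pod_id, "owned_by",
                   f'{owner["kind"].lower()}/{meta["namespace"]}/{owner["name"]}')

print(list(pod_relations(sample)))
```

The same ownerReference-walking idea applies to the other resource dumps, which is how the resource files can be stitched into one graph.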
Capture TCP connections between workloads with eBPF:
```bash
minikube ssh -- sudo apt update
# the headers package must match the node's kernel version (check with `uname -r`)
minikube ssh -- sudo apt install linux-headers-5.4.0-146-generic
minikube ssh
apt install zip
# Install bcc & the bcc python tools from source
# following https://github.com/iovisor/bcc/blob/master/INSTALL.md#ubuntu---source
# omitting the step of installing bcc-tools
export PATH="/usr/share/bcc/tools/:/usr/share/bcc/tools/old/:$PATH"
# switch the tcptracer shebang from python to python3
grep -q python3 /usr/share/bcc/tools/tcptracer || sed -i 's/python/python3/g' /usr/share/bcc/tools/tcptracer
# capture connections into tcptracer_output.txt (Ctrl-C to stop)
tcptracer > tcptracer_output.txt
exit
# copy tcptracer_output.txt to the local machine
minikube ssh -- cat tcptracer_output.txt > tcptracer_output.txt
```
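The tcptracer output is just whitespace-separated text, so turning it into connection edges is straightforward. Below is a minimal parsing sketch; the column layout (`T PID COMM IP SADDR DADDR SPORT DPORT`) is the usual bcc tcptracer format, so adjust the indices if your version prints differently.

```python
# Sample output mimicking bcc's tcptracer
sample_output = """\
Tracing TCP established connections. Ctrl-C to end.
T  PID    COMM         IP SADDR            DADDR            SPORT  DPORT
C  12345  curl         4  172.17.0.2       172.17.0.3       52034  8080
A  6789   nginx        4  172.17.0.3       172.17.0.2       8080   52034
"""

def parse_tcptracer(text):
    """Return (saddr, daddr, dport) tuples for connect ('C') events."""
    edges = []
    for line in text.splitlines():
        parts = line.split()
        # skip the banner and the header row; keep outbound connects only
        if len(parts) >= 8 and parts[0] == "C":
            _, _, _, _, saddr, daddr, _, dport = parts[:8]
            edges.append((saddr, daddr, int(dport)))
    return edges

print(parse_tcptracer(sample_output))
# → [('172.17.0.2', '172.17.0.3', 8080)]
```

Matching the source and destination IPs back to pod IPs from `pods.json` is what links these connection edges into the resource graph.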
- Convert JSON to CSV and a .ngql NebulaGraph query file

```bash
cd data
python3 ../utils/pull_k8s_resources.py
```
You will get `relations.csv` and `graph.ngql` in the `data` folder.
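For intuition, a hypothetical sketch of how relation rows like those in `relations.csv` can be rendered as nGQL INSERT EDGE statements. The CSV columns and the edge types here are illustrative assumptions, not the actual output of `pull_k8s_resources.py`.

```python
import csv
import io

# Hypothetical relations.csv content
sample_csv = """src,relation,dst
default/web-0,runs_on,node/minikube
default/web-0,owned_by,statefulset/default/web
"""

def to_ngql(csv_text):
    """Render each (src, relation, dst) row as an INSERT EDGE statement."""
    stmts = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        stmts.append(
            f'INSERT EDGE `{row["relation"]}`() VALUES '
            f'"{row["src"]}"->"{row["dst"]}":();'
        )
    return stmts

for stmt in to_ngql(sample_csv):
    print(stmt)
```

Note that in a real pipeline the corresponding edge types (and vertex tags for the resources) must be created in the space's schema before these inserts can run.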
- Prepare the test NebulaGraph cluster
With Docker Desktop, we can get a NebulaGraph cluster ready via the NebulaGraph Docker Extension; clicking Install does the work.
- Create a graph space from NebulaGraph Studio

```ngql
CREATE SPACE `k8s` (partition_num = 10, replica_factor = 1, vid_type = FIXED_STRING(256));
USE `k8s`;
```
- Load the `graph.ngql` generated with `pull_k8s_resources.py` into NebulaGraph via the Console of NebulaGraph Studio.
Refer to k8s.ipynb for more details.