kubernetes_rlt.org
minikube for starters

install minikube

https://kubernetes.io/docs/tasks/tools/install-minikube/

install minikube

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube

[admin@TeamCI-1 ~]$ rm -rf .minikube/
[admin@TeamCI-1 ~]$ rm -rf .kube

### for the first time starting, minikube will search the current dir for a .minikube directory; if it is not there, it will download it.
[admin@TeamCI-1 ~]$ minikube start --driver=kvm2

  • minikube v1.11.0 on Centos 7.5.1804
  • Using the kvm2 driver based on user configuration
  • Downloading driver docker-machine-driver-kvm2:
  • minikube 1.12.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.12.1
  • To disable this notice, run: ‘minikube config set WantUpdateNotification false’

    > docker-machine-driver-kvm2.sha256: 65 B / 65 B [——-] 100.00% ? p/s 0s > docker-machine-driver-kvm2: 13.88 MiB / 13.88 MiB 100.00% 804.25 KiB p/s

  • Downloading VM boot image … > minikube-v1.11.0.iso.sha256: 65 B / 65 B [————-] 100.00% ? p/s 0s > minikube-v1.11.0.iso: 174.99 MiB / 174.99 MiB 100.00% 775.82 KiB p/s 3m5
  • Starting control plane node minikube in cluster minikube
  • Downloading Kubernetes v1.18.3 preload … > preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4: 397.34 MiB
  • Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) …

#### for the second time, make sure to execute minikube in the directory which contains the .minikube dir.
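An alternative sketch, assuming your minikube version honors the MINIKUBE_HOME variable (the path below is hypothetical): pin the state directory explicitly so minikube finds the same .minikube regardless of where it is run.

export MINIKUBE_HOME=/home/admin       ### hypothetical directory that holds .minikube
minikube start --driver=kvm2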

start minikube using driver kvm2 on a physical host

verifying kvm works fine

virt-host-validate

modprobe fuse

fix qemu/kvm authentication error

Jul 07 14:07:33 TeamCI-1 libvirtd[1604]: 2020-07-07 06:07:33.936+0000: 1698: error : virPolkitCheckAuth:128 : authentication unavailable: no polkit agent available to authenti…unix.manage' Hint: Some lines were ellipsized, use -l to show in full.
[root@TeamCI-1 ~]# usermod --append --groups libvirt `whoami`
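A minimal follow-up sketch, assuming the libvirt group already exists on the host; pick up the new group membership (re-login or newgrp) and re-verify:

usermod --append --groups libvirt $(whoami)
newgrp libvirt                 ### or log out and back in
id $(whoami) | grep libvirt    ### membership should now be listed
virt-host-validate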

start minikube vm as qemu/kvm

minikube start --driver=<driver_name>

minikube start --driver=kvm2

minikube start as a kvm virtual machine

rm .minikube

[root@TeamCI-1 ~]# virsh list
 Id   Name       State
--------------------------
 3    minikube   running

[root@TeamCI-1 ~]# minikube stop

  • Stopping “minikube” in kvm2 …
  • Node “minikube” stopped.

[root@TeamCI-1 ~]# virsh list
 Id   Name   State
---------------------

[root@TeamCI-1 ~]# virsh list --all
 Id   Name                      State
------------------------------------------
 -    762-ts1-172.24.76.101     shut off
 -    minikube                  shut off
 -    TAS_1116                  shut off
 -    testcos7                  shut off

minikube provisions and manages local Kubernetes clusters optimized for development workflows.

Basic Commands:
  start            Starts a local Kubernetes cluster
  status           Gets the status of a local Kubernetes cluster
  stop             Stops a running local Kubernetes cluster
  delete           Deletes a local Kubernetes cluster
  dashboard        Access the Kubernetes dashboard running within the minikube cluster
  pause            pause Kubernetes
  unpause          unpause Kubernetes

Images Commands:
  docker-env       Configure environment to use minikube's Docker daemon
  podman-env       Configure environment to use minikube's Podman service
  cache            Add, delete, or push a local image into minikube

Configuration and Management Commands:
  addons           Enable or disable a minikube addon
  config           Modify persistent configuration values
  profile          Get or list the current profiles (clusters)
  update-context   Update kubeconfig in case of an IP or port change

Networking and Connectivity Commands:
  service          Returns a URL to connect to a service
  tunnel           Connect to LoadBalancer services

Advanced Commands:
  mount            Mounts the specified directory into minikube
  ssh              Log into the minikube environment (for debugging)
  kubectl          Run a kubectl binary matching the cluster version
  node             Add, remove, or list additional nodes

Troubleshooting Commands:
  ssh-key          Retrieve the ssh identity key path of the specified cluster
  ip               Retrieves the IP address of the running cluster
  logs             Returns logs to debug a local Kubernetes cluster
  update-check     Print current and latest version number
  version          Print the version of minikube

restart minikube

[admin@TeamCI-1 ~]$ minikube delete

Deleting “minikube” in kvm2 …

Removed all traces of the "minikube" cluster.
#### this is optional if there's no config issue
[admin@TeamCI-1 ~]$ minikube config set driver kvm2
! These changes will take effect upon a minikube delete and then a minikube start

[admin@TeamCI-1 ~]$ minikube start --driver=kvm2
minikube v1.12.1 on Centos 7.5.1804
Using the kvm2 driver based on user configuration
Starting control plane node minikube in cluster minikube
Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) …
Found network options:

Preparing Kubernetes v1.18.3 on Docker 19.03.12 …

  • env HTTP_PROXY=http://10.144.1.10:8080
  • env HTTPS_PROXY=http://10.144.1.10:8080
  • env NO_PROXY=localhost,127.0.0.1,10.56.233.135,10.56.233.136,10.56.233.137,10.56.233.138,10.56.233.139,10.56.233.140,10.56.233.175,10.56.233.181,192.168.99.0/24,192.168.39.0/24
  • env NO_PROXY=localhost,127.0.0.1,10.56.233.135,10.56.233.136,10.56.233.137,10.56.233.138,10.56.233.139,10.56.233.140,10.56.233.175,10.56.233.181

Verifying Kubernetes components…
Enabled addons: default-storageclass, storage-provisioner
Done! kubectl is now configured to use "minikube"

the normal status of minikube

[admin@TeamCI-1 ~]$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running

minikube cluster ip address

[admin@TeamCI-1 ~]$ minikube ip
192.168.39.190

add this to the NO_PROXY env:
export NO_PROXY=localhost,127.0.0.1,192.168.99.0/24,192.168.39.0/24

minikube start as a non-root user in airframe
[admin1@allinone ]$ minikube start

ssh into minikube's own docker environment

[admin@TeamCI-1 ~]$ minikube ssh
(minikube welcome banner)

$ docker list docker: ‘list’ is not a docker command. See ‘docker –help’ $ docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES f40bbd6ab6f5 k8s.gcr.io/echoserver “nginx -g ‘daemon of…” 4 minutes ago Up 4 minutes k8s_echoserver_hello-node-7bf657c596-h4d4t_default_a1a613e9-a95a-4764-b642-d5038909acaa_0 276c1d40d7fd k8s.gcr.io/pause:3.2 “/pause” 5 minutes ago Up 5 minutes k8s_POD_hello-node-7bf657c596-h4d4t_default_a1a613e9-a95a-4764-b642-d5038909acaa_0 c1d5c0fed3b6 67da37a9a360 “/coredns -conf /etc…” 37 minutes ago Up 37 minutes k8s_coredns_coredns-66bff467f8-vhc2g_kube-system_66605e1b-9496-4357-a20f-b40fb4f9040a_0 99cabbd8c8d2 k8s.gcr.io/pause:3.2 “/pause” 37 minutes ago Up 37 minutes k8s_POD_coredns-66bff467f8-vhc2g_kube-system_66605e1b-9496-4357-a20f-b40fb4f9040a_0 be078fdc1cc4 4689081edb10 “/storage-provisioner” 37 minutes ago Up 37 minutes k8s_storage-provisioner_storage-provisioner_kube-system_e23c9e99-3314-4305-8b73-a75ee7f43475_0 5e41fe4d9a81 k8s.gcr.io/pause:3.2 “/pause” 37 minutes ago Up 37 minutes k8s_POD_storage-provisioner_kube-system_e23c9e99-3314-4305-8b73-a75ee7f43475_0 47053dec0f02 3439b7546f29 “/usr/local/bin/kube…” 37 minutes ago Up 37 minutes k8s_kube-proxy_kube-proxy-b4tbn_kube-system_645e8602-5a9d-40c5-a331-603587477b8a_0 ef8c5db5a503 k8s.gcr.io/pause:3.2 “/pause” 37 minutes ago Up 37 minutes k8s_POD_kube-proxy-b4tbn_kube-system_645e8602-5a9d-40c5-a331-603587477b8a_0 e6091b092d2d 303ce5db0e90 “etcd –advertise-cl…” 38 minutes ago Up 38 minutes k8s_etcd_etcd-minikube_kube-system_74bf420fdf26a78dc0d1f098bbf3a7d3_0 19f6016083cc 76216c34ed0c “kube-scheduler –au…” 38 minutes ago Up 38 minutes k8s_kube-scheduler_kube-scheduler-minikube_kube-system_dcddbd0cc8c89e2cbf4de5d3cca8769f_0 62fe748b5c1c 7e28efa976bd “kube-apiserver –ad…” 38 minutes ago Up 38 minutes k8s_kube-apiserver_kube-apiserver-minikube_kube-system_a716b5e5aafc5ffc82c175558891ed2a_0 c41589882517 da26705ccb4b “kube-controller-man…” 38 minutes ago Up 38 minutes k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_ba963bc1bff8609dc4fc4d359349c120_0 32007b6c1c86 k8s.gcr.io/pause:3.2 “/pause” 38 minutes ago Up 38 minutes k8s_POD_etcd-minikube_kube-system_74bf420fdf26a78dc0d1f098bbf3a7d3_0 352029ca7d3b k8s.gcr.io/pause:3.2 “/pause” 38 minutes ago Up 38 minutes k8s_POD_kube-scheduler-minikube_kube-system_dcddbd0cc8c89e2cbf4de5d3cca8769f_0 dbfafb5903ef k8s.gcr.io/pause:3.2 “/pause” 38 minutes ago Up 38 minutes k8s_POD_kube-controller-manager-minikube_kube-system_ba963bc1bff8609dc4fc4d359349c120_0 24f67716f5a1 k8s.gcr.io/pause:3.2 “/pause” 38 minutes ago Up 38 minutes k8s_POD_kube-apiserver-minikube_kube-system_a716b5e5aafc5ffc82c175558891ed2a_0

alias kubectl as: alias kubectl="minikube kubectl --"
kube's pods and docker containers

[admin@TeamCI-1 ~]$ kubectl get pods -A
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
default       hello-node-7bf657c596-h4d4t        1/1     Running   0          7m47s
kube-system   coredns-66bff467f8-vhc2g           1/1     Running   0          40m
kube-system   etcd-minikube                      1/1     Running   0          39m
kube-system   kube-apiserver-minikube            1/1     Running   0          39m
kube-system   kube-controller-manager-minikube   1/1     Running   0          39m
kube-system   kube-proxy-b4tbn                   1/1     Running   0          40m
kube-system   kube-scheduler-minikube            1/1     Running   0          39m
kube-system   storage-provisioner                1/1     Running   0          40m

minikube run as root using the force option

[root@TeamCI-1 admin]# minikube start --driver=kvm2 --force=true


kubectl

kubectl check all the elements

[root@TeamCI-1 ~]# kubectl get all -A [root@TeamCI-1 ~]# kubectl get all –all-namespaces ======================================================== NAMESPACE NAME READY STATUS RESTARTS AGE default pod/kibana-kibana-7586487748-kznkr 0/1 Running 0 37m default pod/logstash-logstash-0 1/1 Running 0 43m elastic-system pod/elastic-operator-0 1/1 Running 0 3m8s kube-system pod/coredns-66bff467f8-5c6j7 1/1 Running 0 3h16m kube-system pod/etcd-minikube 1/1 Running 0 3h16m kube-system pod/kube-apiserver-minikube 1/1 Running 0 3h16m kube-system pod/kube-controller-manager-minikube 1/1 Running 0 3h16m kube-system pod/kube-proxy-qdrf8 1/1 Running 0 3h16m kube-system pod/kube-scheduler-minikube 1/1 Running 0 3h16m kube-system pod/storage-provisioner 1/1 Running 0 3h16m

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default service/kibana-kibana ClusterIP 10.97.46.243 <none> 5601/TCP 37m default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h16m default service/logstash-logstash-headless ClusterIP None <none> 9600/TCP 43m kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 3h16m

NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE kube-system daemonset.apps/kube-proxy 1 1 1 1 1 kubernetes.io/os=linux 3h16m

NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE default deployment.apps/kibana-kibana 0/1 1 0 37m kube-system deployment.apps/coredns 1/1 1 1 3h16m

NAMESPACE NAME DESIRED CURRENT READY AGE default replicaset.apps/kibana-kibana-7586487748 1 1 0 37m kube-system replicaset.apps/coredns-66bff467f8 1 1 1 3h16m

NAMESPACE NAME READY AGE default statefulset.apps/logstash-logstash 1/1 43m elastic-system statefulset.apps/elastic-operator 1/1 3h12m =======================================================

delete the statefulset pods

a statefulset pod cannot be removed with "kubectl delete pod"; the pod will just be restarted

[root@TeamCI-1 ~]# kubectl delete statefulset.apps/elastic-operator -n elastic-system
statefulset.apps "elastic-operator" deleted

kubectl create deployment

[root@TeamCI-1 ~]# kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
deployment.apps/hello-node created
[root@TeamCI-1 ~]# export NO_PROXY=localhost,127.0.0.1,10.56.233.135,10.56.233.136,10.56.233.137,10.56.233.138,10.56.233.139,10.56.233.140,10.56.233.175,10.56.233.181,192.168.99.0/24,192.168.39.0/24
[root@TeamCI-1 ~]# kubectl get deployments
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
hello-node   1/1     1            1           98s
[root@TeamCI-1 ~]# kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
hello-node-7bf657c596-tc6ml   1/1     Running   0          119s

get logs of the pod

kubectl logs -f POD-NAME ### get log to check

multiple containers within one pod:

kubectl describe node calico-node-kpxcr


Controlled By:  DaemonSet/calico-node
Containers:
  calico-node:
  …
  install-cni


sudo kubectl logs <podname> -n <namespace> -c <container>
sudo kubectl logs calico-node-kpxcr -n kube-system -c calico-node

describe pod/service/deployment to check errors if the pod has no logs

kubectl describe pod quickstart-es-default-0

get events from the cluster

[root@TeamCI-1 ~]# kubectl get events LAST SEEN TYPE REASON OBJECT MESSAGE 2m12s Normal Scheduled pod/hello-node-7bf657c596-tc6ml Successfully assigned default/hello-node-7bf657c596-tc6ml to minikube 2m10s Normal Pulling pod/hello-node-7bf657c596-tc6ml Pulling image “k8s.gcr.io/echoserver:1.4” 73s Normal Pulled pod/hello-node-7bf657c596-tc6ml Successfully pulled image “k8s.gcr.io/echoserver:1.4” 70s Normal Created pod/hello-node-7bf657c596-tc6ml Created container echoserver 69s Normal Started pod/hello-node-7bf657c596-tc6ml Started container echoserver 2m12s Normal SuccessfulCreate replicaset/hello-node-7bf657c596 Created pod: hello-node-7bf657c596-tc6ml 2m12s Normal ScalingReplicaSet deployment/hello-node Scaled up replica set hello-node-7bf657c596 to 1 4m30s Normal NodeHasSufficientMemory node/minikube Node minikube status is now: NodeHasSufficientMemory 4m30s Normal NodeHasNoDiskPressure node/minikube Node minikube status is now: NodeHasNoDiskPressure 4m30s Normal NodeHasSufficientPID node/minikube Node minikube status is now: NodeHasSufficientPID 3m45s Normal RegisteredNode node/minikube Node minikube event: Registered Node minikube in Controller 3m44s Normal Starting node/minikube Starting kubelet. 3m44s Normal NodeHasSufficientMemory node/minikube Node minikube status is now: NodeHasSufficientMemory 3m44s Normal NodeHasNoDiskPressure node/minikube Node minikube status is now: NodeHasNoDiskPressure 3m44s Normal NodeHasSufficientPID node/minikube Node minikube status is now: NodeHasSufficientPID 3m43s Normal NodeAllocatableEnforced node/minikube Updated Node Allocatable limit across pods 3m39s Normal Starting node/minikube Starting kube-proxy. 3m33s Normal NodeReady node/minikube Node minikube status is now: NodeReady [root@TeamCI-1 ~]# kubectl config view apiVersion: v1 clusters:

contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: root.minikube/profiles/minikube/client.crt
    client-key: root.minikube/profiles/minikube/client.key

expose deployment service port

[root@TeamCI-1 ~]# kubectl expose deployment hello-node --type=LoadBalancer --port=8080
service/hello-node exposed
[root@TeamCI-1 ~]# kubectl expose deployment helm-kibana-default-kibana --type=LoadBalancer --name=kiba-expose

forward the port

kubectl port-forward service/quickstart-es-http 9200
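A usage sketch for the forward above; the local:remote mapping and the curl check are illustrative assumptions (the quickstart service may require TLS and credentials depending on how it was deployed):

kubectl port-forward service/quickstart-es-http 9200:9200 &
curl -k https://localhost:9200        ### illustrative check against the forwarded port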

kubectl get services / minikube service detail

[root@TeamCI-1 ~]# kubectl get services
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
hello-node   LoadBalancer   10.103.104.210   <pending>     8080:32015/TCP   22s
kubernetes   ClusterIP      10.96.0.1        <none>        443/TCP          5m17s
[root@TeamCI-1 ~]# minikube service hello-node

Opening service default/hello-node in default browser…

NAMESPACE   NAME         TARGET PORT   URL
default     hello-node   8080          http://192.168.39.87:32015

START /usr/bin/firefox "http://192.168.39.87:32015"

delete service

[root@TeamCI-1 ~]# kubectl delete service hello-node
service "hello-node" deleted
[root@TeamCI-1 ~]# kubectl get services
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   29m

delete deployment

[root@TeamCI-1 ~]# kubectl get deployment
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
hello-node   1/1     1            1           29m
[root@TeamCI-1 ~]# kubectl delete deployment hello-node
deployment.apps "hello-node" deleted

using yaml to deploy a docker container

kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml
=====================================================================
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: frontend
  labels:
    app: guestbook
spec:
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v4
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 80
================================

using yaml to create services

kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml
=========
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: NodePort
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
========================

kubectl get deployment, service and pods

[admin@TeamCI-1 root]$ kubectl get deployment NAME READY UP-TO-DATE AVAILABLE AGE frontend 2/2 2 2 9m18s redis-master 1/1 1 1 19m redis-slave 2/2 2 2 13m [admin@TeamCI-1 root]$ kubectl get service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE frontend NodePort 10.101.112.250 <none> 80:31543/TCP 5m44s kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 21m redis-master ClusterIP 10.101.146.172 <none> 6379/TCP 14m redis-slave ClusterIP 10.96.66.7 <none> 6379/TCP 10m [admin@TeamCI-1 root]$ kubectl get pods NAME READY STATUS RESTARTS AGE frontend-56fc5b6b47-cnql6 1/1 Running 0 9m39s frontend-56fc5b6b47-lwzxp 1/1 Running 0 9m39s redis-master-6b54579d85-k5pd6 1/1 Running 0 19m redis-slave-799788557c-2j57d 1/1 Running 0 13m redis-slave-799788557c-pg87w 1/1 Running 0 13m [admin@TeamCI-1 root]$ [admin@TeamCI-1 root]$ kubectl delete frontend error: the server doesn’t have a resource type “frontend” [admin@TeamCI-1 root]$ kubectl delete deployment frontend deployment.apps “frontend” deleted [admin@TeamCI-1 root]$ kubectl get service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE frontend NodePort 10.101.112.250 <none> 80:31543/TCP 6m48s kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 22m redis-master ClusterIP 10.101.146.172 <none> 6379/TCP 15m redis-slave ClusterIP 10.96.66.7 <none> 6379/TCP 11m [admin@TeamCI-1 root]$ kubectl delete service frontend service “frontend” deleted [admin@TeamCI-1 root]$ kubectl get pods NAME READY STATUS RESTARTS AGE redis-master-6b54579d85-k5pd6 1/1 Running 0 21m redis-slave-799788557c-2j57d 1/1 Running 0 14m redis-slave-799788557c-pg87w 1/1 Running 0 14m

kubectl scale

kubectl scale deployment frontend --replicas=5
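A related sketch: instead of scaling by hand, kubectl can attach a HorizontalPodAutoscaler to the same deployment; the thresholds below are illustrative assumptions.

kubectl autoscale deployment frontend --min=2 --max=5 --cpu-percent=80
kubectl get hpa frontend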

minikube metrics-server for kubectl top

$ git clone https://github.com/kodekloudhub/kubernetes-metrics-server.git
$ kubectl create -f kubernetes-metrics-server/

[root@TeamCI-1 ~]# kubectl top pods logstash-logstash-0
NAME                  CPU(cores)   MEMORY(bytes)
logstash-logstash-0   17m          489Mi
[root@TeamCI-1 ~]# kubectl top pods elastic-operator-0 -n elastic-system
NAME                 CPU(cores)   MEMORY(bytes)
elastic-operator-0   6m           16Mi
[root@TeamCI-1 ~]# kubectl top node minikube
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
minikube   403m         20%    1994Mi          36%

[root@host-192-168-82-11 ~]# docker exec -it 63ef9af3464d curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
{
  "error" : {
    "root_cause" : [
      {
        "type" : "master_not_discovered_exception",
        "reason" : null
      }
    ],
    "type" : "master_not_discovered_exception",
    "reason" : null
  },
  "status" : 503
}

minikube driver=none

[root@host-192-168-82-11 ~]# minikube service kibana-kibana -n default

service default/kibana-kibana has no node port
NAMESPACE   NAME            TARGET PORT   URL
default     kibana-kibana   No node port

[root@host-192-168-82-11 ~]# kubectl describe service/kibana-kibana Name: kibana-kibana Namespace: default Labels: app=kibana app.kubernetes.io/managed-by=Helm heritage=Helm release=kibana Annotations: meta.helm.sh/release-name: kibana meta.helm.sh/release-namespace: default Selector: app=kibana,release=kibana Type: ClusterIP IP: 10.106.225.62 Port: http 5601/TCP TargetPort: 5601/TCP Endpoints: 172.17.0.4:5601 Session Affinity: None Events: <none>

[root@host-192-168-82-11 ~]# export no_proxy=192.168.82.11,172.17.0.4
[root@host-192-168-82-11 ~]# curl 172.17.0.4:5601
[root@host-192-168-82-11 ~]# curl -L 172.17.0.4:5601
<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"/><meta name="viewport" content="width=device-width"/><title>Elastic</title><style>

$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
helm install elasticsearch ./helm-charts/elasticsearch --set imageTag=8.0.0-SNAPSHOT
helm uninstall elasticsearch

kubectl label nodes master nodetype=master
[vagrant@master multi]$ kubectl get node master --show-labels

NAME STATUS ROLES AGE VERSION LABELS master Ready master 3d19h v1.18.6 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/master=

kubectl get pods -o wide ### list which node each pod is running on

cat st.yaml
=================
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
====================
cat > storageClass.yaml << EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: my-local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
EOF

kubectl create -f storageClass.yaml


[vagrant@master ~]$ kubectl apply -f st1.yaml
storageclass.storage.k8s.io/local-storage1 created

cat > persistentVolume.yaml << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-local-pv
spec:
  capacity:
    storage: 500Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: my-local-storage
  local:
    path: /mnt/disk/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1
EOF

Note: You might need to replace the hostname value 'node1' in the nodeAffinity section with the name of the node that matches your environment.

The 'hostPath' we had defined in our last blog post is replaced by the so-called 'local path'.

Similar to what we have done in case of a hostPath volume in our last blog post, we need to prepare the volume on node1, before we create the persistent local volume on the master:

DIRNAME="vol1"
mkdir -p /mnt/disk/$DIRNAME
chcon -Rt svirt_sandbox_file_t /mnt/disk/$DIRNAME
chmod 777 /mnt/disk/$DIRNAME

kubectl create -f persistentVolume.yaml

The output should look like follows:

persistentvolume/my-local-pv created

cat > persistentVolumeClaim.yaml << EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: my-local-storage
  resources:
    requests:
      storage: 500Gi
EOF

kubectl create -f persistentVolumeClaim.yaml
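A minimal sketch of a pod that consumes the claim above; the pod name, image, and mount path are illustrative assumptions, only claimName my-claim comes from the PVC just created.

cat > pvc-consumer.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: pvc-consumer                  # hypothetical name
spec:
  containers:
  - name: app
    image: busybox                    # illustrative image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data                # illustrative mount path
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-claim             # the PVC created above
EOF
kubectl create -f pvc-consumer.yaml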

install kubernetes cluster

using kubeadm to install the kubernetes cluster's nodes on centos7

make kubectl work normally after installing kubeadm on the master node

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

master: ++ kubeadm init --ignore-preflight-errors=SystemVerification --apiserver-advertise-address=192.168.26.10 --pod-network-cidr=10.244.0.0/16 --token lesi2r.bg6wsvtsd24u26qi --token-ttl 0
cat /etc/default/kubelet
KUBELET_EXTRA_ARGS=--node-ip=192.168.26.10
systemctl daemon-reload
systemctl restart kubelet.service

join the cluster from the worker nodes

kubeadm join --ignore-preflight-errors=SystemVerification --discovery-token-unsafe-skip-ca-verification --token lesi2r.bg6wsvtsd24u26qi 192.168.26.10:6443
sudo kubeadm join --ignore-preflight-errors=SystemVerification --discovery-token-unsafe-skip-ca-verification --token nffian.5ggnuftoceqow9zv 192.168.26.10:6443

check the join command on the master

kubeadm token create --print-join-command
W0709 15:14:23.995320 7155 kubelet.go:200] cannot automatically set CgroupDriver when starting the Kubelet: cannot execute 'docker info -f {{.CgroupDriver}}': exit status 2
W0709 15:14:24.002418 7155 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join 10.69.151.33:6443 --token gxsxle.smctjsn9l6ppowto --discovery-token-ca-cert-hash sha256:edf7aff939b30ac9fe1e79e3bfefc7c690538db88c65e6d681297da5df64151c
ubuntu@node3:~$

restart the whole kubernetes cluster

sudo kubeadm reset -f                                 #### on the master node
sudo kubeadm init --pod-network-cidr=10.244.0.0/16    ### start on master
kubeadm join …..                                      #### on worker nodes, rejoin the new init process of the master

vagrant plugin install vagrant-proxyconf vagrant-libvirt vagrant-sshfs vagrant-reload --plugin-clean-sources --plugin-source https://rubygems.org
wget https://releases.hashicorp.com/vagrant/2.2.9/vagrant_2.2.9_x86_64.rpm
rpm -ivh vagrant_2.2.9_x86_64.rpm
vagrant plugin install vagrant-libvirt
vagrant plugin install vagrant-sshfs

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

install kubernetes using vagrant

wget https://releases.hashicorp.com/vagrant/2.2.9/vagrant_2.2.9_x86_64.rpm
rpm -ivh vagrant_2.2.9_x86_64.rpm
vagrant plugin install vagrant-libvirt
vagrant plugin install vagrant-sshfs

proxy configuration for vagrant/docker

vagrant will apply these proxy configurations to the docker and kubernetes pods as well.

vagrant proxy plugin

vagrant plugin install vagrant-proxyconf

vagrant proxy configuration file

[root@175 libvirt]# cat ~/.vagrant.d/Vagrantfile
Vagrant.configure("2") do |config|
  if Vagrant.has_plugin?("vagrant-proxyconf")
    config.proxy.http     = "http://10.144.1.10:8080/"
    config.proxy.https    = "http://10.144.1.10:8080/"
    config.proxy.no_proxy = "localhost,127.0.0.1,192.168.26.10,10.96.0.0/12,10.244.0.0/16,.example.com"
  end
end

install kubernetes cluster (docker-es, kubeadm, kubelet, canal…) in a qemu/kvm centos7 image

https://technology.amis.nl/2020/04/30/quick-and-easy-a-multi-node-kubernetes-cluster-on-centos-7-qemu-kvm-libvirt/

vagrant command sets

vagrant global-status

show the virtual machines vagrant started

vagrant destroy <vm>

vagrant box

vagrant box list
vagrant box add

vagrant add a local box file

vagrant box add –provider libvirt –name generic/ubuntu1804 file:///root/cmmp2/libvirt.box

vagrant box remove generic/ubuntu1804

Install QEMU/KVM + libvirt

We are going to use QEMU/KVM and access it through libvirt. Why? Because I want to approach bare metal performance as much as I can and QEMU/KVM does a good job at that. See for example this performance comparison of bare metal vs KVM vs Virtualbox. KVM greatly outperforms Virtualbox and approaches bare metal speeds in quite some tests. I do like the Virtualbox GUI though but I can live with the Virtual Machine Manager. The following will do the trick on CentOS 7:

sudo yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python libvirt-client virt-install libvirt-devel

Install Vagrant and required plugins

Vagrant is used to create the virtual machines for the master and nodes. Vagrant can easily be installed from here. It even has a CentOS specific RPM which is nice. With Vagrant I'm going to use two plugins: vagrant-libvirt and vagrant-sshfs. The first plugin allows vagrant to manage QEMU/KVM VMs through libvirt. The second plugin will be used for shared folders. Why sshfs? Mainly because libvirt shared folder alternatives such as NFS and 9p were more difficult to set-up and I wanted to be able to provide the same shared storage to all VMs.

wget https://releases.hashicorp.com/vagrant/2.2.9/vagrant_2.2.9_x86_64.rpm
rpm -ivh vagrant_2.2.9_x86_64.rpm
vagrant plugin install vagrant-libvirt
vagrant plugin install vagrant-sshfs

Install kubectl

First install kubectl on the host. This is described in detail here.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
sudo yum install -y kubectl

Create the VMs

Execute the following under a normal user (you also installed the Vagrant plugins under this user).

git clone https://github.com/MaartenSmeets/k8s-vagrant-multi-node.git
cd k8s-vagrant-multi-node
mkdir data/shared

make up -j 3 BOX_OS=centos VAGRANT_DEFAULT_PROVIDER=libvirt NODE_CPUS=2 NODE_COUNT=1 MASTER_CPUS=4 MASTER_MEMORY_SIZE_GB=3 NODE_MEMORY_SIZE_GB=3

make up -j 3 BOX_OS=centos VAGRANT_DEFAULT_PROVIDER=libvirt NODE_CPUS=3 NODE_COUNT=2 MASTER_CPUS=4 MASTER_MEMORY_SIZE_GB=9 NODE_MEMORY_SIZE_GB=6
make up -j 3 VAGRANT_DEFAULT_PROVIDER=libvirt BOX_OS=ubuntu KUBE_NETWORK=calico NODE_CPUS=8 NODE_COUNT=5 MASTER_CPUS=8 MASTER_MEMORY_SIZE_GB=16 NODE_MEMORY_SIZE_GB=40 DISK_SIZE_GB=100

[root@175 cmm]# virsh list
 Id   Name                            State
-----------------------------------------------
 3    k8s-vagrant-multi-node_master   running
 4    k8s-vagrant-multi-node_node2    running
 5    k8s-vagrant-multi-node_node1    running

try editing /etc/ssh/sshd_config:
PasswordAuthentication yes

clean the env for a new vagrant

vagrant show the vm it started

vagrant global-status
id  name  …

ssh into vm by vagrant

vagrant ssh <id in previous cmd>

destroy the vm with id

vagrant destroy <id in status cmd>

==> master: Rsyncing folder: root/k8s-vagrant-multi-node/data/ubuntu-master => /data
==> master: Installing SSHFS client…
==> master: Mounting SSHFS shared folder…
==> master: Mounting folder via SSHFS: /root/k8s-vagrant-multi-node/data/shared => /shared

master: ++ KUBELET_EXTRA_ARGS_FILE=/etc/default/kubelet
master: ++ '[' -f /etc/default/kubelet ']'
master: ++ echo 'KUBELET_EXTRA_ARGS=--node-ip=192.168.26.10 '
master: ++ systemctl daemon-reload
master: ++ systemctl restart kubelet.service
master: /root
master: ++ echo root
master: ++ mkdir -p /root.kube
master: ++ cp -Rf etc/kubernetes/admin.conf /root.kube/config
master: +++ id -u
master: +++ id -g
master: ++ chown 0:0 root.kube/config

vagrant@master:~$ mkdir .kube
vagrant@master:~$ sudo cp /etc/kubernetes/admin.conf .kube/config
vagrant@master:~$ sudo chown vagrant:vagrant .kube/config

To remove a qemu/kvm domain

vagrant destroy
virsh list --all
virsh destroy <THE_MACHINE>
virsh undefine <THE_MACHINE> --snapshots-metadata --managed-save
virsh vol-list default
virsh vol-delete --pool default <THE_VOLUME>

1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
[vagrant@master ~]$ kubectl describe node master |grep -i taint
Taints: node-role.kubernetes.io/master:NoSchedule

tolerations:
- key: "node-role.kubernetes.io/master"
  operator: "Exists"
  effect: "NoSchedule"

kubectl help

Basic Commands (Beginner):
  create         Create a resource from a file or from stdin.
  expose         Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service
  run            Run a particular image on the cluster
  set            Set specific features on objects

Basic Commands (Intermediate):
  explain        Documentation of resources
  get            Display one or many resources
  edit           Edit a resource on the server
  delete         Delete resources by filenames, stdin, resources and names, or by resources and label selector

Deploy Commands:
  rollout        Manage the rollout of a resource
  scale          Set a new size for a Deployment, ReplicaSet or Replication Controller
  autoscale      Auto-scale a Deployment, ReplicaSet, or ReplicationController

Cluster Management Commands:
  certificate    Modify certificate resources.
  cluster-info   Display cluster info
  top            Display Resource (CPU/Memory/Storage) usage.
  cordon         Mark node as unschedulable
  uncordon       Mark node as schedulable
  drain          Drain node in preparation for maintenance
  taint          Update the taints on one or more nodes

Troubleshooting and Debugging Commands:
  describe       Show details of a specific resource or group of resources
  logs           Print the logs for a container in a pod
  attach         Attach to a running container
  exec           Execute a command in a container
  port-forward   Forward one or more local ports to a pod
  proxy          Run a proxy to the Kubernetes API server
  cp             Copy files and directories to and from containers.
  auth           Inspect authorization

Advanced Commands:
  diff           Diff live version against would-be applied version
  apply          Apply a configuration to a resource by filename or stdin
  patch          Update field(s) of a resource using strategic merge patch
  replace        Replace a resource by filename or stdin
  wait           Experimental: Wait for a specific condition on one or many resources.
  convert        Convert config files between different API versions
  kustomize      Build a kustomization target from a directory or a remote url.

Settings Commands:
  label          Update the labels on a resource
  annotate       Update the annotations on a resource
  completion     Output shell completion code for the specified shell (bash or zsh)

Other Commands:
  alpha          Commands for features in alpha
  api-resources  Print the supported API resources on the server
  api-versions   Print the supported API versions on the server, in the form of "group/version"
  config         Modify kubeconfig files
  plugin         Provides utilities for interacting with plugins.
  version        Print the client and server version information

apply or delete resources defined in yaml files

kubectl apply -f <name.yaml>
kubectl delete -f <name.yaml>

label node with label

label node

kubectl label nodes <nodename> <label>=<labelvalue>
kubectl label nodes master nodetype=master

show label

[vagrant@master multi]$ kubectl get node master --show-labels
NAME     STATUS   ROLES    AGE     VERSION   LABELS
master   Ready    master   3d19h   v1.18.6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/master=

remove a label

kubectl label nodes <nodename> <label>-
kubectl label nodes master nodetype-
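Once a node carries a label, a pod can be steered to it with nodeSelector; a minimal sketch reusing the nodetype=master label from above (pod name and image are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: on-labeled-node               # hypothetical name
spec:
  nodeSelector:
    nodetype: master                  # label applied above
  containers:
  - name: app
    image: busybox                    # illustrative image
    command: ["sh", "-c", "sleep 3600"]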

get <resourcetype> <resource name>

kubectl get <resourcetype> <resource name> -n <namespace> [-o yaml]   ### -o yaml will output all the defined fields of the resource
kubectl get deployment cmm-operator -n cmm-cd -o yaml > myoperator.yaml

kubectl get PersistentVolume -n cmm-cd

get all the resources in a namespace

kubectl get all -n kube-system =================================== pod/local-path-provisioner-7bf96f54f5-w6v78 1/1 Running 0 15m

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 41m

NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE kube-system daemonset.apps/kube-proxy 6 6 6 6 6 kubernetes.io/os=linux 41m

NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE ocal-path-storage deployment.apps/local-path-provisioner 1/1 1 1 15m

NAMESPACE NAME DESIRED CURRENT READY AGE kube-system replicaset.apps/consul-server-795879879c 1 1 1 7m7s ======================================= resource type is pod, service daemonset, deployment, replicaset…

get all api-resources in the cluster

ubuntu@node3:~$ kubectl api-resources –verbs=list -n kube-sytem NAME SHORTNAMES APIGROUP NAMESPACED KIND componentstatuses cs false ComponentStatus configmaps cm true ConfigMap endpoints ep true Endpoints events ev true Event limitranges limits true LimitRange namespaces ns false Namespace nodes no false Node persistentvolumeclaims pvc true PersistentVolumeClaim persistentvolumes pv false PersistentVolume pods po true Pod podtemplates true PodTemplate replicationcontrollers rc true ReplicationController resourcequotas quota true ResourceQuota secrets true Secret serviceaccounts sa true ServiceAccount services svc true Service mutatingwebhookconfigurations admissionregistration.k8s.io false MutatingWebhookConfiguration validatingwebhookconfigurations admissionregistration.k8s.io false ValidatingWebhookConfiguration customresourcedefinitions crd,crds apiextensions.k8s.io false CustomResourceDefinition apiservices apiregistration.k8s.io false APIService controllerrevisions apps true ControllerRevision daemonsets ds apps true DaemonSet deployments deploy apps true Deployment replicasets rs apps true ReplicaSet statefulsets sts apps true StatefulSet horizontalpodautoscalers hpa autoscaling true HorizontalPodAutoscaler cronjobs cj batch true CronJob jobs batch true Job certificatesigningrequests csr certificates.k8s.io false CertificateSigningRequest cmms cmm.nokia.com true CMM leases coordination.k8s.io true Lease endpointslices discovery.k8s.io true EndpointSlice events ev events.k8s.io true Event ingresses ing extensions true Ingress network-attachment-definitions net-attach-def k8s.cni.cncf.io true NetworkAttachmentDefinition nodes metrics.k8s.io false NodeMetrics pods metrics.k8s.io true PodMetrics ingressclasses networking.k8s.io false IngressClass ingresses ing networking.k8s.io true Ingress networkpolicies netpol networking.k8s.io true NetworkPolicy runtimeclasses node.k8s.io false RuntimeClass poddisruptionbudgets pdb policy true PodDisruptionBudget podsecuritypolicies psp policy false PodSecurityPolicy clusterrolebindings rbac.authorization.k8s.io false ClusterRoleBinding clusterroles rbac.authorization.k8s.io false ClusterRole rolebindings rbac.authorization.k8s.io true RoleBinding roles rbac.authorization.k8s.io true Role priorityclasses pc scheduling.k8s.io false PriorityClass csidrivers storage.k8s.io false CSIDriver csinodes storage.k8s.io false CSINode storageclasses sc storage.k8s.io false StorageClass volumeattachments storage.k8s.io false VolumeAttachment ippools whereabouts.cni.cncf.io true IPPool ubuntu@node3:~$

get a specific api-resource

kubectl get pv (PersistentVolume) ubuntu@lm905-Master:~$ kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-036b6f82-9e2a-455a-8a66-a73d0f500d05 5Gi RWO Delete Bound npv-ate/local-path-npv-ate-necc-data-pcmd-ate-qa45g-necc-0 local-path 28h pvc-0d042711-3481-43b1-9ab9-d553227c2984 1Gi RWO Delete Bound npv-ate/local-path-npv-ate-necc-data-kafka-ate-qa45g-necc-1 local-path 28h pvc-0fdf0bef-2a2c-48f3-a6d0-ac8ed72bf67a 1Gi RWO Delete Bound npv-ate/local-path-npv-ate-necc-data-charging-ate-qa45g-necc-2 local-path 28h pvc-128b1cb8-e9a9-4290-82b5-45255ec5598a 1Gi RWO Delete Bound npv-ate/local-path-npv-ate-necc-data-logs-ate-qa45g-necc-2 local-path 28h pvc-2add2f2e-f7e3-4a08-bc3a-48e6e792bf8c 1Gi RWO Delete Bound npv-ate/local-path-npv-ate-ctcs-data-redis-ate-qa45g-ctcs-0 local-path 28h

permission forbidden to get some api-resource

All kubernetes resource permissions are granted through clusterrolebindings and clusterroles.

describe <resource type> or <resource api>

kubectl describe ippools -A kubectl describe pods <podname> -n <namesapce>

kubernetes taints

add a taint

kubectl taint nodes node1 key=value:NoSchedule

remove a taint

kubectl taint nodes node1 key:NoSchedule-

[vagrant@master multi]$ kubectl describe nodes master |grep -i taint
Taints:             node-role.kubernetes.io/master:NoSchedule
[vagrant@master multi]$ kubectl taint nodes master node-role.kubernetes.io/master:NoSchedule-
node/master untainted

schedule the pod onto a node with a taint (tolerations)

tolerations:
- key: "key"
  operator: "Equal"
  value: "value"
  effect: "NoSchedule"

tolerations:
- key: "key"
  operator: "Exists"
  effect: "NoSchedule"
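A toleration only takes effect inside a pod (or pod template) spec; a minimal sketch combining it with the master taint shown earlier (pod name and image are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: tolerate-master-demo          # hypothetical name
spec:
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"
  containers:
  - name: app
    image: busybox                    # illustrative image
    command: ["sh", "-c", "sleep 3600"]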

kubectl shell into pods

kubectl exec shell-demo env

Experiment with running other commands. Here are some examples:

kubectl exec shell-demo -- ps aux
kubectl exec shell-demo -- ls /
kubectl exec shell-demo -- cat /proc/1/mounts

Opening a shell when a Pod has more than one container

If a Pod has more than one container, use --container or -c to specify a container in the kubectl exec command. For example, suppose you have a Pod named my-pod, and the Pod has two containers named main-app and helper-app. The following command would open a shell to the main-app container.

kubectl exec -i -t my-pod --container main-app -- /bin/bash

scale in/out the resource

kubectl scale –replicas=2 statefulset.apps/multi-data

ls /sys/class/net
kubectl get nodes --selector=kubernetes.io/role!=master -o jsonpath={.items[*].status.addresses[?\(@.type=="InternalIP"\)].address}

kubectl modify the setting of the resource

kubectl can patch the "spec" of a resource

kubectl apply <updated_yaml_file>

this method is only available if you originally applied the resource from yaml files

output of get (get all the fields of a resource)

kubectl get hpa emms -n npv-cmm-23 -o json > <update_yaml_file>

edit the update_yaml_file

kubectl apply -f <update_yaml_file>

kubectl edit

kubectl edit <resourcename>   ### save and exit

kubectl patch

hpa horizontalpodautoscaler

kubectl patch with json

$kubectl get hpa emms -n npv-cmm-23 -o json
"spec": {
    "maxReplicas": 8,
    "minReplicas": 2,
    "scaleTargetRef": {
        "apiVersion": "apps/v1",
        "kind": "StatefulSet",
        "name": "qa-23-emms"
    },
    "targetCPUUtilizationPercentage": 80
},
$kubectl patch hpa amms -n npv-cmm-13 --patch '{"spec":{"minReplicas":3, "maxReplicas":5}}'

patch statefulset for updateStrategy

$kubectl get statefulset qa-13-necc -n npv-cmm-13 -o json
"spec": {
    "updateStrategy": {
        "rollingUpdate": {
            "partition": 0
        },
        "type": "RollingUpdate"
    }
},
$kubectl patch statefulset necc -p '{"spec": {"updateStrategy": {"rollingUpdate": {"partition": 0}, "type": "RollingUpdate"}}}'

patch statefulset for replica

kubectl describe statefulset
Replicas: 0 desired | 0 total
kubectl patch statefulsets <stateful-set-name> -p '{"spec":{"replicas":1}}'

kubectl describe statefulset
Replicas: 1 desired | 0 total
Events:
Type     Reason        Age              From                    Message
----     ------        ----             ----                    -------
Warning  FailedCreate  3s (x9 over 4s)  statefulset-controller  create Pod cmm-qa45g-paps-0 in StatefulSet cmm-qa45g-paps failed error: Pod "cmm-qa45g-paps-0" is invalid: spec.containers[0].image: Required value

scaling the statefulset with replicas

kubectl scale statefulset web --replicas=1

kubectl patch using patch file

cat patch-file-2.yaml
====================================================
spec:
  template:
    spec:
      containers:
      - name: patch-demo-ctr-3
        image: gcr.io/google-samples/node-hello:1.0
=======================================================
In your patch command, set type to merge:

kubectl patch deployment patch-demo --patch "$(cat patch-file-2.yaml)"

merge patch with file

kubectl patch deployment patch-demo --type merge --patch "$(cat patch-file-2.yaml)"
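Besides strategic and merge patches, kubectl also accepts --type json with a list of JSON-patch (RFC 6902) operations; a small sketch against the same demo deployment, where the path and value are illustrative assumptions:

kubectl patch deployment patch-demo --type json --patch '[{"op": "replace", "path": "/spec/replicas", "value": 3}]'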

delete a node

On Master Node

#Find the node#
kubectl get nodes

#Drain it#
kubectl drain nodetoberemoved
kubectl drain <node_name> [--ignore-daemonsets --force --delete-local-data]
node "<node_name>" cordoned

#### delete a pod that can't be deleted
kubectl delete pod <pod_name> -n=<namespace> --grace-period=0 --force

#Delete it#
kubectl delete node nodetoberemoved

On Worker Node (nodetoberemoved).

#Remove join/init setting from node
kubeadm reset

or restart the kubelet service
systemctl restart kubelet
systemctl status kubelet
Active: active (running) since …..

on master node

kubectl uncordon <node_name>   ### if you didn't run kubeadm reset
kubeadm join                   ### if you ran kubeadm reset

k8s selector

statefulset definition

vagrant@master:~$ kubectl describe gashpc -n gashpc error: the server doesn’t have a resource type “gashpc” vagrant@master:~$ kubectl describe sts gashpc -n gashpc Name: gashpc Namespace: gashpc CreationTimestamp: Tue, 10 Aug 2021 09:18:50 +0000 Selector: app=gashpc Labels: <none> Annotations: <none> Replicas: 1 desired | 1 total Update Strategy: RollingUpdate Partition: 0 Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=gashpc Annotations: k8s.v1.cni.cncf.io/networks: signaling@eth3,cmm-internal@eth2 Containers: gashpc: Image: 192.168.26.10:5000/gashpc:latest Port: <none> Host Port: <none> Environment: <none> Mounts: /gashpc from gashpc-storage (rw) Volumes: <none> Volume Claims: Name: gashpc-storage StorageClass: local-storage Labels: <none> Annotations: <none> Capacity: 100Mi Access Modes: [ReadWriteOnce] Events: <none> vagrant@master:~$

pod labels and annotations

agrant@master:~$ kubectl describe pod gashpc-0 -n gashpc Name: gashpc-0 Namespace: gashpc Priority: 0 Node: node2/192.168.26.12 Start Time: Tue, 10 Aug 2021 09:18:50 +0000 Labels: app=gashpc controller-revision-hash=gashpc-66886df994 statefulset.kubernetes.io/pod-name=gashpc-0 Annotations: k8s.v1.cni.cncf.io/network-status: [{ “name”: “k8s-pod-network”, “ips”: [ “10.244.104.8”, “fdf9:b572:a5a1:be5:3f6f:1943:1663:dfc8” ], “default”: true, “dns”: {} },{ “name”: “signaling”, “interface”: “eth3”, “ips”: [ “172.16.0.22”, “172.16.0.42”, “172.16.0.67”, “172.16.0.92”, “172.16.0.117”, “172.16.0.142”, “172.16.0.167”, “172.16.0.192”, “172.16.0.217”, “172.16.0.242”, “172.16.0.254” ], “mac”: “52:54:00:fb:93:3f”, “dns”: {}

kubectl troubleshoot events or logs

kubectl get events -A

kubectl describe events -n <namespace>

ubuntu@node3:~$ kubectl describe events cmm-qa45g-pap -n cmm-cd Name: cmm-qa45g-paps.16911862cf2fe0fa Namespace: cmm-cd Labels: <none> Annotations: <none> API Version: v1 Count: 9 #####events couldn’t be viewed as timestamp since it will count number of this kind of events Event Time: <nil> First Timestamp: 2021-07-12T16:35:53Z #### only latest timestamp Involved Object: API Version: apps/v1 Kind: StatefulSet Name: cmm-qa45g-paps Namespace: cmm-cd Resource Version: 434451 UID: 2849aa35-fbb0-47cb-a244-1ed33ff7c492 Kind: Event Last Timestamp: 2021-07-12T16:35:54Z Message: create Pod cmm-qa45g-paps-0 in StatefulSet cmm-qa45g-paps failed error: Pod “cmm-qa45g-paps-0” is invalid: spec.containers[0].image: Required value Metadata: Creation Timestamp: 2021-07-12T16:35:53Z Managed Fields: API Version: v1

describe events and filter events name

this will filter events whose name begins with cmm-qa45g-pap
ubuntu@node3:~$ kubectl describe events cmm-qa45g-pap -n cmm-cd
Name:         cmm-qa45g-paps.16911862cf2fe0fa

journalctl -u kubelet #### get kubelet service log

ubuntu@node3:~$ journalctl -u kubelet |head – Logs begin at Fri 2021-06-18 03:36:04 UTC, end at Mon 2021-07-12 16:17:01 UTC. – Jun 24 03:11:51 node3 systemd[1]: Started kubelet: The Kubernetes Node Agent. Jun 24 03:11:52 node3 systemd[1]: kubelet.service: Current command vanished from the unit file, execution of the command list won’t be resumed. Jun 24 03:11:52 node3 systemd[1]: Stopping kubelet: The Kubernetes Node Agent… Jun 24 03:11:52 node3 systemd[1]: Stopped kubelet: The Kubernetes Node Agent. Jun 24 03:11:52 node3 systemd[1]: Started kubelet: The Kubernetes Node Agent. Jun 24 03:11:52 node3 kubelet[26249]: F0624 03:11:52.457346 26249 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file “/var/lib/kubelet/config.yaml”, error: open /var/lib/kubelet/config.yaml: no such file or directory Jun 24 03:11:52 node3 kubelet[26249]: goroutine 1 [running]: Jun 24 03:11:52 node3 kubelet[26249]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc0001a4001, 0xc0001b6840, 0xfb, 0x14d) Jun 24 03:11:52 node3 kubelet[26249]: /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:996 +0xb9

grep all journal log for kube

ubuntu@node3:~$ journalctl |grep kube Jun 29 08:07:45 node3 kubelet[1376]: E0629 08:07:45.535276 1376 pod_workers.go:191] Error syncing pod 617dd051-3094-4a82-8073-9ce89d33b234 (“coredns-f9fd979d6-l5x7f_kube-system(617dd051-3094-4a82-8073-9ce89d33b234)”), skipping: failed to “StartContainer” for “coredns” with CrashLoopBackOff: “back-off 5m0s restarting failed container=coredns pod=coredns-f9fd979d6-l5x7f_kube-system(617dd051-3094-4a82-8073-9ce89d33b234)” Jun 29 08:07:46 node3 kubelet[1376]: I0629 08:07:46.534557 1376 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 259115d5fc44000388cd294c6446569174292610e12d3566d39f9d7f9592433d Jun 29 08:07:46 node3 kubelet[1376]: E0629 08:07:46.535470 1376 pod_workers.go:191] Error syncing pod 79fd6d25-89c3-488c-be6c-0b5269e83eda (“coredns-f9fd979d6-bl6jp_kube-system(79fd6d25-89c3-488c-be6c-0b5269e83eda)”), skipping: failed to “StartContainer” for “coredns” with CrashLoopBackOff: “back-off 5m0s restarting failed container=coredns pod=coredns-f9fd979d6-bl6jp_kube-system(79fd6d25-89c3-488c-be6c-0b5269e83eda)”
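To narrow the kubelet journal down, a couple of illustrative journalctl invocations (the time window and grep pattern are assumptions):

journalctl -u kubelet -f --since "1 hour ago"              ### follow only recent kubelet logs
journalctl -u kubelet --no-pager | grep -i 'error\|fail'   ### quick scan for errors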

troubleshooting CrashLoopBackOff

when a container restarts or goes down, it is very difficult to troubleshoot the reason; maybe resources (memory, cpu) are not sufficient.

docker events check oom

docker events -f 'event=oom' --since '10h'

checking the container's logs

maybe the logic of the container image itself is wrong. when a docker container only stays up for a very short time window, it's difficult to check its logs, and even if you do, the logs from the previous run will be gone. So container logs must be kept somewhere else so they can be checked later.

find the container on the worker node you want to troubleshoot

sudo docker ps -a
b599491403d0  b41efb321922  "opt/nokia/scripts"  4 minutes ago  Exited (1) 3 minutes ago  k8s_amms_cmm-qa45g-amms-0_cmm-cd_0d6eed48-636f-4d01-8250-908e1e7c7076_69

docker inspect the container to check volume mount

docker inspect <container-id>

"Type": "bind",
"Source": "/var/lib/kubelet/pods/0d6eed48-636f-4d01-8250-908e1e7c7076/volumes/kubernetes.io~empty-dir/shared-log",   #### host directory
"Destination": "/var/log",   #### container dir
"Mode": "",
"RW": true,
"Propagation": "rprivate"
},

here /var/log is the log directory

ls -l /var/lib/kubelet/pods/0d6eed48-636f-4d01-8250-908e1e7c7076/volumes/kubernetes.io~empty-dir/shared-log -rw-r–r– 1 root root 24069 Jul 12 17:21 init_aim_container.log -rw-r–r– 1 root root 376230 Jul 12 17:21 initContainer.log -rw-r–r– 1 root root 76680 Jul 12 17:21 init_platservices_container.log -rw-r–r– 1 root root 31657 Jul 12 17:21 init_service_container.log -rw-r–r– 1 root root 9585 Jul 12 17:21 init_unbound.log -rw-r—– 1 root root 523750 Jul 12 17:21 local.log -rw-r–r– 1 root root 506963 Jul 12 17:21 MMEtk.log -rw-r–r– 1 root root 1000038 Jul 12 14:51 MMEtk.log.old -rw-r–r– 1 root root 34965 Jul 12 17:22 platservices_script.log -rw-r–r– 1 root root 57084 Jul 12 17:21 post_init_container.log -rw-r–r– 1 root root 32305 Jul 12 17:21 probe_server.log -rw-r–r– 1 root root 13845 Jul 12 17:22 restart_pod.log -rw-r–r– 1 root root 1412 Jul 12 17:15 serf_check.log -rw-r–r– 1 root root 86620 Jul 12 17:21 serf_handler.log -rw-r–r– 1 root root 222977 Jul 12 17:22 serf.log -rw——- 1 root root 7705 Jul 12 17:22 sshd.log drwxr-xr-x 2 root root 4096 Jul 12 10:08 unbound -rw-r–r– 1 root root 702 Jul 12 10:08 unbound_monitor.log

every time the pod restarts, the shared-log volume stays in the same directory, since kube only restarts the pod and the pod name does not change. all the logs in that dir are appended to, so you can still see the previous logs.

checking node resource status

vagrant@master:~$ kubectl describe node node1 ======================================================================================= Name: node1 Roles: <none> Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Thu, 30 Sep 2021 08:31:07 +0000 Taints: <none> Unschedulable: false Lease: HolderIdentity: node1 AcquireTime: <unset> RenewTime: Fri, 08 Oct 2021 02:31:24 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message —- —— —————– —————— —— ——- MemoryPressure False Fri, 08 Oct 2021 02:29:44 +0000 Fri, 08 Oct 2021 02:24:42 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Fri, 08 Oct 2021 02:29:44 +0000 Fri, 08 Oct 2021 02:24:42 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Fri, 08 Oct 2021 02:29:44 +0000 Fri, 08 Oct 2021 02:24:42 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Fri, 08 Oct 2021 02:29:44 +0000 Fri, 08 Oct 2021 02:24:42 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled Addresses: InternalIP: 192.168.26.11 Hostname: node1 Capacity: cpu: 8 ephemeral-storage: 64800356Ki hugepages-2Mi: 0 memory: 41194568Ki pods: 110 Allocatable: cpu: 8 ephemeral-storage: 59720007991 hugepages-2Mi: 0 memory: 41092168Ki pods: 110 System Info: Machine ID: 05e6c882f8fe4b9b98be0ea92df84a72 System UUID: d48bb45a-7a5f-4a76-b84a-dbd16fa790b8 Boot ID: a54a699d-2ecc-47d5-8ea3-21334f8b2cba Kernel Version: 5.0.0-20-generic OS Image: Ubuntu 18.04.2 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://20.10.8 Kubelet Version: v1.19.4 Kube-Proxy Version: v1.19.4 PodCIDR: 10.244.2.0/24 PodCIDRs: 10.244.2.0/24 Non-terminated Pods: (2 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE ——— —- ———— ———- ————— ————- — kube-system calico-node-56dxc 250m (3%) 0 (0%) 0 (0%) 0 (0%) 7d17h kube-system kube-proxy-kqgj9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7d18h Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits ——– ——– —— cpu 250m (3%) 0 (0%) memory 0 (0%) 0 (0%) ephemeral-storage 0 (0%) 0 (0%) hugepages-2Mi 0 (0%) 0 (0%) Events: Type Reason Age From Message —- —— —- —- ——- Normal Starting 7d18h kubelet Starting kubelet. Normal NodeAllocatableEnforced 7d18h kubelet Updated Node Allocatable limit across pods Normal Starting 7d17h kube-proxy Starting kube-proxy. Normal NodeHasSufficientMemory 6m27s (x3 over 7d18h) kubelet Node node1 status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 6m27s (x3 over 7d18h) kubelet Node node1 status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 6m27s (x3 over 7d18h) kubelet Node node1 status is now: NodeHasSufficientPID Normal NodeReady 6m27s (x2 over 7d17h) kubelet Node node1 status is now: NodeReady ====================================================================================== vagrant@master:~$

Failed to create pod sandbox

k8s uses a pod sandbox to create the real pod on the node. For example, if you create a pod with one (or multiple) containers to run, it looks like this on the node:
=========================
vagrant@master:~$ sudo docker ps |grep control
bed36a073d66   quay.io/calico/kube-controllers   "/usr/bin/kube-contr…"   k8s_calico-kube-controllers-69f49c8d66-fwfnc_kube-system_8d463cb3-06ed-4f74-87cf-1b1294bdd30e_0
d71bbaa59388   k8s.gcr.io/pause:3.2              "/pause"                 k8s_POD_calico-kube-controllers-69f49c8d66-fwfnc_kube-system_8d463cb3-06ed-4f74-87cf-1b1294bdd30e_0
=============================

=====================================================================
vagrant@master:~$ kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-69f49c8d66-fwfnc   1/1     Running   0          7d18h
================================================================================

one container is k8s_POD_calico-kube-controllers (the sandbox), the other one is k8s_calico-kube-controllers. if you encounter a "Failed to create pod sandbox" error, it means the k8s_POD creation failed. It is usually related to resource allocation: cpu, memory, volumes and network interfaces; if one of them fails, the sandbox will fail to be created.


network for pod “coredns-f9fd979d6-2d6x6”: networkPlugin cni failed to set up pod “coredns-f9fd979d6-2d6x6_kube-system” network: Multus: [kube-system/coredns-f9fd979d6-2d6x6]: error adding container to network “k8s-pod-network”: delegateAdd: error invoking conflistAdd - “k8s-pod-network”: conflistAdd: error in getting result from AddNetworkList:


/etc/cni/net.d is the cni config directory; the sandbox will look there for the network definition to add, so make sure it holds only the intended config (no stale leftovers).
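A short sketch of checking that directory on the affected node; the exact file names depend on your CNI/Multus setup, so treat them as examples:

ls -l /etc/cni/net.d/
cat /etc/cni/net.d/*.conflist          ### inspect the network list the sandbox will use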

network service mesh

A service mesh is a dedicated layer that provides secure service-to-service communication for on-prem, cloud, or multi-cloud infrastructure. Service meshes are often used in conjunction with the microservice architectural pattern, but can provide value in any scenario where complex networking is involved.

Service meshes typically consist of a control plane and a data plane. The control plane is responsible for securing the mesh, facilitating service discovery, health checking, policy enforcement, and other similar operational concerns.

A service mesh works at the service level (service name): every application provides a service, and a client requesting that service needs to know its ip:port. But in a k8s cluster the ip address is not fixed; the application pods may be deployed on other k8s worker nodes, and then the ip may change. In this case the dnsname is very important for the client interface: it can always request by dnsname even though the ip may change.

 _________________________________        _____________________________
| Application A  | sidecar proxy  |------| sidecar proxy | Application B|
|________________|________________|      |_______________|_____________|

inside a mesh, if Application A wants to initiate a request to Application B for service B, it uses its sidecar proxy to get the real ip from the service mesh control plane; sidecar proxy A then sends an ip:port request towards Application B, and sidecar proxy B can allow the request through to Application B or not, based on the policy configured in the service mesh.

A service mesh has a datacenter that manages all the services within the mesh. If you want to provide a service to other applications in the mesh, you must register the service with the datacenter. For example, Application B registers its service B1 (DNS name, IP, port) with the datacenter; Application A initiates requests using the DNS name, sidecar proxy A queries the datacenter for the real IP behind that DNS name, and the datacenter returns it.

The sidecar proxy usually runs as its own container/pod, but if you want a more native setup for latency-sensitive workloads, you can also run consul directly inside the application pods:

 _________________________________            _________________________________
|  Application A                  |          |  Application B                  |
|                   consul agent  |----------|  consul agent                   |
|_________________________________|          |_________________________________|


take consul as a service mesh example.

consul node and services

A consul node is an application that provides services, i.e. the application together with its sidecar proxy (the consul agent).

[t@cmm-qa45g-necc0 consul]# consul members Node Address Status Type Build Protocol DC Segment necc0 10.244.225.97:8301 alive server 1.9.0 2 dc1 <all> necc1 10.244.199.252:8301 alive server 1.9.0 2 dc1 <all> necc2 10.244.103.147:8301 alive server 1.9.0 2 dc1 <all> alms0 10.244.103.136:8301 alive client 1.9.0 2 dc1 <default> cpps0 10.244.225.94:8301 alive client 1.9.0 2 dc1 <default> cpps1 10.244.40.198:8301 alive client 1.9.0 2 dc1 <default> ctcs0 10.244.199.248:8301 alive client 1.9.0 2 dc1 <default> dbs0 10.244.104.145:8301 alive client 1.9.0 2 dc1 <default> eems0 10.244.103.129:8301 alive client 1.9.0 2 dc1 <default> ipds0 10.244.225.80:8301 alive client 1.9.0 2 dc1 <default> ipds1 10.244.40.199:8301 alive client 1.9.0 2 dc1 <default> lihs0 10.244.103.130:8301 alive client 1.9.0 2 dc1 <default>

[root@cmm-qa45g-necc0 /]# consul catalog nodes -detailed Node ID Address DC TaggedAddresses Meta alms0 c0ee0223-64ec-12c2-abc0-a9f3a7c94d35 10.244.103.136 dc1 lan=10.244.103.136, lan_ipv4=10.244.103.136, wan=10.244.103.136, wan_ipv4=10.244.103.136 consul-network-segment=, vm_type=alms cpps0 8871c059-6bac-2baf-c7b0-793a06feffda 10.244.225.94 dc1 lan=10.244.225.94, lan_ipv4=10.244.225.94, wan=10.244.225.94, wan_ipv4=10.244.225.94 consul-network-segment=, vm_type=cpps cpps1 abadaa28-d4b9-535f-1da6-ba2bf5834c8d 10.244.40.198 dc1 lan=10.244.40.198, lan_ipv4=10.244.40.198, wan=10.244.40.198, wan_ipv4=10.244.40.198 consul-network-segment=, vm_type=cpps ctcs0 24ef4cf9-d60e-52da-23d3-e78d08700c79 10.244.199.248 dc1 lan=10.244.199.248, lan_ipv4=10.244.199.248, wan=10.244.199.248, wan_ipv4=10.244.199.248 consul-network-segment=, vm_type=ctcs dbs0 ca1a9669-4d8f-a597-e0b2-bf931ceab68b 10.244.104.145 dc1 lan=10.244.104.145, lan_ipv4=10.244.104.145, wan=10.244.104.145, wan_ipv4=10.244.104.145 consul-network-segment=, pool_id=0, pool_mem=0, pool_type=DBS, vip=None, vm_type=dbs eems0 784e9ee4-bd23-a5c5-f15e-7000aeb2947f 10.244.103.129 dc1 lan=10.244.103.129, lan_ipv4=10.244.103.129, wan=10.244.103.129, wan_ipv4=10.244.103.129 consul-network-segment=, vm_type=eems ipds0 bd108456-0b25-2f44-47df-d159df92c469 10.244.225.80 dc1 lan=10.244.225.80, lan_ipv4=10.244.225.80, wan=10.244.225.80, wan_ipv4=10.244.225.80 consul-network-segment=, vm_type=ipds ipds1 b1088d2b-ed2c-cab7-d423-ad17b6975aea 10.244.40.199 dc1 lan=10.244.40.199, lan_ipv4=10.244.40.199, wan=10.244.40.199, wan_ipv4=10.244.40.199 consul-network-segment=, vm_type=ipds lihs0 53ed2d2e-8aa1-41fb-a4ab-cb8391e19965 10.244.103.130 dc1 lan=10.244.103.130, lan_ipv4=10.244.103.130, wan=10.244.103.130, wan_ipv4=10.244.103.130 consul-network-segment=, vm_type=lihs necc0 72a965fe-a7ba-c014-ab54-8d234f8342ce 10.244.225.97 dc1 lan=10.244.225.97, lan_ipv4=10.244.225.97, wan=10.244.225.97, wan_ipv4=10.244.225.97 consul-network-segment=, vm_type=necc necc1 f77bea95-fa71-4e2b-4aa1-1b17d78f879d 10.244.199.252 dc1 lan=10.244.199.252, lan_ipv4=10.244.199.252, wan=10.244.199.252, wan_ipv4=10.244.199.252 consul-network-segment=, vm_type=necc necc2 f068725f-eb04-5aad-d464-4540f5ce6066 10.244.103.147 dc1 lan=10.244.103.147, lan_ipv4=10.244.103.147, wan=10.244.103.147, wan_ipv4=10.244.103.147 consul-network-segment=, vm_type=necc

consul agent starts the Consul agent and runs until an interrupt is received. The agent represents a single node in a cluster. In the application pod, start consul to join the service mesh:

/usr/bin/consul agent -config-dir /etc/consul

An haproxy configuration template (consul-template syntax) can then resolve the registered service:
===================
frontend sentinel
    mode tcp
    bind *:26378
    default_backend sentinel

backend sentinel
    mode tcp{{range service "sentinel"}}
    server {{.Node}}_{{.Address}}:{{.Port}} {{.Address}}:{{.Port}} {{end}}
==============================================================
which means: for service "sentinel" this renders to a line like
    server necc0_10.244.225.97:26378 10.244.225.97:26378

### list the services registered in the datacenter; every service here carries the tags 0,1,2
[root@cmm-qa45g-necc0 consul]# consul catalog services -tags
sentinel 0,1,2
vault 0,1,2

consul dns service

[root@cmm-qa45g-necc0 consul]# netstat -anp |grep -w 53 tcp 0 0 127.0.0.1:53 0.0.0.0:* LISTEN 1676/consul udp 0 0 127.0.0.1:53 0.0.0.0:* 1676/consul [root@cmm-qa45g-necc0 consul]# ps aux |grep consul root 1676 4.1 0.1 819652 99256 ? Sl Aug25 1389:53 /usr/bin/consul agent -config-dir /etc/consul

query consul nodes

[root@cmm-qa45g-necc0 consul]# dig alms0.node.consul ;; ANSWER SECTION: alms0.node.consul. 0 IN A 10.244.103.136

query consul service

[root@cmm-qa45g-necc0 consul]# dig sentinel.service.consul ;; ANSWER SECTION: sentinel.service.consul. 0 IN A 10.244.199.252 sentinel.service.consul. 0 IN A 10.244.225.97 sentinel.service.consul. 0 IN A 10.244.103.147

consul acl (access control list)

Consul ACL Policy List Command: consul acl policy list

istio

istio deployment

kube api no_proxy settings

vim /etc/kubernetes/manifests/kube-apiserver.yaml no_proxy=…,istio-sidecar-injector.istio-system.svc

Volume setup

MountVolume.SetUp failed for volume “istio-certs” : failed to sync secret cache: timed out waiting for the condition

Workaround: kubectl delete crd policies.authentication.istio.io

api resources: gateway, destinationrule, virtualservice

vagrant@master:~/istio-1.4.2$ kubectl apply -f samples/tcp-echo/tcp-echo-all-v1.yaml -n istio-io-tcp-traffic-shifting
gateway.networking.istio.io/tcp-echo-gateway created
destinationrule.networking.istio.io/tcp-echo-destination created
virtualservice.networking.istio.io/tcp-echo created

vagrant@master:~/istio-1.4.2$ sudo kubectl api-resources -n istio-io-tcp-traffic-shifting |grep -i gateway gateways gw networking.istio.io/v1alpha3 true Gateway vagrant@master:~/istio-1.4.2$ sudo kubectl get gateways -n istio-io-tcp-traffic-shifting NAME AGE tcp-echo-gateway 49m vagrant@master:~/istio-1.4.2$ sudo kubectl api-resources -n istio-io-tcp-traffic-shifting |grep -i destination destinationrules dr networking.istio.io/v1alpha3 true DestinationRule vagrant@master:~/istio-1.4.2$ sudo kubectl get dr -n istio-io-tcp-traffic-shifting NAME HOST AGE tcp-echo-destination tcp-echo 50m vagrant@master:~/istio-1.4.2$ sudo kubectl api-resources -n istio-io-tcp-traffic-shifting |grep -i virtualservice virtualservices vs networking.istio.io/v1alpha3 true VirtualService vagrant@master:~/istio-1.4.2$ sudo kubectl get vs -n istio-io-tcp-traffic-shifting NAME GATEWAYS HOSTS AGE tcp-echo [“tcp-echo-gateway”] [“*”] 51m

virtualservice linked with gateway and service version (subset)

kubectl get virtualservice tcp-echo -o yaml -n istio-io-tcp-traffic-shifting

spec:
  gateways:
  - tcp-echo-gateway
  hosts:
  - "*"
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v1

the gateway in the istio-ingressgateway pod binds port 31400 for forwarding

vagrant@master:~$ sudo kubectl get gateways tcp-echo-gateway -n istio-io-tcp-traffic-shifting -o yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
...
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - "*"
    port:
      name: tcp
      number: 31400
      protocol: TCP

vagrant@master:~$ kubectl exec -n istio-system istio-ingressgateway-864fd8ffc8-m6vjl – netstat -an Active Internet connections (servers and established) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 0.0.0.0:31400 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:15090 0.0.0.0:* LISTEN

The DestinationRule for host tcp-echo defines both subsets v1 and v2:

vagrant@master:~$ sudo kubectl get DestinationRule -n istio-io-tcp-traffic-shifting
NAME                   HOST       AGE
tcp-echo-destination   tcp-echo   130m

vagrant@master:~$ sudo kubectl get DestinationRule -n istio-io-tcp-traffic-shifting -o yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
...
spec:
  host: tcp-echo
  subsets:
  - labels:
      version: v1
    name: v1
  - labels:
      version: v2
    name: v2

in host sleep

map the istio ingressgateway's external IP and port for access from outside

INGRESS_HOST is the IP of the node where the ingress gateway pod runs:
vagrant@master:~$ kubectl get po -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].status.hostIP}'

vagrant@master:~$ sudo kubectl get service istio-ingressgateway -n istio-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE istio-ingressgateway LoadBalancer 10.99.182.159 <pending> 15020:30786/TCP,31400:30776/TCP,80:30482/TCP,443:30372/TCP,15029:30804/TCP,15030:32098/TCP,15031:30561/TCP,15032:31985/TCP,15443:31416/TCP 8d

Port 30776 is exposed as a nodePort, so a request to INGRESS_HOST:30776 is forwarded to port 31400 in the istio-ingressgateway pod:
vagrant@master:~$ sudo kubectl get service istio-ingressgateway -n istio-system -o yaml
...
  - name: tcp
    nodePort: 30776
    port: 31400
    protocol: TCP
    targetPort: 31400

send requests from outside k8s

vagrant@master:~$ kubectl exec sleep-854565cb79-fs8jr -n istio-io-tcp-traffic-shifting -c sleep – sh -c “(date; sleep 1) | nc $INGRESS_HOST 30776” one Tue Jan 5 08:46:43 UTC 2021

#### if you change the virtual service tcp-echo configuration's subset from v1 to v2, then
vagrant@master:~$ kubectl exec sleep-854565cb79-fs8jr -n istio-io-tcp-traffic-shifting -c sleep – sh -c “(date; sleep 1) | nc $INGRESS_HOST 30776”
two Tue Jan 5 08:46:43 UTC 2021

add weights to two different microservice versions

kubectl get virtualservice tcp-echo -o yaml -n istio-io-tcp-traffic-shifting >virtualservice_tcp-echo.yaml_v2

edit the file virtualservice_tcp-echo.yaml_v2:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
...
spec:
  ...
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v1
      weight: 80
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v2
      weight: 20

apply it as follows: kubectl apply -f virtualservice_tcp-echo.yaml_v2

vagrant@master:~$ for i in {1..20}; do kubectl exec “$(kubectl get pod -l app=sleep -n istio-io-tcp-traffic-shifting -o jsonpath={.items..metadata.name})” -c sleep -n istio-io-tcp-traffic-shifting – sh -c “(date; sleep 1) | nc $INGRESS_HOST $TCP_INGRESS_PORT”; done one Tue Jan 5 09:04:36 UTC 2021 one Tue Jan 5 09:04:38 UTC 2021 one Tue Jan 5 09:04:39 UTC 2021 one Tue Jan 5 09:04:41 UTC 2021 one Tue Jan 5 09:04:42 UTC 2021 two Tue Jan 5 09:04:44 UTC 2021 one Tue Jan 5 09:04:45 UTC 2021 one Tue Jan 5 09:04:47 UTC 2021 one Tue Jan 5 09:04:48 UTC 2021 one Tue Jan 5 09:04:50 UTC 2021 one Tue Jan 5 09:04:51 UTC 2021 one Tue Jan 5 09:04:53 UTC 2021 two Tue Jan 5 09:04:54 UTC 2021 one Tue Jan 5 09:04:56 UTC 2021 one Tue Jan 5 09:04:57 UTC 2021 two Tue Jan 5 09:04:59 UTC 2021 one Tue Jan 5 09:05:00 UTC 2021 one Tue Jan 5 09:05:02 UTC 2021 one Tue Jan 5 09:05:04 UTC 2021 one Tue Jan 5 09:05:05 UTC 2021

istio mesh

The actual proxying is done by Envoy (the data plane); the proxy configuration is pushed by Pilot (the control plane).

get an overview of the proxies in your mesh

ubuntu@master-cmm23-1:~$ istioctl proxy-status NAME CDS LDS EDS RDS PILOT VERSION istio-egressgateway-74c46fc97c-tbjhf.istio-system SYNCED SYNCED SYNCED NOT SENT istio-pilot-75786cc7b5-tbdvk 1.5.0 istio-ingressgateway-6966dc8c66-ks4fv.istio-system SYNCED SYNCED SYNCED NOT SENT istio-pilot-75786cc7b5-tbdvk 1.5.0

Retrieve diffs between Envoy and Istiod (Pilot, in the control plane)

pilot VS. envoy
ubuntu@master-cmm23-1:~$ istioctl proxy-status istio-ingressgateway-6966dc8c66-ks4fv.istio-system
--- Pilot Clusters
+++ Envoy Clusters
@@ -24,15 +24,15 @@
      "commonTlsContext": {
        "tlsCertificates": [
          {
            "certificateChain": {
              "filename": "/etc/certs/cert-chain.pem"
            },
            "privateKey": {
-              "filename": "/etc/certs/key.pem"
+              "filename": "[redacted]"

Listeners Match
Routes Match


Here you can see that the listeners and routes match but the clusters are out of sync.

proxy configuration for clusters, listeners, routes of pods

vagrant@master:~/istio-1.4.2$ bin/istioctl proxy-status NAME CDS LDS EDS RDS PILOT VERSION sleep-854565cb79-fs8jr.istio-io-tcp-traffic-shifting SYNCED SYNCED SYNCED SYNCED istio-pilot-64f794cf58-jqmq5 1.4.2 tcp-echo-v1-6b459455b6-5pmf5.istio-io-tcp-traffic-shifting SYNCED SYNCED SYNCED SYNCED istio-pilot-64f794cf58-jqmq5 1.4.2 tcp-echo-v2-7bbc85bff5-tzvvn.istio-io-tcp-traffic-shifting SYNCED SYNCED SYNCED SYNCED istio-pilot-64f794cf58-jqmq5 1.4.2

vagrant@master:~/istio-1.4.2$ bin/istioctl proxy-status istio-ingressgateway-864fd8ffc8-m6vjl.istio-system Clusters Match Listeners Match Routes Match (RDS last loaded at Mon, 28 Dec 2020 06:27:37 UTC) vagrant@master:~/istio-1.4.2$

get cluster proxy config

vagrant@master:~/istio-1.4.2$ bin/istioctl proxy-config cluster sleep-854565cb79-fs8jr.istio-io-tcp-traffic-shifting istio-ingressgateway.istio-system.svc.cluster.local 31400 - outbound EDS sleep.istio-io-tcp-traffic-shifting.svc.cluster.local 80 - outbound EDS sleep.istio-io-tcp-traffic-shifting.svc.cluster.local 80 http inbound STATIC tcp-echo.istio-io-tcp-traffic-shifting.svc.cluster.local 9000 - outbound EDS tcp-echo.istio-io-tcp-traffic-shifting.svc.cluster.local 9000 v1 outbound EDS tcp-echo.istio-io-tcp-traffic-shifting.svc.cluster.local 9000 v2 outbound EDS

vagrant@master:~/istio-1.4.2$ bin/istioctl proxy-config cluster tcp-echo-v1-6b459455b6-5pmf5.istio-io-tcp-traffic-shifting
sleep.istio-io-tcp-traffic-shifting.svc.cluster.local 80 - outbound EDS
tcp-echo.istio-io-tcp-traffic-shifting.svc.cluster.local 9000 - outbound EDS
tcp-echo.istio-io-tcp-traffic-shifting.svc.cluster.local 9000 tcp inbound STATIC
tcp-echo.istio-io-tcp-traffic-shifting.svc.cluster.local 9000 v1 outbound EDS
tcp-echo.istio-io-tcp-traffic-shifting.svc.cluster.local 9000 v2 outbound EDS

get listener proxy config

vagrant@master:~/istio-1.4.2$ bin/istioctl proxy-config listener tcp-echo-v1-6b459455b6-5pmf5.istio-io-tcp-traffic-shifting ADDRESS PORT TYPE 10.244.219.93 9000 TCP ## netcat tcp echo server pods fd76:cbb9:e435:db4d:f2f2:402d:3b99:a19d 9000 TCP fe80::e84e:69ff:fef7:5fd7 9000 TCP 10.244.219.93 15020 TCP 10.99.182.159 31400 TCP # istio_ingress_gateway pods

get routes

In the ingressgateway config, the productpage routes each have two elements, a match and a route, e.g.:
"match": { "path": "/login" }

vagrant@master:~/istio-1.4.2$ bin/istioctl proxy-config routes istio-ingressgateway-864fd8ffc8-m6vjl.istio-system -o json [ { “name”: “http.80”, “virtualHosts”: [ { “name”: “*:80”, “domains”: [ “*”, “*:80” ], “routes”: [ { “match”: { “path”: “/productpage”, “caseSensitive”: true }, “route”: { “cluster”: “outbound|9080||productpage.default.svc.cluster.local”, “timeout”: “0s”, “retryPolicy”: { “retryOn”: “connect-failure,refused-stream,unavailable,cancelled,resource-exhausted,retriable-status-codes”, “numRetries”: 2, “retryHostPredicate”: [ { “name”: “envoy.retry_host_predicates.previous_hosts” } ], “hostSelectionRetryMaxAttempts”: “5”, “retriableStatusCodes”: [ 503 ] }, “maxGrpcTimeout”: “0s” }, “metadata”: { “filterMetadata”: { “istio”: { “config”: “/apis/networking/v1alpha3/namespaces/default/virtual-service/bookinfo” } } }, “decorator”: { “operation”: “productpage.default.svc.cluster.local:9080/productpage” }, “typedPerFilterConfig”: { “mixer”: { “@type”: “type.googleapis.com/istio.mixer.v1.config.client.ServiceConfig”, “disableCheckCalls”: true, “mixerAttributes”: { “attributes”: { “destination.service.host”: { “stringValue”: “productpage.default.svc.cluster.local” }, “destination.service.name”: { “stringValue”: “productpage” }, “destination.service.namespace”: { “stringValue”: “default” }, “destination.service.uid”: { “stringValue”: “istio://default/services/productpage” } } },

[ { “virtualHosts”: [ { “name”: “backend”, “domains”: [ “*” ], “routes”: [ { “match”: { “prefix”: “/stats/prometheus” }, “route”: { “cluster”: “prometheus_stats” } } ] } ] } ]

deploy bookinfo demo

url host and port

kubectl get service -n istio-system
istio-ingressgateway LoadBalancer 10.99.182.159 <pending> 15020:30786/TCP,31400:30776/TCP,80:30482/TCP,443:30372/TCP,15029:30804/TCP,15030:32098/TCP,15031:30561/TCP,15032:31985/TCP,15443:31416/TCP 21d
### the relevant mapping here is 80:30482

kubectl get po -l istio=ingressgateway -n istio-system -o jsonpath=’{.items[0].status.hostIP}’ 192.168.26.10

so outside the pods the curl url is http://192.168.26.10:30482/productpage

istio tcpdump

enable tcpdump in the istio-sidecar-injector

kubectl get configmap istio-sidecar-injector -n istio-system -o yaml
save the yaml to a file and modify it as follows:

Set values.global.proxy.privileged=true, or simply remove the surrounding template condition so that privileged is always set:
{{- if .Values.global.proxy.privileged }}
privileged: true
{{- end }}
#### then save and kubectl apply -f <file>

kubectl exec istio-ingressgateway-864fd8ffc8-m6vjl -n istio-system – sudo tcpdump port 9080 -A #################################################### vagrant@master:~/istio-1.4.2$ kubectl exec productpage-v1-764fd8c446-dggtm -c istio-proxy – sudo tcpdump port 9080 -A tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes 06:23:51.393391 IP 10-244-219-80.istio-ingressgateway.istio-system.svc.cluster.local.50194 > productpage-v1-764fd8c446-dggtm.9080: Flags [S], seq 2668994244, win 64400, options [mss 1400,sackOK,TS val 3821962340 ecr 0,nop,wscale 7], length 0 06:23:51.393428 IP productpage-v1-764fd8c446-dggtm.9080 > 10-244-219-80.istio-ingressgateway.istio-system.svc.cluster.local.50194: Flags [S.], seq 451943903, ack 2668994245, win 65236, options [mss 1400,sackOK,TS val 2976544916 ecr 3821962340,nop,wscale 7], length 0 06:23:51.393467 IP 10-244-219-80.istio-ingressgateway.istio-system.svc.cluster.local.50194 > productpage-v1-764fd8c446-dggtm.9080: Flags [.], ack 1, win 504, options [nop,nop,TS val 3821962340 ecr 2976544916], length 0 06:23:51.393633 IP 10-244-219-80.istio-ingressgateway.istio-system.svc.cluster.local.50194 > productpage-v1-764fd8c446-dggtm.9080: Flags [P.], seq 1:859, ack 1, win 504, options [nop,nop,TS val 3821962341 ecr 2976544916], length 858 …e.jx.GET /productpage HTTP/1.1 host: 192.168.26.10:30482 user-agent: curl/7.29.0 accept: / x-forwarded-for: 192.168.121.73 x-forwarded-proto: http x-envoy-internal: true x-request-id: bf9e3f47-f9b1-4a8e-972e-37a637462d40 x-envoy-decorator-operation: productpage.default.svc.cluster.local:9080/productpage x-istio-attributes: CikKGGRlc3RpbmF0aW9uLnNlcnZpY2UubmFtZRINEgtwcm9kdWN0cGFnZQoqCh1kZXN0aW5hdGlvbi5zZXJ2aWNlLm5hbWVzcGFjZRIJEgdkZWZhdWx0Ck8KCnNvdXJjZS51aWQSQRI/a3ViZXJuZXRlczovL2lzdGlvLWluZ3Jlc3NnYXRld2F5LTg2NGZkOGZmYzgtbTZ2amwuaXN0aW8tc3lzdGVtCkMKGGRlc3RpbmF0aW9uLnNlcnZpY2UuaG9zdBInEiVwcm9kdWN0cGFnZS5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsCkEKF2Rlc3RpbmF0aW9uLnNlcnZpY2UudWlkEiYSJGlzdGlvOi8vZGVmYXVsdC9zZXJ2aWNlcy9wcm9kdWN0cGFnZQ== x-b3-traceid: 50d1b0d7a9e91186ca141028260defb9 x-b3-spanid: ca141028260defb9 x-b3-sampled: 0 content-length: 0

06:23:51.393646 IP productpage-v1-764fd8c446-dggtm.9080 > 10-244-219-80.istio-ingressgateway.istio-system.svc.cluster.local.50194: Flags [.], ack 859, win 503, options [nop,nop,TS val 2976544917 ecr 3821962341], length 0 06:23:51.408163 IP productpage-v1-764fd8c446-dggtm.43150 > 10-244-219-85.details.default.svc.cluster.local.9080: Flags [P.], seq 2659335201:2659336016, ack 958487560, win 502, options [nop,nop,TS val 1059073370 ecr 2263891484], length 815 ? -Z..>.GET /details/0 HTTP/1.1 host: details:9080 user-agent: curl/7.29.0 accept-encoding: gzip, deflate accept: / x-request-id: bf9e3f47-f9b1-4a8e-972e-37a637462d40 x-forwarded-proto: http x-envoy-decorator-operation: details.default.svc.cluster.local:9080/* x-istio-attributes: Cj8KGGRlc3RpbmF0aW9uLnNlcnZpY2UuaG9zdBIjEiFkZXRhaWxzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwKPQoXZGVzdGluYXRpb24uc2VydmljZS51aWQSIhIgaXN0aW86Ly9kZWZhdWx0L3NlcnZpY2VzL2RldGFpbHMKJQoYZGVzdGluYXRpb24uc2VydmljZS5uYW1lEgkSB2RldGFpbHMKKgodZGVzdGluYXRpb24uc2VydmljZS5uYW1lc3BhY2USCRIHZGVmYXVsdApECgpzb3VyY2UudWlkEjYSNGt1YmVybmV0ZXM6Ly9wcm9kdWN0cGFnZS12MS03NjRmZDhjNDQ2LWRnZ3RtLmRlZmF1bHQ= x-b3-traceid: 50d1b0d7a9e91186ca141028260defb9 x-b3-spanid: 143d6f81dac352f5 x-b3-parentspanid: 1c9123d2f7e063d3 x-b3-sampled: 0 content-length: 0

06:23:51.426026 IP 10-244-219-85.details.default.svc.cluster.local.9080 > productpage-v1-764fd8c446-dggtm.43150: Flags [P.], seq 1:344, ack 815, win 502, options [nop,nop,TS val 2263920313 ecr 1059073370], length 343 E…73@.?.6. ..U ..V#x..9!...?P……….. ….? -ZHTTP/1.1 200 OK content-type: application/json server: istio-envoy date: Thu, 21 Jan 2021 06:23:51 GMT content-length: 178 x-envoy-upstream-service-time: 15

{“id”:0,”author”:”William Shakespeare”,”year”:1595,”type”:”paperback”,”pages”:200,”publisher”:”PublisherA”,”language”:”English”,”ISBN-10”:”1234567890”,”ISBN-13”:”123-1234567890”} 06:23:51.426052 IP productpage-v1-764fd8c446-dggtm.43150 > 10-244-219-85.details.default.svc.cluster.local.9080: Flags [.], ack 344, win 502, options [nop,nop,TS val 1059073388 ecr 2263920313], length 0 E..4..@.@.., ..V ..U..#x..?P9!]_……….. ? -l…. 06:23:51.445888 IP productpage-v1-764fd8c446-dggtm.45856 > 10-244-219-90.reviews.default.svc.cluster.local.9080: Flags [P.], seq 90287540:90288355, ack 1335629575, win 502, options [nop,nop,TS val 347072225 ecr 3898393708], length 815 E..c.:@.@… ..V ..Z. #x.a..O………….. …...lGET /reviews/0 HTTP/1.1 host: reviews:9080 user-agent: curl/7.29.0 accept-encoding: gzip, deflate accept: / x-request-id: bf9e3f47-f9b1-4a8e-972e-37a637462d40 x-forwarded-proto: http x-envoy-decorator-operation: reviews.default.svc.cluster.local:9080/* x-istio-attributes: Cj8KGGRlc3RpbmF0aW9uLnNlcnZpY2UuaG9zdBIjEiFyZXZpZXdzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwKPQoXZGVzdGluYXRpb24uc2VydmljZS51aWQSIhIgaXN0aW86Ly9kZWZhdWx0L3NlcnZpY2VzL3Jldmlld3MKJQoYZGVzdGluYXRpb24uc2VydmljZS5uYW1lEgkSB3Jldmlld3MKKgodZGVzdGluYXRpb24uc2VydmljZS5uYW1lc3BhY2USCRIHZGVmYXVsdApECgpzb3VyY2UudWlkEjYSNGt1YmVybmV0ZXM6Ly9wcm9kdWN0cGFnZS12MS03NjRmZDhjNDQ2LWRnZ3RtLmRlZmF1bHQ= x-b3-traceid: 50d1b0d7a9e91186ca141028260defb9 x-b3-spanid: 1ed7cd05a3363b5c x-b3-parentspanid: 1c9123d2f7e063d3 x-b3-sampled: 0 content-length: 0

06:23:51.461957 IP 10-244-219-90.reviews.default.svc.cluster.local.9080 > productpage-v1-764fd8c446-dggtm.45856: Flags [P.], seq 1:513, ack 815, win 502, options [nop,nop,TS val 3898820673 ecr 347072225], length 512 E..4..@.?… ..Z ..V#x. O….a…………. .cHA….HTTP/1.1 200 OK x-powered-by: Servlet/3.1 content-type: application/json date: Thu, 21 Jan 2021 06:23:51 GMT content-language: en-US content-length: 295 x-envoy-upstream-service-time: 15 server: istio-envoy

{“id”: “0”,”reviews”: [{ “reviewer”: “Reviewer1”, “text”: “An extremely entertaining play by Shakespeare. The slapstick humour is refreshing!”},{ “reviewer”: “Reviewer2”, “text”: “Absolutely fun and entertaining. The play lacks thematic depth when compared to other plays by Shakespeare.”}]} 06:23:51.461979 IP productpage-v1-764fd8c446-dggtm.45856 > 10-244-219-90.reviews.default.svc.cluster.local.9080: Flags [.], ack 513, win 502, options [nop,nop,TS val 347072241 ecr 3898820673], length 0 E..4.;@.@… ..V ..Z. #x.a..O………….. …..cHA 06:23:51.471039 IP productpage-v1-764fd8c446-dggtm.9080 > 10-244-219-80.istio-ingressgateway.istio-system.svc.cluster.local.50194: Flags [P.], seq 1:4358, ack 859, win 503, options [nop,nop,TS val 2976544994 ecr 3821962341], length 4357 E..9.)@.@… ########################################################

istio feature

Request Routing

match on an HTTP request header to route to a different version


apiVersion: networking.istio.io/v1beta1
kind: VirtualService
...
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1

Fault Injection

add delay time


$ kubectl get virtualservice ratings -o yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
...
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        fixedDelay: 7s
        percentage:
          value: 100
    match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: ratings
        subset: v1
  - route:
    - destination:
        host: ratings
        subset: v1

Request Timeout

A timeout for HTTP requests can be specified using the timeout field of the route rule. By default the request timeout is disabled; in this task you override the reviews service timeout to half a second (0.5s).


$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
    timeout: 0.5s
EOF


The client will wait 0.5s for a response; if it takes longer than 0.5s, the request times out and is dropped.

http traffic shifting

add weight to different routes


$ kubectl get virtualservice reviews -o yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
...
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 50
    - destination:
        host: reviews
        subset: v3
      weight: 50

tcp traffic shifting

$ kubectl get virtualservice tcp-echo -o yaml -n istio-io-tcp-traffic-shifting
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
...
spec:
  ...
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v1
      weight: 80
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v2
      weight: 20

circuit breaking

At any one time only one connection is allowed, configured via a DestinationRule:
============================================================
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutiveErrors: 1
      interval: 1s
      baseEjectionTime: 3m
      maxEjectionPercent: 100
EOF
===================================================
In the DestinationRule settings you specified maxConnections: 1 and http1MaxPendingRequests: 1. These rules mean that if you exceed more than one connection and request concurrently, you will see some failures as the istio-proxy opens the circuit for further requests and connections. When concurrent HTTP requests are sent, some return error code 503:


Code 200 : 17 (85.0 %) Code 503 : 3 (15.0 %)


Prometheus

Prometheus is quickly becoming the standard monitoring tool for Docker and Kubernetes. A typical setup deploys a Prometheus server and metrics exporters, sets up kube-state-metrics, pulls and collects those metrics, and configures alerts with Alertmanager and dashboards with Grafana.

Two technology shifts took place that created a need for a new monitoring framework: DevOps culture: Prior to the emergence of DevOps, monitoring consisted of hosts, networks, and services. Now, developers need the ability to easily integrate app and business related metrics as an organic part of the infrastructure, because they are more involved in the CI/CD pipeline and can do a lot of operations-debugging on their own. Monitoring needed to be democratized, made more accessible, and cover additional layers of the stack.

Containers and Kubernetes: Container-based infrastructures are radically changing how we do logging, debugging, high-availability, etc., and monitoring is not an exception. Now you have a huge number of volatile software entities, services, virtual network addresses, and exposed metrics that suddenly appear or vanish. Traditional monitoring tools are not designed to handle this.

install prometheus

create namespace

kubectl create namespace prometheus

Add the prometheus-community chart repository.

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

Deploy Prometheus.

helm upgrade -i prometheus prometheus-community/prometheus \
  --namespace prometheus \
  --set alertmanager.persistentVolume.storageClass="gp2",server.persistentVolume.storageClass="gp2"
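The same storage-class overrides can instead live in a small values file; a minimal sketch (the file name is arbitrary, and gp2 assumes an AWS EBS storage class exactly as in the --set flags above):

# prometheus-values.yaml (hypothetical file name)
alertmanager:
  persistentVolume:
    storageClass: gp2
server:
  persistentVolume:
    storageClass: gp2

then deploy with: helm upgrade -i prometheus prometheus-community/prometheus --namespace prometheus -f prometheus-values.yaml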

docker ‘s registry 2 container for local image pull from insted of public docker hub

deploying the registry container

Docker Registry This image contains an implementation of the Docker Registry HTTP API V2 for use with Docker 1.6+. See github.com/docker/distribution for more details about what it is.

Run a local registry, quick version:
$ docker run -d -p 5000:5000 --restart always --name registry registry:2

push an image to the local registry at <hostname>:5000

$ docker pull ubuntu                        #### pull an image from the public docker hub
$ docker tag ubuntu localhost:5000/ubuntu   #### tag the image as localhost:5000/ubuntu
$ docker push localhost:5000/ubuntu         #### push the image to localhost:5000

for i in `cat fl`; do
  sudo docker load < deployment/NPV-ATE-890/images/${i}-CMM21.0.0_B1_C1798.tar.gz
  sudo docker tag ${i}:CMM21.0.0_B1_C1649 192.168.26.10:5000/${i}:CMM21.0.0_B1_C1798
  sudo docker push 192.168.26.10:5000/${i}:CMM21.0.0_B1_C1649   ### push the image with its version tag to 192.168.26.10:5000
done
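A minimal sketch (pod name and tag are made up) of referencing an image from this local registry in a pod spec; note that the container runtime on every node must be configured to trust 192.168.26.10:5000 (e.g. as an insecure registry) for the pull to succeed:

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-from-local-registry            # hypothetical name
spec:
  containers:
  - name: ubuntu
    image: 192.168.26.10:5000/ubuntu:latest   # the image pushed above
    command: ["sleep", "infinity"]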

check if there’s any image in the registry container

/v2/_catalog

curl -X GET http://172.24.17.100:5000/v2/_catalog {“repositories”:[“cmm-aimcpps”,”cmm-aimctcs”,”cmm-aimdbs”,”cmm-aimeems”,”cmm-aimfds”,”cmm-aimipds”,”cmm-aimipps”,”cmm-aimlihs”,”cmm-aimpaps”,”cmm-aimsfds”,”cmm-aimufds”,”cmm-alarm”,”cmm-alms”,”cmm-init”,”cmm-loglocal”,”cmm-lxprofile”,”cmm-mmemon”,”cmm-operator”,”cmm-platservices”,”cmm-redis”,”ipps”,”necc”,”paps”]}

/v2/<imagename>/tags/list

ubuntu@lm886-Master:~$ curl -X GET http://172.24.17.100:5000/v2/cmm-aimcpps/tags/list {“name”:”cmm-aimcpps”,”tags”:[“CMM22.0.0_B8000_C4”,”CMM22.0.0_B1_C1171”]}

kubernetes scheduling/deployment of pods in detail

which controller a pod is controlled by

ubuntu@node3:~$ kubectl describe pod coredns-f9fd979d6-7mf54 -n kube-system |grep -i control Controlled By: ReplicaSet/coredns-f9fd979d6o

ubuntu@node3:~$ kubectl describe pod kube-multus-ds-amd64-2vs29 -n kube-system |grep -i control controller-revision-hash=546d959cf Controlled By: DaemonSet/kube-multus-ds-amd64

ubuntu@node3:~$ kubectl describe pod cmm-qa45g-ipds-0 -n cmm-cd |grep -i control controller-revision-hash=cmm-qa45g-ipds-5d8dcb489 Controlled By: StatefulSet/cmm-qa45g-ipds

pods scheduled on specific nodes using Node-Selectors

ubuntu@node3:~$ kubectl describe pod coredns-f9fd979d6-7mf54 -n kube-system |grep -i node-sel Node-Selectors: kubernetes.io/os=linux ubuntu@node3:~$ kubectl get nodes –show-labels NAME STATUS ROLES AGE VERSION LABELS node10 Ready <none> 7d21h v1.19.4 kubernetes.io/arch=amd64,kubernetes.io/hostname=node10,kubernetes.io/os=linux,nodetype=worker,region=region1 node14 Ready <none> 7d21h v1.19.4 kubernetes.io/arch=amd64,kubernetes.io/hostname=node14,kubernetes.io/os=linux,nodetype=worker,region=region1 node3 Ready master 7d21h v1.19.4 kubernetes.io/arch=amd64,,kubernetes.io/hostname=node3,kubernetes.io/os=linux,node-role.kubernetes.io/master=,nodetype=master
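A minimal sketch of pinning a pod to the worker nodes via the nodetype label shown above (pod name and image are made up):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-on-worker            # hypothetical name
spec:
  nodeSelector:
    nodetype: worker               # label taken from the node list above
  containers:
  - name: nginx
    image: nginx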

three kinds of pod controllers

Deployment

Deployment is a resource for deploying a stateless application: if it uses a PVC, all replicas share the same volume and none of them has its own state. The backing storage must have the ReadWriteMany or ReadOnlyMany accessMode if you run more than one replica pod. (picture of deployment)
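A minimal Deployment sketch to illustrate the point above: both replicas mount the same PVC, which therefore needs a ReadWriteMany-capable backend once the replicas land on different nodes (all names here are made up):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                        # hypothetical
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        volumeMounts:
        - name: shared-data
          mountPath: /data
      volumes:
      - name: shared-data
        persistentVolumeClaim:
          claimName: shared-pvc    # one PVC shared by every replica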

a normal deployment is backed by a matching replicaset

ubuntu@node3:~$ kubectl get ReplicaSet -A NAMESPACE NAME DESIRED CURRENT READY AGE cmm-cd cmm-operator-855b68d5db 1 1 1 124m kube-system calico-kube-controllers-69f49c8d66 1 1 1 7d21h kube-system consul-server-795879879c 1 1 1 7d21h kube-system coredns-f9fd979d6 2 2 2 7d21h kube-system metrics-server-76d6dcf9d5 1 1 1 7d21h local-path-storage local-path-provisioner-7bf96f54f5 1 1 1 7d21h ubuntu@node3:~$ kubectl get deployment -A NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE cmm-cd cmm-operator 1/1 1 1 124m kube-system calico-kube-controllers 1/1 1 1 7d21h kube-system consul-server 1/1 1 1 7d21h kube-system coredns 2/2 2 2 7d21h kube-system metrics-server 1/1 1 1 7d21h local-path-storage local-path-provisioner 1/1 1 1 7d21h

StatefulSet

StatefulSet is used for stateful applications: each pod replica has its own state and uses its own volume. StatefulSet is useful for running clustered workloads, e.g. a Hadoop or MySQL cluster, where each node has its own storage. (picture of statefulsets)
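For contrast, a minimal StatefulSet sketch: volumeClaimTemplates gives every replica its own PVC (data-db-0, data-db-1, ...), unlike the shared PVC in the Deployment sketch above (names, image and sizes are made up):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db                         # hypothetical
spec:
  serviceName: db                  # assumes a matching headless service exists
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: mysql:5.7
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:            # one PVC per replica: data-db-0, data-db-1
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi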

which kind of k8s controller controls these pods?

ubuntu@node3:~$ kubectl get pods -A|grep ipds cmm-cd cmm-qa45g-ipds-0 5/5 Running 0 131m cmm-cd cmm-qa45g-ipds-1 5/5 Running 0 131m

ubuntu@node3:~$ kubectl describe pod cmm-qa45g-ipds-0 -n cmm-cd |grep -i control controller-revision-hash=cmm-qa45g-ipds-5d8dcb489 Controlled By: StatefulSet/cmm-qa45g-ipds

DaemonSet

ubuntu@node3:~$ kubectl describe pod kube-multus-ds-amd64-2vs29 -n kube-system |grep -i control controller-revision-hash=546d959cf Controlled By: DaemonSet/kube-multus-ds-amd64

multus will be deployed on every node in the cluster, including the master.
vagrant@master:~$ kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES

coredns-558bd4d5db-gz7bl 1/1 Running 2 22h 10.244.166.132 node1 <none> <none> etcd-master 1/1 Running 2 22h 192.168.26.10 master <none> <none> kube-apiserver-master 1/1 Running 2 22h 192.168.26.10 master <none> <none> kube-controller-manager-master 1/1 Running 26 22h 192.168.26.10 master <none> <none> kube-multus-ds-amd64-5kthn 1/1 Running 0 129m 192.168.26.10 master <none> <none> kube-multus-ds-amd64-942ns 1/1 Running 0 129m 192.168.26.15 node5 <none> <none> kube-multus-ds-amd64-ct7t9 1/1 Running 0 129m 192.168.26.12 node2 <none> <none> kube-multus-ds-amd64-r5n24 1/1 Running 0 129m 192.168.26.11 node1 <none> <none> kube-multus-ds-amd64-wb8j7 1/1 Running 0 129m 192.168.26.13 node3 <none> <none> kube-multus-ds-amd64-ww9j6 1/1 Running 0 129m 192.168.26.14 node4 <none> <none>

vagrant@master:~$ kubectl describe pod kube-multus-ds-amd64-ww9j6 -n kube-system |grep -i control controller-revision-hash=84b96cb86f Controlled By: DaemonSet/kube-multus-ds-amd64 vagrant@master:~$ kubectl describe pod kube-multus-ds-amd64-ww9j6 -n kube-system |grep -i node Node: node4/192.168.26.14 tier=node Node-Selectors: kubernetes.io/arch=amd64 node.kubernetes.io/disk-pressure:NoSchedule op=Exists node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/network-unavailable:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists node.kubernetes.io/pid-pressure:NoSchedule op=Exists node.kubernetes.io/unreachable:NoExecute op=Exists node.kubernetes.io/unschedulable:NoSchedule op=Exists

(picture of daemonset) DaemonSet is a controller, similar to ReplicaSet, that ensures the pod runs on all nodes of the cluster. If a node is added to or removed from the cluster, the DaemonSet automatically adds or deletes the pod.
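A minimal DaemonSet sketch; the blanket toleration mirrors the ones shown on the multus pod above so the pod can also land on tainted nodes such as the master (names and image are made up):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent                 # hypothetical
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      tolerations:                 # tolerate any taint, e.g. the master's NoSchedule taint
      - operator: Exists
      containers:
      - name: agent
        image: busybox
        command: ["sleep", "infinity"]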

pods delete

delete one pod

kubectl delete pod <pod-name> -n <namespace>
When you delete a pod controlled by one of the controllers above, the controller recreates the pod automatically.

delete all pods in a replica set (deployment, statefulset)

ubuntu@lm890-Master:~$ kubectl describe sts ate-qa-ipds -n npv-ate -n npv-ate |grep -i replica Replicas: 2 desired | 2 total

kubectl scale sts ate-qa-necc -n npv-ate –replicas=0

delete all pods in a daemonset

ubuntu@lm890-Master:~$ kubectl describe DaemonSets kube-multus-ds-amd64 -n kube-system |grep -i replica   ### no replicas field here
kubectl scale DaemonSets kube-multus-ds-amd64 -n kube-system --replicas=0 will not work (a DaemonSet cannot be scaled)

#kubectl rollout restart DaemonSets kube-multus-ds-amd64 -n kube-system
this will restart all the pods in the DaemonSet kube-multus-ds-amd64

Vagrant

Vagrant starts a virtual machine (libvirt or virtualbox) on the host using the configuration in a Vagrantfile. The libvirt provider is quite limited compared to virtualbox; based on my experiments, public_network and vagrant package do not work with libvirt.

Vagrantfile definition

vagrant box

config.vm.box = “ubuntu/xenial64”

define hostname and domain name

ENV['VAGRANT_DEFAULT_PROVIDER'] = 'virtualbox'
Vagrant.configure("2") do |config|
  config.vm.define "ubuntu-01" do |config|
    config.vm.hostname = "ubuntu-01"
    config.vm.box = "generic/ubuntu1804"
  end
end

shell provision

config.vm.provision “shell”, inline: $script

vagrant provision

this command re-executes the vm.provision "shell" blocks in a vm that is already up.

other provisions

reload is a vagrant plugin:
## vagrant plugin install vagrant-reload
subconfig.vm.provision :reload   ## this will execute the reload plugin

vagrant network definition

#config.vm.network "forwarded_port", guest: 80, host: 8080   ### expose guest port 80 on port 8080 of the host that runs vagrant up
#config.vm.network "private_network", ip: "192.168.33.10"
#config.vm.network "public_network"

vagrant box add/list/remove

vagrant box add --provider libvirt generic/ubuntu1804

vagrant up

this command brings up the box using the Vagrantfile in the current directory:
vagrant up --provider libvirt
cat Vagrantfile


Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "4096"
  end

  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get dist-upgrade -y
    shutdown -r now
  SHELL
end


$script = <<-SCRIPT
echo I am provisioning TTTT...
date > /etc/vagrant_provisioned_at
SCRIPT

Vagrant.configure("2") do |config|
  config.vm.box = "generic/ubuntu1804"
  config.vm.box_check_update = false
  config.vm.provision "shell", inline: $script
end


vagrant destroy

vagrant destroy   ### this destroys the vm started by vagrant up in the same directory

vagrant env clean up

Sometimes vagrant gets into a bad state and you need to remove the box and its metadata:
vagrant destroy                ### destroys the vm started by vagrant up in the same directory
vagrant box remove <boxname>
rm ./.vagrant/*
rm -rf ~/.vagrant.d/data
[root@allinone ubuntu]# ls ~/.vagrant.d/
boxes bundler data gems insecure_private_key plugins.json rgloader setup_version tmp

clean vagrant remaining shell script:


vagrant destroy
rm -rf ~/.vagrant.d/data/
rm -rf ~/.vagrant.d/rgloader/
rm -rf ~/.vagrant.d/tmp/
rm -rf .vagrant


vagrant provision

In a vm that is already up, this re-runs the vm.provision "shell" blocks defined in the Vagrantfile.

vagrant package

save the currently running vm as a box; this only works with virtualbox, not libvirt:
vagrant package --output box_name.box --base "<vm machine name>" --vagrantfile Vagrantfile

You can add box_name.box with vagrant box add --name "<name of box>" file:///<path of the box_name.box>; a subsequent vagrant up of <name of box> gives you the same vm as when it was packaged.

vagrant@master:~$ kubectl api-resources|grep cni network-attachment-definitions net-attach-def k8s.cni.cncf.io/v1 true NetworkAttachmentDefinition ippools whereabouts.cni.cncf.io/v1alpha1 true IPPool vagrant@master:~$ kubectl get NetworkAttachmentDefinition -A error: the server doesn’t have a resource type “NetworkAttachmentDefinition” vagrant@master:~$ kubectl get network-attachment-definitions -A NAMESPACE NAME AGE npv-ate external-ipds-cmmeth2-npv-ate 21m npv-ate external-ipps-cmmeth2-npv-ate 21m npv-ate external-lihs-cmmeth2-npv-ate 21m npv-ate external-paps-cmmeth2-npv-ate 21m npv-ate ipvlan-nw-alms-ip-alms-npv-ate 21m npv-ate ipvlan-nw-oam-ip-necc-npv-ate 21m vagrant@master:~$
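For reference, a NetworkAttachmentDefinition is just a custom resource whose spec.config carries a CNI configuration; a minimal ipvlan sketch with whereabouts IPAM (the interface, name and IP range are made up, the plugin choice simply mirrors the resources listed above):

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ipvlan-example             # hypothetical
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "ipvlan",
    "master": "eth2",
    "ipam": {
      "type": "whereabouts",
      "range": "192.168.100.0/24"
    }
  }'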

linux virtual network devices

VETH

calico, the default k8s-pod-network here, connects each pod to its node with a veth pair

IPVLAN

kubectl services

Types of Kubernetes Service: Kubernetes Services allow external connections to the Pods inside the cluster and also manage internal communication among the Pods, via the different Service types defined in the ServiceSpec. These service types are:

ClusterIP: exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.

NodePort: exposes the service on each Node's IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service routes, is automatically created. You will be able to contact the NodePort service from outside the cluster by requesting <NodeIP>:<NodePort>.

LoadBalancer: exposes the service externally using a cloud provider's load balancer.
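A minimal NodePort Service sketch (names and ports are made up); pods carrying the matching label become reachable at <NodeIP>:30080 from outside the cluster:

apiVersion: v1
kind: Service
metadata:
  name: web-nodeport               # hypothetical
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80                       # ClusterIP port
    targetPort: 8080               # container port
    nodePort: 30080                # static port on every node (30000-32767 by default)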

if you want to expose a ClusterIP service, use port-forward

from outside the k8s nodes

On the master: kubectl port-forward --address 0.0.0.0 service/kube-dns 9200
From outside the k8s nodes: nc <k8s-master-ip> 9200
(by default kubectl port-forward binds only to 127.0.0.1, so --address 0.0.0.0 is needed for access from outside)

install virtualbox

export KERN_DIR=/usr/src/kernels/`uname -r`
Install Oracle VirtualBox and set it up: use the following command to install VirtualBox with the yum command line tool. It installs VirtualBox 6.0 on your system.

yum install VirtualBox-6.0 After installation, we need to rebuild kernel modules using the following command.

service vboxdrv setup

There were problems setting up VirtualBox. To re-start the set-up process, run /sbin/vboxconfig

CustomResourceDefinition and its usage

kubectl create secret docker-registry docker --docker-server=docker.io --docker-username=mqyyy777 --docker-email=mqyyy777@163.com --docker-password= -n <namespace>
Since we created a secret named docker, we can pull images with this secret:

imagePullSecrets:
- name: docker
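A minimal sketch of where this goes in a pod spec (pod and image names are made up; the secret name matches the one created above):

apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod          # hypothetical name
spec:
  imagePullSecrets:
  - name: docker                   # the docker-registry secret created above
  containers:
  - name: app
    image: docker.io/mqyyy777/some-private-image:latest   # hypothetical image
    command: ["sleep", "infinity"]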

create a CustomResourceDefinition (CRD)

example_crd.yaml:


apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
              image:
                type: string
              replicas:
                type: integer
              imagePullSecrets:
                description: ImagePullSecrets is an array of references to container registry pull secrets to use. These are applied to all images to be pulled.
                type: array
                items:
                  type: object
                  properties:
                    name:
                      type: string
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
    shortNames:
    - ct

kubectl apply -f example_crd.yaml

you can get the crd definition back with: kubectl get customresourcedefinition crontabs.stable.example.com -o yaml

create custom objects

If you save the following YAML to my-crontab.yaml:
==================================
apiVersion: "stable.example.com/v1"
kind: CronTab
metadata:
  name: my-new-cron-object
spec:
  cronSpec: "* * * * */5"
  image: my-awesome-cron-image
  imagePullSecrets:
  - name: docker
============================
kubectl apply -f my-crontab.yaml
You can then manage your CronTab objects using kubectl. For example:

kubectl get crontab Should print a list like this:

NAME AGE my-new-cron-object 6s Resource names are not case-sensitive when using kubectl, and you can use either the singular or plural forms defined in the CRD, as well as any short names.

You can also view the raw YAML data:

kubectl get ct -o yaml

delete the crd

kubectl delete -f example_crd.yaml
kubectl get crontabs   ### after the CRD is deleted, the server no longer has a resource type "crontabs"