
K3D_INSTALLATION #3

Open

Jean-Baptiste-Lasselle opened this issue May 17, 2020 · 17 comments

Jean-Baptiste-Lasselle commented May 17, 2020

  • k3d requires only one thing before install: docker (or containerd)
  • If you follow the default automated k3d installation, you will get a k3d version 1.x
  • I want the latest k3d v3 beta, which currently is v3.0.0-beta.1
  • Now, to install that version of k3d, I'll have to put together a little script, from which I'll later build a better-crafted solution:
    • On a VM that I access using the hostname pegasusio.io, I install k3d and create the k3s multi-master cluster:
#!/bin/bash
export K3D_VERSION=v3.0.0-beta.1
# use darwin for macOS; this runs in bash on both Linux and macOS because of the shebang
export K3D_OS=linux
export K3D_CPU_ARCH=amd64
export K3D_GH_BINARY_RELEASE_DWLD_URI="https://github.com/rancher/k3d/releases/download/${K3D_VERSION}/k3d-${K3D_OS}-${K3D_CPU_ARCH}"

# first, run the installation of the latest version so that all bash helper env files are installed

wget -q -O - https://raw.githubusercontent.com/rancher/k3d/master/install.sh | TAG=${K3D_VERSION} bash
# then delete the installed standalone k3d binary
if [ -f /usr/local/bin/k3d ]; then
  sudo rm /usr/local/bin/k3d
else 
  echo "k3d latest version was not properly installed, prior to beta version"
  exit 2
fi;

curl -L ${K3D_GH_BINARY_RELEASE_DWLD_URI} --output ./k3d

sudo mv k3d /usr/local/bin
# make the downloaded binary executable
sudo chmod a+x /usr/local/bin/k3d

k3d version
k3d --version


k3d create cluster jblCluster --masters 3 --workers 4
# k3d delete cluster jblCluster

# this creates a ~/.kube/config file with the kube configuration in it, to use with kubectl
export KUBECONFIG=$(k3d get kubeconfig jblCluster)
ls -allh ${KUBECONFIG}
cat ${KUBECONFIG}
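A quick sanity check right after creation never hurts; a minimal sketch using only standard kubectl against the kubeconfig exported above:

# all 3 masters and 4 workers should eventually report Ready
kubectl get nodes -o wide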
  • After that, from my workstation, I retrieve the kubeconfig and deploy the Kubernetes dashboard:
# retrieve kubeconfig

if [ -d ~/.k3d/ ]; then 
  rm -fr ~/.k3d/
fi;

mkdir -p ~/.k3d/

# make it silent 
# scp -i ~/.ssh/id_rsa jbl@pegasusio.io:~/.kube/config ~/.k3d/config
# copy the config file itself, so it lands at ~/.k3d/config where the sed and KUBECONFIG below expect it
scp jbl@pegasusio.io:~/.kube/config ~/.k3d/config
sed -i "s#0.0.0.0#pegasusio.io#g" ~/.k3d/config
# - deploying k8s official dashboard
export KUBECONFIG=~/.k3d/config
export GITHUB_URL=https://github.com/kubernetes/dashboard/releases
export VERSION_KUBE_DASHBOARD=$(curl -w '%{url_effective}' -I -L -s -S ${GITHUB_URL}/latest -o /dev/null | sed -e 's|.*/||')
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/${VERSION_KUBE_DASHBOARD}/aio/deploy/recommended.yaml

# - now create an admin user for dashboard
kubectl apply -f ./k3s/dashboard/kubernetes-dashboard-admin.yaml
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
export DASH_TOKEN=$(kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')|grep 'token:'|awk '{print $2}')
# And let's go
clear
echo ""
echo " Now access your kubernetes dashboard at : "
echo ""
echo "  http://127.0.0.1:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login " 
echo ""
echo "  Login using using this token : "
echo ""
echo "  ${DASH_TOKEN}  "
echo ""
kubectl cluster-info
kubectl proxy
  • content of kubernetes-dashboard-admin.yaml :
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Jean-Baptiste-Lasselle commented May 17, 2020

Install Flannel and the MetalLB load balancer

# - flannel: keep a local copy, then apply it
mkdir -p ./k3s/flannel/

curl -L https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml --output ./k3s/flannel/kube-flannel.yml

kubectl apply -f ./k3s/flannel/kube-flannel.yml

# - allow pods to be scheduled on the masters (remove the master taint)
kubectl taint nodes --all node-role.kubernetes.io/master-

# - Install MetalLB
mkdir -p ./k3s/metallb/

curl -L https://raw.githubusercontent.com/google/metallb/v0.8.1/manifests/metallb.yaml --output ./k3s/metallb/metallb.yaml
kubectl apply -f ./k3s/metallb/metallb.yaml
# Add a ConfigMap to customize/override the MetalLB configuration in the pods
kubectl apply -f ./k3s/metallb/metallb.configmap.yaml
  • content of metallb.configmap.yaml :
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.0.120-192.168.0.172 # Change the range here
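Since this cluster runs inside Docker, a pool of 192.168.0.x addresses is only reachable if the node containers actually sit on such a network. A minimal sketch to check which subnet the cluster's Docker network uses (jbl_network is the dedicated bridge network created in a later comment; pick a MetalLB range inside that subnet, outside Docker's own allocations):

# print the subnet Docker assigned to the bridge network
docker network inspect jbl_network -f '{{(index .IPAM.Config 0).Subnet}}'
# e.g. if it prints 172.28.0.0/16, a pool such as 172.28.255.1-172.28.255.250 would fit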
  • content of ./k3s/metallb/metallb.yaml :
apiVersion: v1
kind: Namespace
metadata:
  labels:
    app: metallb
  name: metallb-system
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  labels:
    app: metallb
  name: speaker
  namespace: metallb-system
spec:
  allowPrivilegeEscalation: false
  allowedCapabilities:
  - NET_ADMIN
  - NET_RAW
  - SYS_ADMIN
  fsGroup:
    rule: RunAsAny
  hostNetwork: true
  hostPorts:
  - max: 7472
    min: 7472
  privileged: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - '*'
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: metallb
  name: controller
  namespace: metallb-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: metallb
  name: speaker
  namespace: metallb-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app: metallb
  name: metallb-system:controller
rules:
- apiGroups:
  - ''
  resources:
  - services
  verbs:
  - get
  - list
  - watch
  - update
- apiGroups:
  - ''
  resources:
  - services/status
  verbs:
  - update
- apiGroups:
  - ''
  resources:
  - events
  verbs:
  - create
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app: metallb
  name: metallb-system:speaker
rules:
- apiGroups:
  - ''
  resources:
  - services
  - endpoints
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ''
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - extensions
  resourceNames:
  - speaker
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app: metallb
  name: config-watcher
  namespace: metallb-system
rules:
- apiGroups:
  - ''
  resources:
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app: metallb
  name: metallb-system:controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metallb-system:controller
subjects:
- kind: ServiceAccount
  name: controller
  namespace: metallb-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app: metallb
  name: metallb-system:speaker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metallb-system:speaker
subjects:
- kind: ServiceAccount
  name: speaker
  namespace: metallb-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app: metallb
  name: config-watcher
  namespace: metallb-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: config-watcher
subjects:
- kind: ServiceAccount
  name: controller
- kind: ServiceAccount
  name: speaker
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: metallb
    component: speaker
  name: speaker
  namespace: metallb-system
spec:
  selector:
    matchLabels:
      app: metallb
      component: speaker
  template:
    metadata:
      annotations:
        prometheus.io/port: '7472'
        prometheus.io/scrape: 'true'
      labels:
        app: metallb
        component: speaker
    spec:
      containers:
      - args:
        - --port=7472
        - --config=config
        env:
        - name: METALLB_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: METALLB_HOST
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        image: metallb/speaker:v0.8.1
        imagePullPolicy: IfNotPresent
        name: speaker
        ports:
        - containerPort: 7472
          name: monitoring
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
            - SYS_ADMIN
            drop:
            - ALL
          readOnlyRootFilesystem: true
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/os: linux
      serviceAccountName: speaker
      terminationGracePeriodSeconds: 0
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: metallb
    component: controller
  name: controller
  namespace: metallb-system
spec:
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: metallb
      component: controller
  template:
    metadata:
      annotations:
        prometheus.io/port: '7472'
        prometheus.io/scrape: 'true'
      labels:
        app: metallb
        component: controller
    spec:
      containers:
      - args:
        - --port=7472
        - --config=config
        image: metallb/controller:v0.8.1
        imagePullPolicy: IfNotPresent
        name: controller
        ports:
        - containerPort: 7472
          name: monitoring
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - all
          readOnlyRootFilesystem: true
      nodeSelector:
        beta.kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534
      serviceAccountName: controller
      terminationGracePeriodSeconds: 0
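After applying the manifest and the ConfigMap, a quick check that MetalLB actually came up; plain kubectl, with no assumptions beyond the metallb-system namespace used above:

# the controller deployment and one speaker per node should be Running
kubectl get pods -n metallb-system
# and the address pool should be visible in the config ConfigMap
kubectl get configmap config -n metallb-system -o yaml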
  • fall back to the stable v1.7 version of k3d, with a single-master cluster and 5 worker nodes:
# wipe out
k3d delete cluster --all && docker system prune -f --all && docker system prune -f --volumes

sudo rm $(which k3d)

# re-install latest stable
wget -q -O - https://raw.githubusercontent.com/rancher/k3d/master/install.sh | bash

# launch the cluster 

k3d create cluster jblCluster --api-port 6550 -p 8081:80@master --workers 5

rm -fr ~/.kube 
export KUBECONFIG="$(k3d get-kubeconfig --name='jblCluster')"
  • then, on your workstation, retrieve the kubeconfig to proceed with the dashboard, Flannel, and MetalLB:
rm -fr ~/.k3d
mkdir -p ~/.k3d
scp -i ~/.ssh/id_rsa -r jbl@pegasusio.io:~/.config/k3d/jblCluster/kubeconfig.yaml ~/.k3d
export KUBECONFIG=~/.k3d/kubeconfig.yaml
sed -i "s#localhost#pegasusio.io#g"  ${KUBECONFIG}
kubectl cluster-info

# then dashboard, and flannel then metallb


Jean-Baptiste-Lasselle commented May 17, 2020

Cheese

git clone https://github.com/containous/traefik cheesie/ && cd cheesie/ && git checkout v1.7 && cd ../

sed -i "s#extensions/v1beta1#apps/v1#g" cheesie/examples/k8s/cheese-deployments.yaml

kubectl create namespace cheese 
kubectl apply -f cheesie/examples/k8s/

export K3S_API_SERVER_HOST="$(cat $KUBECONFIG|grep 'server:'|awk -F ':' '{print $3}'| awk -F '/' '{print $3}')"
export K3S_API_SERVER_IP=$(ping -c 1 ${K3S_API_SERVER_HOST} |grep ttl|awk '{print $5}'|awk -F '(' '{print $2}'|awk -F ')' '{print $1}')

echo "Now add this to the end of your /etc/hosts : "
echo ""
echo " ${K3S_API_SERVER_IP}  stilton.minikube wensleydale.minikube cheddar.minikube cheeses.minikube  "
echo ""
echo "And you will access the deployed apps at : "
echo ""
echo "    http://stilton.minikube:8081/  "
echo ""
echo "    http://cheddar.minikube:8081/  "
echo ""
echo "    http://wensleydale.minikube:8081/  "
echo ""
echo "    http://cheeses.minikube:8081/stilton/  "
echo ""
echo "    http://cheeses.minikube:8081/cheddar/  "
echo ""
echo "    http://cheeses.minikube:8081/wensleydale/  "
echo ""
  • And an example output of a few kubectl commands, to let you see the resulting state. Does the Traefik ingress get its IP address from MetalLB? It shows 172.28.0.3, so it does, but this is Kubernetes in Docker, so that is a Docker network IP address.
jbl@poste-devops-jbl-16gbram:~$ kubectl get all 
NAME                               READY   STATUS    RESTARTS   AGE
pod/wensleydale-79f5fc4c5d-qr84x   1/1     Running   0          20m
pod/cheddar-59666cdbc4-rsg8b       1/1     Running   0          20m
pod/stilton-d9485c498-qbnqn        1/1     Running   0          20m
pod/wensleydale-79f5fc4c5d-mm5hq   1/1     Running   0          20m
pod/stilton-d9485c498-jgdvn        1/1     Running   0          20m
pod/cheddar-59666cdbc4-2plvh       1/1     Running   0          20m

NAME                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/kubernetes    ClusterIP   10.43.0.1      <none>        443/TCP   50m
service/stilton       ClusterIP   10.43.64.255   <none>        80/TCP    20m
service/cheddar       ClusterIP   10.43.238.7    <none>        80/TCP    20m
service/wensleydale   ClusterIP   10.43.13.114   <none>        80/TCP    20m

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/wensleydale   2/2     2            2           20m
deployment.apps/stilton       2/2     2            2           20m
deployment.apps/cheddar       2/2     2            2           20m

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/wensleydale-79f5fc4c5d   2         2         2       20m
replicaset.apps/stilton-d9485c498        2         2         2       20m
replicaset.apps/cheddar-59666cdbc4       2         2         2       20m
jbl@poste-devops-jbl-16gbram:~$ kubectl get all,ingresses 
NAME                               READY   STATUS    RESTARTS   AGE
pod/wensleydale-79f5fc4c5d-qr84x   1/1     Running   0          20m
pod/cheddar-59666cdbc4-rsg8b       1/1     Running   0          20m
pod/stilton-d9485c498-qbnqn        1/1     Running   0          20m
pod/wensleydale-79f5fc4c5d-mm5hq   1/1     Running   0          20m
pod/stilton-d9485c498-jgdvn        1/1     Running   0          20m
pod/cheddar-59666cdbc4-2plvh       1/1     Running   0          20m

NAME                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/kubernetes    ClusterIP   10.43.0.1      <none>        443/TCP   50m
service/stilton       ClusterIP   10.43.64.255   <none>        80/TCP    20m
service/cheddar       ClusterIP   10.43.238.7    <none>        80/TCP    20m
service/wensleydale   ClusterIP   10.43.13.114   <none>        80/TCP    20m

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/wensleydale   2/2     2            2           20m
deployment.apps/stilton       2/2     2            2           20m
deployment.apps/cheddar       2/2     2            2           20m

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/wensleydale-79f5fc4c5d   2         2         2       20m
replicaset.apps/stilton-d9485c498        2         2         2       20m
replicaset.apps/cheddar-59666cdbc4       2         2         2       20m

NAME                                HOSTS                                                    ADDRESS      PORTS   AGE
ingress.extensions/cheese           stilton.minikube,cheddar.minikube,wensleydale.minikube   172.28.0.3   80      20m
ingress.extensions/cheeses          cheeses.minikube                                         172.28.0.3   80      20m
ingress.extensions/cheese-default   *                                                        172.28.0.3   80      20m
jbl@poste-devops-jbl-16gbram:~$ kubectl get all,ingresses,daemonsets --all-namespaces |grep nginx
jbl@poste-devops-jbl-16gbram:~$ kubectl get svc,ingresses,daemonsets --all-namespaces
NAMESPACE              NAME                                TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                       AGE
default                service/kubernetes                  ClusterIP      10.43.0.1       <none>        443/TCP                       51m
kube-system            service/kube-dns                    ClusterIP      10.43.0.10      <none>        53/UDP,53/TCP,9153/TCP        51m
kube-system            service/metrics-server              ClusterIP      10.43.61.252    <none>        443/TCP                       51m
kube-system            service/traefik-prometheus          ClusterIP      10.43.124.99    <none>        9100/TCP                      51m
kube-system            service/traefik                     LoadBalancer   10.43.134.138   172.28.0.3    80:31195/TCP,443:31618/TCP    51m
kubernetes-dashboard   service/kubernetes-dashboard        ClusterIP      10.43.94.202    <none>        443/TCP                       45m
kubernetes-dashboard   service/dashboard-metrics-scraper   ClusterIP      10.43.101.184   <none>        8000/TCP                      45m
default                service/stilton                     ClusterIP      10.43.64.255    <none>        80/TCP                        21m
default                service/cheddar                     ClusterIP      10.43.238.7     <none>        80/TCP                        21m
default                service/wensleydale                 ClusterIP      10.43.13.114    <none>        80/TCP                        21m
kube-system            service/traefik-ingress-service     NodePort       10.43.17.239    <none>        80:32019/TCP,8080:31801/TCP   21m

NAMESPACE   NAME                                HOSTS                                                    ADDRESS      PORTS   AGE
default     ingress.extensions/cheese           stilton.minikube,cheddar.minikube,wensleydale.minikube   172.28.0.3   80      21m
default     ingress.extensions/cheeses          cheeses.minikube                                         172.28.0.3   80      21m
default     ingress.extensions/cheese-default   *                                                        172.28.0.3   80      21m

NAMESPACE        NAME                                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
kube-system      daemonset.apps/svclb-traefik                6         6         6       6            6           <none>                        51m
kube-system      daemonset.apps/kube-flannel-ds-arm64        0         0         0       0            0           <none>                        39m
kube-system      daemonset.apps/kube-flannel-ds-arm          0         0         0       0            0           <none>                        39m
kube-system      daemonset.apps/kube-flannel-ds-ppc64le      0         0         0       0            0           <none>                        39m
kube-system      daemonset.apps/kube-flannel-ds-s390x        0         0         0       0            0           <none>                        39m
kube-system      daemonset.apps/kube-flannel-ds-amd64        6         6         6       6            6           <none>                        39m
metallb-system   daemonset.apps/speaker                      6         6         6       6            6           beta.kubernetes.io/os=linux   36m
kube-system      daemonset.apps/traefik-ingress-controller   6         6         0       6            0           <none>                        21m
jbl@poste-devops-jbl-16gbram:~$ kubectl describe service/traefik -n kube-system 
Name:                     traefik
Namespace:                kube-system
Labels:                   app=traefik
                          chart=traefik-1.81.0
                          heritage=Helm
                          release=traefik
Annotations:              <none>
Selector:                 app=traefik,release=traefik
Type:                     LoadBalancer
IP:                       10.43.134.138
LoadBalancer Ingress:     172.28.0.3
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  31195/TCP
Endpoints:                10.42.3.3:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  31618/TCP
Endpoints:                10.42.3.3:443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
jbl@poste-devops-jbl-16gbram:~$ 

Jean-Baptiste-Lasselle commented

Fix this to reproduce the exact thoorium deployment and see the result; that is therefore Traefik 2.0.1


Jean-Baptiste-Lasselle commented May 17, 2020

Helm Kubernetes Deploy RocketChat

jbl@poste-devops-jbl-16gbram:~$  helm repo add stable https://kubernetes-charts.storage.googleapis.com
"stable" has been added to your repositories
jbl@poste-devops-jbl-16gbram:~$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈ 
jbl@poste-devops-jbl-16gbram:~$ helm install pokus stable/rocketchat --set mongodb.mongodbPassword=$(echo -n $(openssl rand -base64 32)),mongodb.mongodbRootPassword=$(echo -n $(openssl rand -base64 32))
NAME: pokus
LAST DEPLOYED: Mon May 18 01:56:06 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Rocket.Chat can be accessed via port 80 on the following DNS name from within your cluster:

- http://pokus-rocketchat.default

You can easily connect to the remote instance from your browser. Forward the webserver port to localhost:8888

- kubectl port-forward --namespace default $(kubectl get pods --namespace default -l "app.kubernetes.io/name=rocketchat,app.kubernetes.io/instance=pokus" -o jsonpath='{ .items[0].metadata.name }') 8888:3000

You can also connect to the container running Rocket.Chat. To open a shell session in the pod run the following:

- kubectl exec -i -t --namespace default $(kubectl get pods --namespace default -l "app.kubernetes.io/name=rocketchat,app.kubernetes.io/instance=pokus" -o jsonpath='{.items[0].metadata.name}') /bin/sh

To trail the logs for the Rocket.Chat pod run the following:

- kubectl logs -f --namespace default $(kubectl get pods --namespace default -l "app.kubernetes.io/name=rocketchat,app.kubernetes.io/instance=pokus" -o jsonpath='{ .items[0].metadata.name }')

To expose Rocket.Chat via an Ingress you need to set host and enable ingress.

helm install --set host=chat.yourdomain.com --set ingress.enabled=true stable/rocketchat
  • another try, and only the MongoDB authentication error remains:
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
helm install pokus stable/rocketchat --set mongodb.mongodbPassword='jbljbl',mongodb.mongodbRootPassword='jbljbl',mongodb.mongodbUsername='jbl'
kubectl logs -f deployment.apps/pokus-rocketchat


Jean-Baptiste-Lasselle commented May 18, 2020

Helm Kubernetes Deploy RocketChat Error

https://docs.rocket.chat/installation/helm-chart/#install-rocket-chat-chart-and-configure-mongodbusername-mongodbpassword-mongodbdatabase-and-mongodbrootpassword

helm repo add stable https://kubernetes-charts.storage.googleapis.com
# ---
# version of the rocketchat docker image you base your deployment on.

export ROCKETCHAT_OCI=rocket.chat:3.2.2

# gives an authentication error from the Rocket.Chat server to MongoDB
helm install --set mongodb.mongodbUsername=rocketchat,mongodb.mongodbPassword=changeme,mongodb.mongodbDatabase=rocketchat,mongodb.mongodbRootPassword=root-changeme,repository=${ROCKETCHAT_OCI} pokus stable/rocketchat

# gives a character-encoding error for the password
helm install pokus stable/rocketchat --set mongodb.mongodbUsername=rocketchat,mongodb.mongodbPassword=$(echo -n $(openssl rand -base64 32)),mongodb.mongodbRootPassword=$(echo -n $(openssl rand -base64 32)),repository=${ROCKETCHAT_OCI},mongodbDatabase=rocketchat

sudo apt-get install -y jq

export ENCODED_PWD1=$(jq -nr --arg v "$(jq -nr --arg v "$(echo -n $(openssl rand -base64 32))" '$v|@uri')" '$v|@uri')
export ENCODED_PWD2=$(jq -nr --arg v "$(jq -nr --arg v "$(echo -n $(openssl rand -base64 32))" '$v|@uri')" '$v|@uri')

# ---
# again gives an authentication error from the Rocket.Chat server to MongoDB, now
# that the password is properly encoded with jq
# ---
helm install pokus stable/rocketchat --set mongodb.mongodbUsername=rocketchat,mongodb.mongodbPassword=${ENCODED_PWD1},mongodb.mongodbRootPassword=${ENCODED_PWD2},repository=${ROCKETCHAT_OCI},mongodbDatabase=rocketchat

  • auth error :
~$ kubectl logs -f pod/pokus-rocketchat-68955d87b6-95gmf
/app/bundle/programs/server/node_modules/fibers/future.js:313
						throw(ex);
						^

MongoError: Authentication failed.
    at /app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/auth/auth_provider.js:46:25
    at /app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/auth/scram.js:215:18
    at Connection.messageHandler (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/connection/connect.js:334:5)
    at Connection.emit (events.js:311:20)
    at processMessage (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/connection/connection.js:364:10)
    at Socket.<anonymous> (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/connection/connection.js:533:15)
    at Socket.emit (events.js:311:20)
    at addChunk (_stream_readable.js:294:12)
    at readableAddChunk (_stream_readable.js:275:11)
    at Socket.Readable.push (_stream_readable.js:209:10)
    at TCP.onStreamRead (internal/stream_base_commons.js:186:23) {
  name: 'MongoNetworkError',
  errorLabels: [ 'TransientTransactionError' ],
  [Symbol(mongoErrorContextSymbol)]: {}
  • url encoding error :
~$ kubectl logs -f pod/pokus-rocketchat-6cfdd7bff4-rft7c
/app/bundle/programs/server/node_modules/fibers/future.js:280
						throw(ex);
						^

Error: Password contains an illegal unescaped character
    at parseConnectionString (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/url_parser.js:298:13)
    at parseHandler (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/url_parser.js:129:14)
    at module.exports (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/url_parser.js:25:12)
    at connect (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/operations/mongo_client_ops.js:195:3)
    at connectOp (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/operations/mongo_client_ops.js:284:3)
    at executeOperation (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/utils.js:416:24)
    at MongoClient.connect (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/mongo_client.js:175:10)
    at Function.MongoClient.connect (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/mongo_client.js:341:22)
    at new MongoConnection (packages/mongo/mongo_driver.js:172:11)
    at new MongoInternals.RemoteCollectionDriver (packages/mongo/remote_collection_driver.js:4:16)
    at Object.<anonymous> (packages/mongo/remote_collection_driver.js:38:10)
    at Object.defaultRemoteCollectionDriver (packages/underscore.js:784:19)
    at new Collection (packages/mongo/collection.js:97:40)
    at new AccountsCommon (packages/accounts-base/accounts_common.js:23:18)
    at new AccountsServer (packages/accounts-base/accounts_server.js:23:5)
    at packages/accounts-base/server_main.js:7:12


Jean-Baptiste-Lasselle commented May 18, 2020

UI Node CLI WRAPPER FOR PULUMI

  • jumbo success/failure messages with figlet for Node.js: https://www.sitepoint.com/javascript-command-line-interface-cli-node-js/
  • detailed info with console.log, using a Node.js console emoji package like https://github.com/xxczaki/mija or https://github.com/ardeshireshghi/lazy-console-emojis :
    • instructions to retrieve the kubeconfig
    • instructions to deploy and access the dashboard
    • instructions to access the cheesie apps
    • instructions to access the test results report
    • instructions to troubleshoot
  • I am writing the wrapper to be able to manage tests at a higher level
  • the Node.js TypeScript program will use the classic npm CI/CD build cycle
  • running the tests will mock AWS and cut out the time-consuming tasks in the pulumi up:
    • I want a --skip-iaas flag to skip the cloud provider provisioning part and just test on an existing infra, be it an OpenStack tenant with VMs, or an AWS EKS cluster.
  • the Node app just runs the usual build cycle, and we add a last phase: npm deploy (npm publish will still publish the npm package to a private npm registry).

Jean-Baptiste-Lasselle commented

Network in use for MetalLB

$ kubectl describe configmap/config -n metallb-system
Name:         config
Namespace:    metallb-system
Labels:       <none>
Annotations:  
Data
====
config:
----
address-pools:
- name: default
  protocol: layer2
  addresses:
  - 192.168.0.120-192.168.0.172 # Change the range here

Events:  <none>
  • Right. So I'll test what happens if I add a switch and a router, with a DHCP service, for a 192.168.0.0/24 network, and then try to reach the IP


Jean-Baptiste-Lasselle commented Jun 6, 2020

All in all

If you ever install k3d, never, ever install k3s.

k3d stable

  • Install k3d stable, that is, k3d v1.7.x :
# uninstall any previous k3d installation, if any
if [ -f $(which k3d) ]; then
  # wipe out all pre-existing clusters
  k3d delete cluster --all && docker system prune -f --all && docker system prune -f --volumes
  sudo rm $(which k3d)
fi;

# Install latest stable (`v1.7`)
wget -q -O - https://raw.githubusercontent.com/rancher/k3d/master/install.sh | bash
  • launch a cluster, backed by a bridge Docker network (for MetalLB later):
k3d create cluster jblCluster --api-port 6550 -p 8081:80@master --workers 5

rm -fr ~/.kube 
export KUBECONFIG="$(k3d get-kubeconfig --name='jblCluster')"

k3d 3.x (beta) (multimaster) WOOOORRRKKKS

What is VERY important for running multi-master k3d is that you have to use a dedicated Docker network on the host, of type bridge, and none of the default networks: not none (of course), not bridge (the network named bridge, which is of type bridge...), and not host (forget about that one immediately).
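To verify which network the k3d node containers actually landed on, docker ps can print the attached networks per container; a minimal check, assuming the default k3d- container name prefix:

# list every k3d container together with the Docker network it is attached to
docker ps --filter "name=k3d-" --format "table {{.Names}}\t{{.Networks}}"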

  • Install k3d 3.x (beta):
#!/bin/bash

# uninstall any previous k3d installation, if any
if [ -f $(which k3d) ]; then
  # wipe out all pre-existing clusters
  k3d delete cluster --all && docker system prune -f --all && docker system prune -f --volumes
  sudo rm $(which k3d)
fi;

# Install the v3.0.0-beta.1 release

export K3D_VERSION=v3.0.0-beta.1
# use darwin for macOS; this runs in bash on both Linux and macOS because of the shebang
export K3D_OS=linux
export K3D_CPU_ARCH=amd64
export K3D_GH_BINARY_RELEASE_DWLD_URI="https://github.com/rancher/k3d/releases/download/${K3D_VERSION}/k3d-${K3D_OS}-${K3D_CPU_ARCH}"

# first, run the installation of the latest version so that all bash helper env files are installed

wget -q -O - https://raw.githubusercontent.com/rancher/k3d/master/install.sh | TAG=${K3D_VERSION} bash
# then delete the installed standalone k3d binary
if [ -f /usr/local/bin/k3d ]; then
  sudo rm /usr/local/bin/k3d
else 
  echo "k3d latest version was not properly installed, prior to beta version"
  exit 2
fi;

curl -L ${K3D_GH_BINARY_RELEASE_DWLD_URI} --output ./k3d

sudo mv k3d /usr/local/bin

sudo chmod a+x /usr/local/bin/k3d

k3d version


docker network create --driver bridge jbl_network

# 
# ---
# multi master mode is really extremely unstable : 
# every time I spawn up a multi master, it always 
# ends up in a failed state, after a few minutes
# ---
# 
# k3d create cluster jblCluster --masters 3 --workers 5 --network jbl_network
k3d create cluster jblCluster --masters 1 --workers 9 --network jbl_network



# this creates a ~/.kube/config file with the kube configuration in it, to use with kubectl
export KUBECONFIG=$(k3d get kubeconfig jblCluster)
ls -allh ${KUBECONFIG}
cat ${KUBECONFIG}
  • Now deploy the dashboard with kubectl:
# you need your KUBECONFIG


export GITHUB_URL=https://github.com/kubernetes/dashboard/releases
export VERSION_KUBE_DASHBOARD=$(curl -w '%{url_effective}' -I -L -s -S ${GITHUB_URL}/latest -o /dev/null | sed -e 's|.*/||')
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/${VERSION_KUBE_DASHBOARD}/aio/deploy/recommended.yaml

# - now create an admin user for dashboard
kubectl apply -f ./k3s/dashboard/kubernetes-dashboard-admin.yaml
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
export DASH_TOKEN=$(kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')|grep 'token:'|awk '{print $2}')
# And let's go
clear
echo ""
echo " Now access your kubernetes dashboard at : "
echo ""
echo "  http://127.0.0.1:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login " 
echo ""
echo "  Login using using this token : "
echo ""
echo "  ${DASH_TOKEN}  "
echo ""
  • deploy MetalLB to the cluster:
# - Install MetalLB
mkdir -p ./k3s/metallb/

curl -L https://raw.githubusercontent.com/google/metallb/v0.8.1/manifests/metallb.yaml --output ./k3s/metallb/metallb.yaml
kubectl apply -f ./k3s/metallb/metallb.yaml
# Add a ConfigMap to customize/override the MetalLB configuration in the pods
kubectl apply -f ./k3s/metallb/metallb.configmap.yaml


Jean-Baptiste-Lasselle commented Jun 6, 2020

RocketChat Back

  • OK, I have a problem if I use the MongoDB included in the Rocket.Chat Helm chart.
  • So what I'll do now is deploy my own MongoDB replica set and connect Rocket.Chat to it.
  • go:
export MONGO_CLIENT_SERVICE_NAME=mongopokus-mongodb-replicaset-client
export MONGO_HELM_RELEASE=mongopokus
export MONGO_NAMESPACE=default
export MONGO_K3S_HOST=${MONGO_CLIENT_SERVICE_NAME}.${MONGO_NAMESPACE}.svc.cluster.local
#  mongopokus-mongodb-replicaset-client.default.svc.cluster.local

export MONGO_PORT_NO=27017
export MONGO_ROCKET_DB=rocketchat

# auth.adminUser
# auth.adminPassword
export MONGO_USER_NAME=rocketchat
export MONGO_USER_PWD=rocketchat

# ---
# deploy MongoDB
helm install ${MONGO_HELM_RELEASE} stable/mongodb-replicaset --set auth.adminUser=${MONGO_USER_NAME},auth.adminPassword=${MONGO_USER_PWD}

sleep 5s

# ---
# replica set healthcheck test command ("livenessProbe")

for ((i = 0; i < 3; ++i)); do kubectl exec --namespace default ${MONGO_HELM_RELEASE}-mongodb-replicaset-$i -- sh -c 'mongo --eval="printjson(rs.isMaster())"'; done

# 
# --> in the mongo shell, we must create the mongodb user and the rocketchat database
# 
export SCRIPT_MONGO="db.createUser( {  user: \"${MONGO_USER_NAME}\", pwd: \"${MONGO_USER_PWD}\", roles: [ { role: \"userAdminAnyDatabase\", db: \"admin\" } ] } );"

kubectl exec -it service/${MONGO_HELM_RELEASE}-mongodb-replicaset -- mongo --eval "$SCRIPT_MONGO"


# ---
# deploy Rocket.Chat (the URLs must be quoted, otherwise the unescaped & backgrounds the command;
# the exported variables are MONGO_USER_NAME / MONGO_USER_PWD)
helm install pokus stable/rocketchat --set mongodb.enabled=false,externalMongodbUrl="mongodb://${MONGO_USER_NAME}:${MONGO_USER_PWD}@${MONGO_K3S_HOST}:${MONGO_PORT_NO}/${MONGO_ROCKET_DB}",externalMongodbOplogUrl="mongodb://${MONGO_USER_NAME}:${MONGO_USER_PWD}@${MONGO_K3S_HOST}:${MONGO_PORT_NO}/local?replicaSet=rs0&authSource=admin"


# ERROR: password cannot be empty, so I have to define this password with other configuration parameters
# TODO: test authentication with the created user, against the rocketchat database
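For the TODO above, a minimal sketch to test authentication with the created user; it assumes the user ended up in the admin database (if db.createUser ran against another database, adjust --authenticationDatabase accordingly):

# try to authenticate as the rocketchat user and read stats from the rocketchat database
kubectl exec -it ${MONGO_HELM_RELEASE}-mongodb-replicaset-0 -- \
  mongo ${MONGO_ROCKET_DB} -u ${MONGO_USER_NAME} -p ${MONGO_USER_PWD} \
  --authenticationDatabase admin --eval 'printjson(db.stats())'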

Jean-Baptiste-Lasselle commented

ChartMuseum

jbl@poste-devops-jbl-16gbram:~$ helm install pokus-chartmuseum -f custom.yaml stable/chartmuseum
NAME: pokus-chartmuseum
LAST DEPLOYED: Sun Jun  7 01:00:38 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **

Get the ChartMuseum URL by running:

  export POD_NAME=$(kubectl get pods --namespace default -l "app=chartmuseum" -l "release=pokus-chartmuseum" -o jsonpath="{.items[0].metadata.name}")
  echo http://127.0.0.1:8780/
  kubectl port-forward $POD_NAME 8780:8080 --namespace default
jbl@poste-devops-jbl-16gbram:~$ export POD_NAME=$(kubectl get pods --namespace default -l "app=chartmuseum" -l "release=pokus-chartmuseum" -o jsonpath="{.items[0].metadata.name}")
jbl@poste-devops-jbl-16gbram:~$ echo http://127.0.0.1:8780/
http://127.0.0.1:8780/
jbl@poste-devops-jbl-16gbram:~$   kubectl port-forward $POD_NAME 8780:8080 --namespace default
Forwarding from 127.0.0.1:8780 -> 8080
Forwarding from [::1]:8780 -> 8080
Handling connection for 8780
^Cjbl@poste-devops-jbl-16gbram:~$ cat custom.yaml
env:
  open:
    STORAGE: local
persistence:
  enabled: true
  accessMode: ReadWriteOnce
  size: 8Gi
  ## A manually managed Persistent Volume and Claim
  ## Requires persistence.enabled: true
  ## If defined, PVC must be created manually before volume will be bound
  # existingClaim:

  ## Chartmuseum data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
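With the port-forward from the NOTES running, charts can be pushed and listed over ChartMuseum's HTTP API; a minimal sketch, where mychart-0.1.0.tgz is a placeholder for any chart packaged with helm package:

# upload a packaged chart
curl --data-binary "@mychart-0.1.0.tgz" http://127.0.0.1:8780/api/charts
# list the charts the museum is serving
curl http://127.0.0.1:8780/api/charts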


Jean-Baptiste-Lasselle commented Jun 6, 2020

export AIRFLOW_VERSION=v7.1.0
curl -LO https://github.com/helm/charts/raw/master/stable/airflow/examples/minikube/custom-values.yaml
kubectl create namespace airflow
helm install pegasus-airflow stable/airflow \
  --version "${AIRFLOW_VERSION}" \
  --namespace "airflow" \
  --values ./custom-values.yaml

# goes down / KO
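A first diagnostic step when the release goes down; plain kubectl, where the component=web label selector is an assumption about the labels the stable/airflow chart sets on its pods:

# see which pods are failing
kubectl get pods --namespace airflow
# inspect events and recent logs of the webserver pods
kubectl describe pods --namespace airflow -l component=web
kubectl logs --namespace airflow -l component=web --tail=50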


Jean-Baptiste-Lasselle commented Jun 6, 2020

Grafana

export HELM_GRAFANA_RELEASE=pokus-grafana
helm install ${HELM_GRAFANA_RELEASE} stable/grafana

echo "Here is the username to login into Grafana first time : "
kubectl get secret --namespace default pokus-grafana -o jsonpath="{.data.admin-user}" | base64 --decode ; echo
echo "Here is the password to login into Grafana first time : "
kubectl get secret --namespace default pokus-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

echo "http://127.0.0.1:8080/"

kubectl port-forward service/pokus-grafana 8484:80


Jean-Baptiste-Lasselle commented Jun 6, 2020

Prometheus

helm install pokus-prometheus stable/prometheus

echo "http://127.0.0.1:8485/"

kubectl port-forward service/pokus-prometheus 8485:80

SonarQube

helm repo add oteemocharts https://oteemo.github.io/charts
helm install --generate-name oteemocharts/sonarqube


Jean-Baptiste-Lasselle commented Jun 7, 2020

also try 👍


Jean-Baptiste-Lasselle commented Apr 4, 2021

MetalLB Generation 2

OK, I learned that the MetalLB layer 2 configuration relies on ARP, and I learned what ARP is. To practice, I am now experimenting with ARP between Docker networks and Docker host networks. I have found a few interesting experiments there:

So the question is: how can software use ARP? Send ARP broadcasts and send ARP responses.
... to be continued

I will also have to play with pure Docker networking, to understand how ARP travels through Docker networks...
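A concrete way to watch the ARP side is arping, which sends the ARP request itself and prints who answers; a minimal sketch using the iputils arping, where eth0 and the address are assumptions (use the host interface facing the MetalLB pool, and an address from that pool):

# broadcast "who-has 192.168.0.120" and print the MAC that answers;
# with MetalLB in layer 2 mode, the elected speaker node should reply
sudo arping -I eth0 -c 3 192.168.0.120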
