
Performance Testing Guide for KubeArmor


THIS METHODOLOGY IS NOT YET FULLY FUNCTIONAL AND WILL CHANGE DRASTICALLY

Setting up the environment

  1. A Kubernetes cluster with the sock-shop deployment installed. (Note: we'll be using a custom sock-shop deployment that runs two replicas of the front-end pod.)

  2. Apache Bench (ab) from the httpd Docker image, deployed to the cluster.
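
For reference, a vanilla sock-shop install is usually applied like the sketch below; the custom two-replica variant mentioned above may live elsewhere, so adjust the manifest path accordingly:

```bash
# Create the namespace and apply the upstream sock-shop manifest.
kubectl create namespace sock-shop
kubectl apply -f https://raw.githubusercontent.com/microservices-demo/microservices-demo/master/deploy/kubernetes/complete-demo.yaml

# Watch until all pods are ready.
kubectl get pods -n sock-shop -w
```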

Config

  • Nodes: 2-4
  • Platform: AKS
  • Workload: sock-shop
  • Replicas: 2 (front-end pod only)
  • Tool: Apache Bench (requests sent to the front-end service)
  • VM: DS2_v2

| VM     | CPU | RAM   | Data disks | Temp storage |
|--------|-----|-------|------------|--------------|
| DS2_v2 | 2   | 7 GiB | 8          | 14 GiB       |
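
If you're provisioning the cluster from scratch, the AKS setup matching this config looks roughly like the sketch below (the resource group and cluster names are hypothetical placeholders):

```bash
# Hypothetical resource group and cluster names; substitute your own.
az group create --name kubearmor-perf --location eastus
az aks create \
  --resource-group kubearmor-perf \
  --name perf-cluster \
  --node-count 2 \
  --node-vm-size Standard_DS2_v2
az aks get-credentials --resource-group kubearmor-perf --name perf-cluster
```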

P.S.: To change the default enforcer to BPF LSM, deploy this updater DaemonSet in the cluster:

apiVersion: v1
items:
- apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    labels:
      kubearmor-app: updater
    name: updater
    namespace: kubearmor
  spec:
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        kubearmor-app: updater
    template:
      metadata:
        labels:
          kubearmor-app: updater
      spec:
        containers:
        - args:
          - |
            # Skip if BPF is already active in the kernel's LSM list.
            grep "bpf" /rootfs/sys/kernel/security/lsm >/dev/null
            [[ $? -eq 0 ]] && echo "sysfs already has BPF enabled" && sleep infinity
            # Skip if GRUB is already configured with BPF (pending a reboot).
            grep "GRUB_CMDLINE_LINUX.*bpf" /rootfs/etc/default/grub >/dev/null
            [[ $? -eq 0 ]] && echo "grub already has BPF enabled" && sleep infinity
            # Write a script onto the host that appends bpf to the LSM list in
            # GRUB, regenerates the GRUB config, and reboots the node.
            cat <<EOF >/rootfs/updater.sh
            #!/bin/bash
            lsmlist=\$(cat /sys/kernel/security/lsm)
            echo "current lsmlist=\$lsmlist"
            sed -i "s/^GRUB_CMDLINE_LINUX=.*$/GRUB_CMDLINE_LINUX=\"lsm=\$lsmlist,bpf\"/g" /etc/default/grub
            command -v grub2-mkconfig >/dev/null 2>&1 && grub2-mkconfig -o /boot/grub2.cfg
            command -v grub-mkconfig >/dev/null 2>&1 && grub-mkconfig -o /boot/grub.cfg
            command -v aa-status >/dev/null 2>&1 || yum install apparmor-utils -y
            command -v update-grub >/dev/null 2>&1 && update-grub
            command -v update-grub2 >/dev/null 2>&1 && update-grub2
            reboot
            EOF
            cat /rootfs/updater.sh
            chmod +x /rootfs/updater.sh
            chroot /rootfs/ /bin/bash /updater.sh
          image: debian
          command:
            - "bash"
            - "-c"
          imagePullPolicy: Always
          name: updater
          resources: {}
          securityContext:
            privileged: true
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /rootfs
            mountPropagation: HostToContainer
            name: rootfs
            readOnly: false
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true
        hostPID: true
        nodeSelector:
          kubernetes.io/os: linux
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        terminationGracePeriodSeconds: 30
        tolerations:
        - operator: Exists
        volumes:
        - hostPath:
            path: /
            type: DirectoryOrCreate
          name: rootfs
    updateStrategy:
      rollingUpdate:
        maxSurge: 0
        maxUnavailable: 1
      type: RollingUpdate
kind: List
metadata:
  resourceVersion: ""
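
Once the nodes have rebooted, you can verify that BPF made it into the active LSM list (the node name below is a placeholder):

```bash
# Open a debug shell on the node; the host filesystem is mounted at /host.
# "bpf" should appear in the printed LSM list.
kubectl debug node/<your-node-name> -it --image=busybox -- cat /host/sys/kernel/security/lsm
```

Alternatively, `karmor probe` reports the active enforcer on each node.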
  1. Change replicas: 2 for the front-end deployment in the sock-shop demo, as shown below.

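One way to do this in place (the deployment name front-end matches the upstream sock-shop manifest; adjust if your copy differs):

```bash
# Scale the front-end deployment to two replicas and confirm the rollout.
kubectl -n sock-shop scale deployment front-end --replicas=2
kubectl -n sock-shop get deployment front-end
```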

  2. Apply labels to both the nodes, e.g. `kubectl label nodes <your-node-name> nodetype=node1`.

  3. Use the following YAML to deploy httpd to the cluster (see the scheduling check after the manifest):

NOTE: Make sure the pod is scheduled on a node where the front-end pods are NOT running; the load generator must sit on a different node from the front-end service.

apiVersion: v1
kind: Pod
metadata:
  name: httpd
  labels:
    env: prod
spec:
  containers:
  - name: httpd
    image: httpd
    imagePullPolicy: IfNotPresent
  nodeSelector:
    nodetype: node1
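
To double-check the scheduling constraint from the note above:

```bash
# See which node(s) the front-end pods landed on ...
kubectl get pods -n sock-shop -o wide | grep front-end
# ... and confirm httpd is running on a different node.
kubectl get pod httpd -o wide
```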

Running the benchmarks

This is the table we need:

| Scenario     | Requests | Concurrent Requests | KubeArmor CPU (m) | KubeArmor Memory (Mi) | Throughput (req/s) | Average time per req. (ms) | # Failed requests | Micro-service CPU (m) | Micro-service Memory (Mi) |
|--------------|----------|---------------------|-------------------|-----------------------|--------------------|----------------------------|-------------------|-----------------------|---------------------------|
| no kubearmor | 50000    | 5000                | -                 | -                     | 2205.502           | 0.4534                     | 0                 | 401.1                 | 287.3333333               |

Collect at least 10 runs per scenario, and include the average across all of them.

First, get the service IP of the front-end service using the `kubectl get svc` command.
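
For example, to print just the ClusterIP (service and namespace names match the sock-shop demo):

```bash
# Print only the ClusterIP of the front-end service.
kubectl get svc front-end -n sock-shop -o jsonpath='{.spec.clusterIP}'
```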

I have written two scripts that semi-automate the process.

  • ApacheBench.sh, which starts the benchmark and prints only the important parts (it does not save to a CSV file yet; see the sketch after the script):
#!/bin/bash

apache() {
  # Pod running the httpd image, which ships with Apache Bench (ab)
  pod_name="httpd"

  # ab needs a full URL, e.g. http://<service-ip>/ (note the trailing slash).
  # tee both displays the output and saves it for parsing below.
  kubectl exec -it "$pod_name" -- bash -c "ab -r -c 5000 -n 50000 {K8s Service IP}" | tee ab_output.txt

  failed_requests=$(grep "Failed requests" ab_output.txt | awk '{print $3}')
  requests_per_second=$(grep "Requests per second" ab_output.txt | awk '{print $4}')
  # "Time per request" appears twice in ab's output; the second line is the
  # mean across all concurrent requests.
  time_per_request=$(grep "Time per request" ab_output.txt | awk 'NR==2{print $4}')

  echo "Requests per second: $requests_per_second"
  echo "Time per request: $time_per_request"
  echo "Failed requests: $failed_requests"
}

apache
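
If you want each run appended to a CSV for the table above, a minimal addition at the end of apache() could be (ab_results.csv is an assumed file name):

```bash
# Hypothetical addition: one CSV row per run (throughput, latency, failures).
echo "$requests_per_second,$time_per_request,$failed_requests" >> ab_results.csv
```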
  • While this is running, concurrently run this second script to record the average resource usage of both front-end pods on the node (a one-liner for extracting the peak follows the script):
#!/bin/bash

output_file="mic.csv"

# Print "<pod>,<cpu>,<memory>" for a single pod from kubectl top.
get_pod_stats() {
  pod_name="$1"
  data=$(kubectl top pod -n sock-shop "$pod_name" | tail -n 1 | tr -s " " | cut -d " " --output-delimiter "," -f2,3)
  echo "$pod_name,$data"
}

# Unused for now
get_highest_cpu_row() {
  sort -t, -k1 -n -r "$output_file" | head -n 1
}

# Continuously append the averaged live usage of the two front-end pods.
microservices_metrics() {
  while true; do
    data1=$(get_pod_stats "front-end-pod-1")
    data2=$(get_pod_stats "front-end-pod-2")

    # Strip the "m" (millicores) and "Mi" (mebibytes) unit suffixes.
    cpu1=$(echo "$data1" | cut -d ',' -f2 | sed 's/m//')
    memory1=$(echo "$data1" | cut -d ',' -f3 | sed 's/Mi//')
    cpu2=$(echo "$data2" | cut -d ',' -f2 | sed 's/m//')
    memory2=$(echo "$data2" | cut -d ',' -f3 | sed 's/Mi//')

    # Average CPU and memory usage across the two replicas
    average_cpu=$(( (cpu1 + cpu2) / 2 ))
    average_memory=$(( (memory1 + memory2) / 2 ))

    echo "$average_cpu,$average_memory" >> "$output_file"

    sleep 1
  done
}

microservices_metrics

Keep an eye on this output to catch the peak usage while the benchmark runs, and replace front-end-pod-1 and front-end-pod-2 with your actual front-end pod names.
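
Rather than watching for the spike by hand, the otherwise-unused get_highest_cpu_row function above already captures the idea; after the run you can pull the peak row directly:

```bash
# Highest average-CPU row recorded in mic.csv (columns: cpu,memory).
sort -t, -k1 -n -r mic.csv | head -n 1
```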

From these two scripts you'll get all the data for the table above EXCEPT the KubeArmor usage numbers. To capture KubeArmor's CPU and memory, run this command concurrently as well (a post-processing sketch follows it):

watch -n 1 -d 'echo "`date +%H:%M:%S`,`kubectl top pods -n kubearmor --sort-by=memory -l kubearmor-app=kubearmor | tail -n 1 | tr -s " " | cut -d " " --output-delimiter "," -f2,3`" | tee -a perf.csv'
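
Each perf.csv row looks like HH:MM:SS,<cpu>m,<memory>Mi. A quick way to pull the peak-CPU sample (stripping the unit suffixes first) might be:

```bash
# Drop the m/Mi unit suffixes, then sort numerically by the CPU column.
sed 's/m,/,/; s/Mi$//' perf.csv | sort -t, -k2 -n -r | head -n 1
```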
