$ ./autotune_minikube_demo_setup.sh
W0229 12:23:29.645049   26212 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\muppana\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

#######################################
#         Autotune Demo Setup         #
#######################################

#######################################
1. Cloning autotune git repos
done
#######################################

#######################################
2. Deleting minikube cluster, if any
W0229 12:23:34.119956   13652 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\muppana\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Deleting "minikube" in docker ...
* Deleting container "minikube" ...
* Removing C:\Users\muppana\.minikube\machines\minikube ...
* Removed all traces of the "minikube" cluster.
3. Starting new minikube cluster
W0229 12:23:49.300670    1872 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\muppana\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* minikube v1.32.0 on Microsoft Windows 11 Enterprise 10.0.22621.3155 Build 22621.3155
* Automatically selected the docker driver
* Using Docker Desktop driver with root privileges
* Starting control plane node minikube in cluster minikube
* Pulling base image ...
* Creating docker container (CPUs=8, Memory=7128MB) ...
* Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
  - Generating certificates and keys ...
  - Booting up control plane ...
  - Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
Waiting for cluster to be up...done
#######################################
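Note: the repeated "Unable to resolve the current Docker CLI context" warnings come from the Docker CLI configuration on this Windows host; the setup proceeds despite them. If you want to clear them, resetting the CLI context is a commonly suggested fix (a sketch; verify the available contexts on your own machine first):

$ docker context ls            # list known contexts; a stale "default" entry triggers the warning
$ docker context use default   # re-select the built-in default context

If the warning persists, restarting Docker Desktop so it recreates its context metadata has been reported to help.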
#######################################
4. Installing Prometheus and Grafana
Info: installing prometheus...
Info: Checking pre requisites for prometheus...
No resources found in monitoring namespace.
Info: Downloading cadvisor git
Info: Installing cadvisor
namespace/cadvisor created
serviceaccount/cadvisor created
daemonset.apps/cadvisor created
Info: Downloading prometheus git release - v0.8.0
Info: Installing prometheus
namespace/monitoring created
customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
service/prometheus-operator created
serviceaccount/prometheus-operator created
alertmanager.monitoring.coreos.com/main created
poddisruptionbudget.policy/alertmanager-main created
prometheusrule.monitoring.coreos.com/alertmanager-main-rules created
secret/alertmanager-main created
service/alertmanager-main created
serviceaccount/alertmanager-main created
servicemonitor.monitoring.coreos.com/alertmanager created
clusterrole.rbac.authorization.k8s.io/blackbox-exporter created
clusterrolebinding.rbac.authorization.k8s.io/blackbox-exporter created
configmap/blackbox-exporter-configuration created
deployment.apps/blackbox-exporter created
service/blackbox-exporter created
serviceaccount/blackbox-exporter created
servicemonitor.monitoring.coreos.com/blackbox-exporter created
secret/grafana-datasources created
configmap/grafana-dashboard-apiserver created
configmap/grafana-dashboard-cluster-total created
configmap/grafana-dashboard-controller-manager created
configmap/grafana-dashboard-k8s-resources-cluster created
configmap/grafana-dashboard-k8s-resources-namespace created
configmap/grafana-dashboard-k8s-resources-node created
configmap/grafana-dashboard-k8s-resources-pod created
configmap/grafana-dashboard-k8s-resources-workload created
configmap/grafana-dashboard-k8s-resources-workloads-namespace created
configmap/grafana-dashboard-kubelet created
configmap/grafana-dashboard-namespace-by-pod created
configmap/grafana-dashboard-namespace-by-workload created
configmap/grafana-dashboard-node-cluster-rsrc-use created
configmap/grafana-dashboard-node-rsrc-use created
configmap/grafana-dashboard-nodes created
configmap/grafana-dashboard-persistentvolumesusage created
configmap/grafana-dashboard-pod-total created
configmap/grafana-dashboard-prometheus-remote-write created
configmap/grafana-dashboard-prometheus created
configmap/grafana-dashboard-proxy created
configmap/grafana-dashboard-scheduler created
configmap/grafana-dashboard-statefulset created
configmap/grafana-dashboard-workload-total created
configmap/grafana-dashboards created
Warning: spec.template.spec.nodeSelector[beta.kubernetes.io/os]: deprecated since v1.14; use "kubernetes.io/os" instead
deployment.apps/grafana created
service/grafana created
serviceaccount/grafana created
servicemonitor.monitoring.coreos.com/grafana created
prometheusrule.monitoring.coreos.com/kube-prometheus-rules created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
prometheusrule.monitoring.coreos.com/kube-state-metrics-rules created
service/kube-state-metrics created
serviceaccount/kube-state-metrics created
servicemonitor.monitoring.coreos.com/kube-state-metrics created
prometheusrule.monitoring.coreos.com/kubernetes-monitoring-rules created
servicemonitor.monitoring.coreos.com/kube-apiserver created
servicemonitor.monitoring.coreos.com/coredns created
servicemonitor.monitoring.coreos.com/kube-controller-manager created
servicemonitor.monitoring.coreos.com/kube-scheduler created
servicemonitor.monitoring.coreos.com/kubelet created
clusterrole.rbac.authorization.k8s.io/node-exporter created
clusterrolebinding.rbac.authorization.k8s.io/node-exporter created
daemonset.apps/node-exporter created
prometheusrule.monitoring.coreos.com/node-exporter-rules created
service/node-exporter created
serviceaccount/node-exporter created
servicemonitor.monitoring.coreos.com/node-exporter created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
clusterrole.rbac.authorization.k8s.io/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter created
clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator created
clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources created
configmap/adapter-config created
deployment.apps/prometheus-adapter created
rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader created
service/prometheus-adapter created
serviceaccount/prometheus-adapter created
servicemonitor.monitoring.coreos.com/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/prometheus-k8s created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s created
prometheusrule.monitoring.coreos.com/prometheus-operator-rules created
servicemonitor.monitoring.coreos.com/prometheus-operator created
poddisruptionbudget.policy/prometheus-k8s created
prometheus.monitoring.coreos.com/k8s created
prometheusrule.monitoring.coreos.com/prometheus-k8s-prometheus-rules created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s-config created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
service/prometheus-k8s created
serviceaccount/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus-k8s created
Info: Waiting for all Prometheus Pods to get spawned.......done
Info: Waiting for prometheus-k8s-1 to come up.....
prometheus-k8s-1   0/2   ContainerCreating   0             3s
prometheus-k8s-1   0/2   ContainerCreating   0             8s
prometheus-k8s-1   0/2   ContainerCreating   0             13s
[... identical ContainerCreating status lines elided, polled every ~5 s up to 3m43s ...]
prometheus-k8s-1   2/2   Running             1 (62s ago)   3m48s
Info: prometheus-k8s-1 deploy succeeded: Running
prometheus-k8s-1   2/2   Running             1 (62s ago)   3m48s
Waiting 30 seconds for Prometheus to get initialized...done
#######################################
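With the monitoring stack up, the Prometheus and Grafana UIs can be checked from the host before continuing. A minimal sketch, assuming kubectl still points at the new minikube cluster and the stack's default service ports (9090 for Prometheus, 3000 for Grafana); the service names are the ones created in the install log above:

$ kubectl -n monitoring port-forward svc/prometheus-k8s 9090:9090   # Prometheus UI at http://localhost:9090
$ kubectl -n monitoring port-forward svc/grafana 3000:3000          # Grafana UI at http://localhost:3000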
#######################################
5. Installing TechEmpower (Quarkus REST EASY) benchmark into cluster
deployment.apps/tfb-database created
service/tfb-database created
deployment.apps/tfb-qrh-sample created
service/tfb-qrh-service created
servicemonitor.monitoring.coreos.com/tfb-qrh created
#######################################

#######################################
6. Installing Autotune
Already on 'mvp_demo'
M       manifests/crc/BYODB-installation/minikube/kruize-crc-minikube.yaml
M       manifests/crc/default-db-included-installation/openshift/kruize-crc-openshift.yaml
Your branch is up to date with 'origin/mvp_demo'.
### Removing autotune for minikube
Removing Performance Profile
Removing autotune
Removing autotune service account
Removing autotune role
Removing autotune rolebinding
Removing autotune serviceMonitor
Removing AutotuneConfig objects
Removing AutotuneQueryVariable objects
Removing Autotune configmap
Removing Autotune CRD
Removing AutotuneConfig CRD
Removing AutotuneQueryVariables CRD
Starting install with ./deploy.sh -c minikube -i docker.io/kruize/autotune_operator:0.0.20.2_mvp
### Installing autotune for minikube
Info: Checking pre requisites for minikube...
Prometheus is installed and running.
Create autotune namespace monitoring
Error from server (AlreadyExists): namespaces "monitoring" already exists
Info: One time setup - Create a service account to deploy autotune
serviceaccount/autotune-sa created
customresourcedefinition.apiextensions.k8s.io/autotunes.recommender.com created
customresourcedefinition.apiextensions.k8s.io/autotuneconfigs.recommender.com created
customresourcedefinition.apiextensions.k8s.io/autotunequeryvariables.recommender.com created
customresourcedefinition.apiextensions.k8s.io/kruizeperformanceprofiles.recommender.com created
clusterrole.rbac.authorization.k8s.io/autotune-cr created
clusterrolebinding.rbac.authorization.k8s.io/autotune-crb created
clusterrolebinding.rbac.authorization.k8s.io/autotune-prometheus-crb created
clusterrolebinding.rbac.authorization.k8s.io/autotune-docker-crb created
clusterrolebinding.rbac.authorization.k8s.io/autotune-scc-crb created
autotunequeryvariable.recommender.com/minikube created
servicemonitor.monitoring.coreos.com/autotune created
prometheus.monitoring.coreos.com/prometheus created
Creating environment variable in minikube cluster using configMap
configmap/autotune-config created
Deploying AutotuneConfig objects
kruizelayer.recommender.com/container created
kruizelayer.recommender.com/hotspot created
kruizelayer.recommender.com/openj9 created
kruizelayer.recommender.com/quarkus created
Deploying Performance Profile objects
kruizeperformanceprofile.recommender.com/resource-optimization-openshift created
Info: Deploying autotune yaml to minikube cluster
deployment.apps/autotune created
service/autotune created
Info: Waiting for autotune to come up.....
autotune-f44b6c8d4-nf555   0/2   ContainerCreating   0   5s
autotune-f44b6c8d4-nf555   0/2   ContainerCreating   0   10s
autotune-f44b6c8d4-nf555   0/2   ContainerCreating   0   16s
[... identical ContainerCreating status lines elided, polled every ~5 s up to 2m9s ...]
autotune-f44b6c8d4-nf555   2/2   Running             0   2m15s
Info: autotune deploy succeeded: Running
autotune-f44b6c8d4-nf555   2/2   Running             0   2m16s
W0229 12:32:47.198351   28756 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\muppana\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
Info: Access Autotune at http://192.168.49.2:32668/listKruizeTunables
Waiting 30 seconds for Autotune to sync with Prometheus...done
#######################################
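Before moving on, the Autotune REST API can be smoke-tested at the URL the installer printed. A minimal sketch; the minikube IP (192.168.49.2) and NodePort (32668) are taken from this run and will differ on other machines:

$ curl http://192.168.49.2:32668/listKruizeTunables   # expect a JSON response once Autotune has synced with Prometheus
$ kubectl -n monitoring get pods                       # the autotune pod should show 2/2 Running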
#######################################
7. Installing Autotune Object for techempower app
autotune.recommender.com/quarkus-resteasy-autotune-min-http-response-time-db created
#######################################

NAME                                   READY   STATUS    RESTARTS        AGE
alertmanager-main-0                    2/2     Running   0               7m36s
alertmanager-main-1                    2/2     Running   0               7m36s
alertmanager-main-2                    2/2     Running   0               7m36s
autotune-f44b6c8d4-nf555               1/2     Error     2 (29s ago)     2m51s
blackbox-exporter-57b4fdcf9f-45bwq     3/3     Running   0               8m1s
grafana-577c488c75-9wrms               1/1     Running   0               8m
kube-state-metrics-8658bf8fb5-pmpnn    3/3     Running   0               7m59s
node-exporter-fftnx                    2/2     Running   0               7m59s
prometheus-adapter-5795d646d6-4bphp    1/1     Running   0               7m58s
prometheus-adapter-5795d646d6-64s27    1/1     Running   0               7m58s
prometheus-k8s-0                       2/2     Running   1 (4m52s ago)   7m35s
prometheus-k8s-1                       2/2     Running   1 (4m49s ago)   7m35s
prometheus-operator-68dc896d5d-49nds   2/2     Running   0               8m6s
W0229 12:33:22.213242   28860 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\muppana\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

#######################################
#             Quarkus App             #
#######################################
Info: Access techempower app at http://192.168.49.2:31623/db
Info: Access techempower app metrics at http://192.168.49.2:31623/q/metrics

#######################################
#              Autotune               #
#######################################
Info: Access Autotune tunables at http://192.168.49.2:32668/listAutotuneTunables
###### The following links are meaningful only after an autotune object is deployed ######
Info: Autotune is monitoring these apps http://192.168.49.2:32668/listStacks
Info: List Layers in apps that Autotune is monitoring http://192.168.49.2:32668/listStackLayers
Info: List Tunables in apps that Autotune is monitoring http://192.168.49.2:32668/listStackTunables
Info: Autotune searchSpace at http://192.168.49.2:32668/searchSpace
Info: Autotune Experiments at http://192.168.49.2:32668/listExperiments
Info: Autotune Experiments Summary at http://192.168.49.2:32668/experimentsSummary
Info: Autotune Trials Status at http://192.168.49.2:32668/listTrialStatus
Info: Autotune Trials Status at http://192.168.49.2:32668/listTrialStatus?experiment_name=quarkus-resteasy-autotune-min-http-response-time-db&trial_number=0&verbose=true
Info: List Layers in autotune http://192.168.49.2:32668/query/listStackLayers?deployment_name=autotune&namespace=monitoring
Info: List Layers in tfb http://192.168.49.2:32668/query/listStackLayers?deployment_name=tfb-qrh-sample&namespace=default
Info: Access autotune objects using: kubectl -n default get autotune
Info: Access autotune tunables using: kubectl -n monitoring get autotuneconfig
#######################################
Success! Autotune demo setup took 594 seconds
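The printed endpoints make for a quick end-to-end check of the finished demo. A minimal sketch using the addresses from this run (the minikube IP and NodePorts will differ on other machines):

$ curl http://192.168.49.2:31623/db                    # TechEmpower endpoint backed by the tfb-database service
$ curl http://192.168.49.2:32668/listStackLayers       # layers Autotune detected in the monitored apps
$ kubectl -n default get autotune                      # the autotune object created in step 7
$ kubectl -n monitoring get autotuneconfig             # the deployed AutotuneConfig (layer) objects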