❯ python3 reproduce_bugs.py -c rabbitmq-operator -b intermediate-state-1
python3 sieve.py -c examples/rabbitmq-operator/ -m test -p bug_reproduction_test_plans/rabbitmq-operator-intermediate-state-1.yaml -r ghcr.io/sieve-project/action
Running Sieve with mode: test...
Get test workload resize-pvc from test plan
Sieve result dir: sieve_test_results/rabbitmq-operator/resize-pvc/test/rabbitmq-operator-intermediate-state-1
Test plan: sieve_test_results/rabbitmq-operator/resize-pvc/test/rabbitmq-operator-intermediate-state-1/rabbitmq-operator-intermediate-state-1.yaml
/Users/jshajigeorge/work/sieve/sieve.py:688: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  test_plan_content = yaml.load(open(test_context.original_test_plan))
Deleting cluster "kind" ...
Trying to create kind cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (ghcr.io/sieve-project/action/node:v1.18.9-test) đŸ–ŧ
 ✓ Preparing nodes đŸ“Ļ
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹ī¸
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
Waiting for apiservers to be ready...
Setting up Sieve server...
[OK] Sieve server set up
configmap/sieve-testing-global-config created
Loading image ghcr.io/sieve-project/action/rabbitmq-operator:test to kind nodes...
Image: "" with ID "sha256:fbd77eea27f31a596a067e80796b6210fdccfa5caca87488b2d812e0bb569d2c" not yet present on node "kind-control-plane", loading...
Deploying controller...
Installing csi provisioner...
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
statefulset.apps/snapshot-controller created
No resources found
No resources found in default namespace.
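The YAMLLoadWarning near the top is PyYAML's deprecation notice for calling yaml.load() without an explicit Loader. A minimal sketch of the fix, assuming the test plan is plain data (mappings, lists, scalars) with no Python-object tags:

```python
import yaml

# Path taken from the command line in the log above.
plan_path = "bug_reproduction_test_plans/rabbitmq-operator-intermediate-state-1.yaml"
with open(plan_path) as f:
    # Explicit SafeLoader; equivalent to yaml.load(f, Loader=yaml.SafeLoader)
    # and silences the YAMLLoadWarning.
    test_plan_content = yaml.safe_load(f)
```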
No resources found
applying RBAC rules
curl https://raw.githubusercontent.com/kubernetes-csi/external-provisioner/v2.2.1/deploy/kubernetes/rbac.yaml --output /var/folders/ct/5v14ypxs5qg0gs40zq2ml74w0000gp/T/tmp.xHJqeTsf/rbac.yaml --silent --location
kubectl apply --kustomize /var/folders/ct/5v14ypxs5qg0gs40zq2ml74w0000gp/T/tmp.xHJqeTsf
serviceaccount/csi-provisioner created
role.rbac.authorization.k8s.io/external-provisioner-cfg created
clusterrole.rbac.authorization.k8s.io/external-provisioner-runner created
rolebinding.rbac.authorization.k8s.io/csi-provisioner-role-cfg created
clusterrolebinding.rbac.authorization.k8s.io/csi-provisioner-role created
curl https://raw.githubusercontent.com/kubernetes-csi/external-attacher/v3.2.1/deploy/kubernetes/rbac.yaml --output /var/folders/ct/5v14ypxs5qg0gs40zq2ml74w0000gp/T/tmp.xHJqeTsf/rbac.yaml --silent --location
kubectl apply --kustomize /var/folders/ct/5v14ypxs5qg0gs40zq2ml74w0000gp/T/tmp.xHJqeTsf
serviceaccount/csi-attacher created
role.rbac.authorization.k8s.io/external-attacher-cfg created
clusterrole.rbac.authorization.k8s.io/external-attacher-runner created
rolebinding.rbac.authorization.k8s.io/csi-attacher-role-cfg created
clusterrolebinding.rbac.authorization.k8s.io/csi-attacher-role created
curl https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v4.1.1/deploy/kubernetes/csi-snapshotter/rbac-csi-snapshotter.yaml --output /var/folders/ct/5v14ypxs5qg0gs40zq2ml74w0000gp/T/tmp.xHJqeTsf/rbac.yaml --silent --location
kubectl apply --kustomize /var/folders/ct/5v14ypxs5qg0gs40zq2ml74w0000gp/T/tmp.xHJqeTsf
serviceaccount/csi-snapshotter created
role.rbac.authorization.k8s.io/external-snapshotter-leaderelection created
clusterrole.rbac.authorization.k8s.io/external-snapshotter-runner created
rolebinding.rbac.authorization.k8s.io/external-snapshotter-leaderelection created
clusterrolebinding.rbac.authorization.k8s.io/csi-snapshotter-role created
curl https://raw.githubusercontent.com/kubernetes-csi/external-resizer/v1.2.0/deploy/kubernetes/rbac.yaml --output /var/folders/ct/5v14ypxs5qg0gs40zq2ml74w0000gp/T/tmp.xHJqeTsf/rbac.yaml --silent --location
kubectl apply --kustomize /var/folders/ct/5v14ypxs5qg0gs40zq2ml74w0000gp/T/tmp.xHJqeTsf
serviceaccount/csi-resizer created
role.rbac.authorization.k8s.io/external-resizer-cfg created
clusterrole.rbac.authorization.k8s.io/external-resizer-runner created
rolebinding.rbac.authorization.k8s.io/csi-resizer-role-cfg created
clusterrolebinding.rbac.authorization.k8s.io/csi-resizer-role created
curl https://raw.githubusercontent.com/kubernetes-csi/external-health-monitor/v0.3.0/deploy/kubernetes/external-health-monitor-controller/rbac.yaml --output /var/folders/ct/5v14ypxs5qg0gs40zq2ml74w0000gp/T/tmp.xHJqeTsf/rbac.yaml --silent --location
kubectl apply --kustomize /var/folders/ct/5v14ypxs5qg0gs40zq2ml74w0000gp/T/tmp.xHJqeTsf
serviceaccount/csi-external-health-monitor-controller created
role.rbac.authorization.k8s.io/external-health-monitor-controller-cfg created
clusterrole.rbac.authorization.k8s.io/external-health-monitor-controller-runner created
rolebinding.rbac.authorization.k8s.io/csi-external-health-monitor-controller-role-cfg created
clusterrolebinding.rbac.authorization.k8s.io/csi-external-health-monitor-controller-role created
deploying hostpath components
/Users/jshajigeorge/work/sieve/sieve_aux/csi-driver/deploy/kubernetes-latest/hostpath/csi-hostpath-driverinfo.yaml
csidriver.storage.k8s.io/hostpath.csi.k8s.io created
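Each of the five CSI sidecar RBAC installs above follows the same two-step pattern: curl the version-pinned rbac.yaml into a temp directory, then kubectl apply --kustomize that directory. A hypothetical Python sketch of the loop (URLs copied from the log; writing an explicit kustomization.yaml into the temp directory is an assumption, since kubectl apply --kustomize requires one to be present):

```python
import pathlib
import subprocess
import tempfile

# Pinned manifests, copied verbatim from the log above.
RBAC_MANIFESTS = [
    "https://raw.githubusercontent.com/kubernetes-csi/external-provisioner/v2.2.1/deploy/kubernetes/rbac.yaml",
    "https://raw.githubusercontent.com/kubernetes-csi/external-attacher/v3.2.1/deploy/kubernetes/rbac.yaml",
    "https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v4.1.1/deploy/kubernetes/csi-snapshotter/rbac-csi-snapshotter.yaml",
    "https://raw.githubusercontent.com/kubernetes-csi/external-resizer/v1.2.0/deploy/kubernetes/rbac.yaml",
    "https://raw.githubusercontent.com/kubernetes-csi/external-health-monitor/v0.3.0/deploy/kubernetes/external-health-monitor-controller/rbac.yaml",
]

for url in RBAC_MANIFESTS:
    with tempfile.TemporaryDirectory() as tmp:
        # Download the manifest, then point a minimal kustomization at it.
        subprocess.run(
            ["curl", url, "--output", f"{tmp}/rbac.yaml", "--silent", "--location"],
            check=True,
        )
        pathlib.Path(tmp, "kustomization.yaml").write_text("resources:\n- rbac.yaml\n")
        subprocess.run(["kubectl", "apply", "--kustomize", tmp], check=True)
```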
/Users/jshajigeorge/work/sieve/sieve_aux/csi-driver/deploy/kubernetes-latest/hostpath/csi-hostpath-plugin.yaml
        using image: k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1
        using image: k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.3.0
        using image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0
        using image: k8s.gcr.io/sig-storage/livenessprobe:v2.4.0
        using image: k8s.gcr.io/sig-storage/csi-attacher:v3.2.1
        using image: k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1
        using image: k8s.gcr.io/sig-storage/csi-resizer:v1.2.0
        using image: k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1
serviceaccount/csi-hostpathplugin-sa created
clusterrolebinding.rbac.authorization.k8s.io/csi-hostpathplugin-attacher-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-hostpathplugin-health-monitor-controller-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-hostpathplugin-provisioner-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-hostpathplugin-resizer-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-hostpathplugin-snapshotter-cluster-role created
rolebinding.rbac.authorization.k8s.io/csi-hostpathplugin-attacher-role created
rolebinding.rbac.authorization.k8s.io/csi-hostpathplugin-health-monitor-controller-role created
rolebinding.rbac.authorization.k8s.io/csi-hostpathplugin-provisioner-role created
rolebinding.rbac.authorization.k8s.io/csi-hostpathplugin-resizer-role created
rolebinding.rbac.authorization.k8s.io/csi-hostpathplugin-snapshotter-role created
statefulset.apps/csi-hostpathplugin created
/Users/jshajigeorge/work/sieve/sieve_aux/csi-driver/deploy/kubernetes-latest/hostpath/csi-hostpath-snapshotclass.yaml
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "STDIN": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
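This error means kubectl could not map kind VolumeSnapshotClass at version snapshot.storage.k8s.io/v1; the snapshot CRDs created at the top of the run likely serve only v1beta1 on this v1.18.9 cluster. One way to check, as a sketch (the printed result is an assumption about this cluster, not taken from the log):

```python
import subprocess

# Ask the API server which snapshot.storage.k8s.io versions are actually served.
out = subprocess.run(
    ["kubectl", "api-versions"], capture_output=True, text=True, check=True
)
print([v for v in out.stdout.splitlines() if v.startswith("snapshot.storage.k8s.io/")])
# Hypothetical output here: ['snapshot.storage.k8s.io/v1beta1']
```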
modified version of /Users/jshajigeorge/work/sieve/sieve_aux/csi-driver/deploy/kubernetes-latest/hostpath/csi-hostpath-snapshotclass.yaml:
# Usage of the v1 API implies that the cluster must have
# external-snapshotter v4.x installed.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-hostpath-snapclass
  labels:
    app.kubernetes.io/instance: hostpath.csi.k8s.io
    app.kubernetes.io/part-of: csi-driver-host-path
    app.kubernetes.io/name: csi-hostpath-snapclass
    app.kubernetes.io/component: volumesnapshotclass
driver: hostpath.csi.k8s.io #csi-hostpath
deletionPolicy: Delete
NAME                    READY   STATUS              RESTARTS   AGE
csi-hostpathplugin-0    0/8     ContainerCreating   0          1s
snapshot-controller-0   1/1     Running             0          14s
usage: sleep seconds
NAME                    READY   STATUS              RESTARTS   AGE
csi-hostpathplugin-0    0/8     ContainerCreating   0          1s
snapshot-controller-0   1/1     Running             0          14s
storageclass.storage.k8s.io/csi-hostpath-sc created
usage: sleep seconds
storageclass.storage.k8s.io/standard patched
storageclass.storage.k8s.io/csi-hostpath-sc patched
NAME                        PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
csi-hostpath-sc (default)   hostpath.csi.k8s.io     Delete          Immediate              true                   0s
standard                    rancher.io/local-path   Delete          WaitForFirstConsumer   false                  33s
+ kubectl apply -f cluster-operator.yaml
namespace/rabbitmq-system created
customresourcedefinition.apiextensions.k8s.io/rabbitmqclusters.rabbitmq.com created
serviceaccount/rabbitmq-cluster-operator created
role.rbac.authorization.k8s.io/rabbitmq-cluster-leader-election-role created
clusterrole.rbac.authorization.k8s.io/rabbitmq-cluster-operator-role created
rolebinding.rbac.authorization.k8s.io/rabbitmq-cluster-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/rabbitmq-cluster-operator-rolebinding created
deployment.apps/rabbitmq-operator created
Wait for the controller pod to be ready...
[OK] Controller deployed
Running test workload...
kubectl apply -f examples/rabbitmq-operator/test/rmqc-1.yaml
rabbitmqcluster.rabbitmq.com/rabbitmq-cluster created
2023-03-07 09:59:08.987848 wait until pod rabbitmq-cluster-server-0 becomes Running...
wait takes 80.619948 seconds
2023-03-07 10:00:29.608024 kubectl patch RabbitmqCluster rabbitmq-cluster --type merge -p='{"spec":{"persistence":{"storage":"15Gi"}}}'
rabbitmqcluster.rabbitmq.com/rabbitmq-cluster patched
2023-03-07 10:00:29.782679 wait until statefulset rabbitmq-cluster-server has storage size 15Gi...
wait takes 30.645445 seconds
2023-03-07 10:01:00.428316 wait for final grace period 80 seconds
Traceback (most recent call last):
  File "/Users/jshajigeorge/work/sieve/sieve.py", line 657, in run_test
    run_workload(test_context)
  File "/Users/jshajigeorge/work/sieve/sieve.py", line 550, in run_workload
    os.killpg(streaming.pid, signal.SIGTERM)
PermissionError: [Errno 1] Operation not permitted
Total time: 254.97307181358337 seconds
Please refer to sieve_test_results/rabbitmq-operator-resize-pvc-rabbitmq-operator-intermediate-state-1.yaml.json for more detailed information
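The resize itself completes (the statefulset reaches 15Gi), but the run then aborts in sieve.py's cleanup: on macOS, os.killpg raises PermissionError (Errno 1, EPERM) when the target process group can no longer be signalled. (The earlier `usage: sleep seconds` lines are a related macOS quirk: BSD sleep rejects GNU-style suffixed durations such as `80s`.) A hypothetical, more tolerant variant of the failing call in run_workload, assuming `streaming` is the subprocess.Popen handle for the log-streaming child, started in its own session so its pid doubles as the process-group id:

```python
import os
import signal
import subprocess

def terminate_streaming(streaming: subprocess.Popen) -> None:
    """Best-effort SIGTERM to the workload's process group, tolerating EPERM."""
    try:
        os.killpg(streaming.pid, signal.SIGTERM)  # the call that fails in the log
    except (PermissionError, ProcessLookupError):
        # Group already gone or not signalable on this platform; fall back to
        # terminating the direct child only instead of aborting the whole run.
        streaming.terminate()
```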